When we start talking about artificial intelligence, our first associations are with Machine Learning, Deep Learning, Neural Networks, Evolutionary Computation, Computer Vision, and Robotics. Following this flow sometimes seems so natural that a transdisciplinary, cybernetic approach appears to be a valid first choice to adopt.
But if a colleague or a geeky, curious university peer asked you, right now, what AI is all about, well, your first thought might be: hey, wait a second, what is the clearest way to explain what AI means?
Although I am pretty keen to dig deep into cognitive science and robotics, I noticed the messy bits of content that were missing from my reasoning! No worries, they are the same ones I am not expecting to fully clarify with this post, even though it might be a start!
OK, now let's begin our mental ordering journey by describing what is usually associated with that broad, stunning, wise word: Intelligence.
Human experience is specialized in solving problems. We do that by drawing on intelligence, consciously or not, and on the significant infrastructure of our peers, family, and institutions, which combine libraries and easy-to-use bits of knowledge that fit the networks we like to refer to.
Why? Well, some of the reasons are fascinating: personalized suggestions, content, and experiences all play a role in how we take our next step of planning and acting to solve our problems.
On another meridian, we can consider that science is also an institution, driven by a plethora of tested, intelligent algorithms and data structures built to offer us what we need to rock the world. Right?
Science is an externalized, self-improving system. It is intelligent because you can refer to it, but it is not strictly hierarchical: in real life, the various subjects do not have strong boundaries between them. Science is connected with technology, as we need stronger and faster computers to help us afford our daily number-crunching effort. Even if some people defend a reductionist view, the system always wins when forced into this competition.
No system exists in isolation: just look at how science solves problems through the progression of computation, interdisciplinary research, and human-driven applications. Its outputs, determined by the resources it consumes in terms of people, national research funds, and private investment, generate intelligence.
There is a big difference between mimicking knowledge and creating a dense mapping of it.
Now, we also have some standards that help us measure the timeline of scientific discoveries: the ones rated by experts in physics, math, biology, engineering, and so on, who define a field and determine which impacts are worth considering by current and future generations of eager enthusiasts.
That is the picture we see in science: resources and geniuses have grown exponentially, that's very much true. But the output, in terms of scientific discoveries, is recursive and consumes ever more resources and technology. Just think about that.
So, what should we do? Study less, and save time and thought to dedicate ourselves to better, more relaxed and rewarding activities aligned with our personalities?
Maybe that sounds like a suggestion for the many deeply dedicated researchers on projects synchronized all over the world, who, with varying levels of awareness, are busy ingesting huge amounts of knowledge while ignoring the permanent hole in their stomachs. The geek and nerd identity should not be discarded.
I know, you know; we know.
Maybe some more sleep, and parkour to avoid the whirlwinds of paradoxes, are two elements able to surprise a lot of us…
Intelligence is multidimensional, and dividing it into categories is just a scaffolding exercise that helps us question its plastic features without choking on its complex process flow.
The power of the AI Lego set is charming: it creates flexible deep learning processes (able to map and find correlations between inputs and outputs) that make diverse people so excited on their journeys, writing everything from scratch. Research and integration on more or less usability, connectivity, and flexibility can be achieved through software loops, with tons of engineering and user-interface design considerations to be made, going through high- and low-level coding integrations alike. Simplicity is key here: constraints must be respected when hierarchical architectures of the concepts defining a problem are considered.
So what is the next step for AI?
We have exciting opportunities to consider, looking at the data we have and at what we want to optimize. AI is not a simple combination: the point is how our deep learning models and the inputs we choose can build impressive, organized things together. Experience is key to ordering the points that define a "betweenness" network, with algorithms that are not abstract but easy to apply.
Today, successful AI systems – like the ones used in robotics – are hybrids: they use deep learning algorithms on neural networks that act as perception models, running data through several layers. So, how does that work?
Well, the early layers of a deep network learn to identify low-level features like edges; then, the idea is that later layers combine the information from those first layers into a more holistic representation. For example, a middle layer might identify boundaries that mark part of an object or a person, such as an eye or a branch, while a deep layer will recognize a face or a tree. Here, the paradigm that the senior has the skills, experience, and knowledge is effectively correct.
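To make the "early layers detect edges" idea concrete, here is a minimal sketch of what such a low-level feature detector does. It is not the author's system, just a plain NumPy convolution of a toy image with a Sobel-like kernel; the image and kernel are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a kernel over a grayscale image (valid cross-correlation)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 image: bright left half, dark right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, :3] = 1.0

# A Sobel-like kernel that responds strongly to vertical edges.
vertical_edge = np.array([[1, 0, -1],
                          [2, 0, -2],
                          [1, 0, -1]], dtype=float)

response = conv2d(image, vertical_edge)
print(response)  # strongest values sit exactly where the edge is
```

In a trained network, kernels like this are not hand-written: the first layers learn them from data, and later layers combine their responses into parts and whole objects.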
Unstructured data is what deep learning helps us organize and learn from; the only requirement to be respected is feeding our model vast datasets. Robotics draws on this technology through several cutting-edge tools that make deep learning manageable, fitting models that give the best answer to the problem at hand. A promise that could forever change how we think about productivity that matters.
To let our neural network identify the parts composing a tree, we should feed it multiple images of branches, diverse in shape and size, allowing the system to learn whether they can be correctly catalogued according to their nature, while other forms, even though VERY similar, do not fit the category.
In other words, our task is to train the network to correctly classify heretofore unseen images. For this reason, deep learning applied to neural networks, if accurate and based on abundant and flexible experience, can let a piece of software "relate" to the external world.
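The "learn from labeled examples, then generalize" loop described above can be sketched with a deliberately tiny stand-in: instead of branch images, two clusters of made-up 2-D feature vectors (the features, clusters, and learning rate are all illustrative assumptions), classified with logistic regression trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for image data: 2-D feature vectors
# (say, "elongation" and "curvature") instead of raw pixels.
# Class 1 ("branch") clusters near (2, 2); class 0 near (-2, -2).
X = np.vstack([rng.normal(2.0, 1.0, (100, 2)),
               rng.normal(-2.0, 1.0, (100, 2))])
y = np.concatenate([np.ones(100), np.zeros(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(2), 0.0, 0.1

# Each pass nudges the decision boundary toward separating the classes.
for _ in range(200):
    p = sigmoid(X @ w + b)            # predicted probability of "branch"
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

A real vision model replaces the two hand-picked features with millions of learned parameters, but the training principle, adjusting weights to reduce error on labeled examples, is the same.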
Accomplishing the same task in a conventional program of yesteryear would involve writing screeds of code attempting to precisely define the image of a beech tree as distinct from an ash, a sycamore, and so on.
This is one of the many reasons why we are now witnessing an acceleration in AI-driven robotics across all sorts of collateral industries: robots performing autonomous sub-tasks and other time-consuming, repetitive tasks under the attentive eye of co-workers, gaining increasing trust relative to a human operator.
AI will also help in the visual diagnosis of diseases by recognizing lesions in images where the clues and the type of context are not evident to the human eye.
Many politicians and influential people share the idea that we are now entering the 'Second Machine Age', which is about the automation of knowledge work, thanks to a proliferation of real-time data analytics, machine and deep learning, and the offspring of neural networks.
The AI revolution is being driven by neural networks. I'll give an example so it is clearer for everyone. Essentially, a conventional computer program is a detailed and precise set of instructions written to accomplish a particular task.
Pretty clear, right? Now, a neural network is instead a collection of algorithms designed to recognize patterns in numerical data, linking digital processing units roughly modelled on how the human brain processes and translates real-world inputs.
Let me simplify the concept by saying that in deep learning networks, each layer of nodes (through which data pass in a multi-step pattern recognition process) trains on a distinct set of features based on the previous layer's output. The deeper you progress into the neural net, the more complex the features its nodes can recognize, polarize, and combine with those of the previous layers.
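The "each layer works on the previous layer's output" structure can be shown in a few lines. This is a minimal sketch, assuming made-up layer sizes and random (untrained) weights: it only demonstrates how data flows through a stack of layers, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical stack: 8-dim input -> two hidden layers -> 3 outputs.
layer_sizes = [8, 16, 8, 3]
weights = [rng.normal(0, 0.5, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Feed x through every layer; return each layer's activations."""
    activations = [x]
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)           # this layer consumes the previous output
        activations.append(x)
    return activations

acts = forward(rng.normal(size=8))    # one toy input vector
for i, a in enumerate(acts):
    print(f"layer {i}: shape {a.shape}")
```

Training would then adjust `weights` and `biases` by backpropagation so that deeper activations come to encode progressively more abstract features, exactly the edges-to-parts-to-objects hierarchy described earlier.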
A precise and ordered classification depends on how continuously the learning process is enriched with details and matching examples. With these premises, the neural network's self-efficacy is already one step beyond our inaugural intentions…
Less than five decades ago, the idea of artificial intelligence was nothing more than questionable, speculative science fiction. Now, if we invest a few minutes to reflect on its devouring influence on our lives, we can observe how it is getting closer and closer through the astounding tech accomplishments of part of the people living in our society.
Now, I don't know which is more chilling.