Artificial Intelligence Edges Closer to Mimicking the Human Brain

A New Approach to Artificial Intelligence and Machine Learning

Even though AI is constantly compared to the human brain, it still works very differently. In a new machine learning approach, engineers set aside parallels to the human brain and turned to a “lowly worm” for inspiration. Because many machine learning algorithms stop adapting once their initial training period ends, the idea of a liquid neural network was developed with a form of built-in “neuroplasticity,” inspired by the 302 neurons that make up the nervous system of C. elegans, a tiny nematode (or worm).

A team from MIT developed equations to mathematically model the worm’s neurons and built them into a neural network.

Liquid Machine Learning and Flexible Algorithms

MIT scientists are working toward a more innovative future for robot control, natural language processing, and video processing by developing a new type of neural network. This network learns on the job, rather than only during the training phase, and could support better decision-making in fields such as autonomous driving and medical diagnosis.

Because the system is built from flexible algorithms, it is often referred to as a “liquid” network. These algorithms continually change their underlying equations so they can keep adapting to new data.
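One way to picture this is with a toy version of a liquid time-constant neuron. The sketch below is a heavily simplified illustration, not MIT's implementation; the gate function, constants, and single-state setup are assumptions made for readability. The point is that the input itself modulates the neuron's decay rate, so the governing equation keeps shifting as data streams in.

```python
import numpy as np

def liquid_neuron_step(x, u, dt=0.01, tau=1.0, A=1.0, w=2.0, b=0.0):
    """One Euler step of a toy liquid time-constant neuron.

    x: current hidden state, u: current input sample. The gate f depends on
    the input, so the effective time constant 1 / (1/tau + f) changes as new
    data arrives: the "liquid" part.
    """
    f = 1.0 / (1.0 + np.exp(-(w * u + b)))   # input-dependent gate (assumed form)
    dxdt = -(1.0 / tau + f) * x + f * A      # state equation of the toy model
    return x + dt * dxdt

# Feed a changing stream of samples; the dynamics adapt at every step.
x = 0.0
for u in np.sin(np.linspace(0.0, 6.0 * np.pi, 300)):
    x = liquid_neuron_step(x, u)
print(round(x, 4))
```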

Ramin Hasani, the study’s lead author, said, “Let’s consider video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. The vicissitudes of these ever-changing data streams can be unpredictable. Yet analyzing these data in real-time, and using them to anticipate future behavior, can boost emerging technologies like self-driving cars. So, we built an algorithm fit for the task.”

In various tests, this new neural network edged out other state-of-the-art time series algorithms, accurately predicting future values in datasets ranging from atmospheric chemistry to traffic patterns.

Prior to Liquid Machine Learning, HTM Was Developed
HTM: Hierarchical Temporal Memory

The Brain

The human brain is capable of doing a lot with very little work, as only around 2% of a human’s neurons fire at any given moment. Analyzing new sights and experiences is a difficult task, so the brain saves a lot of effort by learning sequential inputs and predicting the next output. Hierarchical Temporal Memory (HTM) attempts to mimic the architecture of the human neocortex by modeling the brain’s pattern recognition capabilities.

Contrast this with a contemporary neural net: each node has a value, and each connection carries a weight. Neural nets learn by updating the weight of each connection (synapse) based on each training sample.
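For reference, here is a minimal sketch of that conventional learning rule: one neuron's connection weights are nudged by gradient descent after each training sample. The network size, learning rate, and target value are illustrative choices, not taken from any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)        # weights of one neuron's three input connections
lr = 0.1                      # learning rate (illustrative)

x, y = np.array([0.5, -1.0, 2.0]), 0.8   # one training sample and its target

for _ in range(100):
    p = np.tanh(x @ w)                   # node value: squashed weighted sum
    grad = (p - y) * (1.0 - p ** 2) * x  # gradient of squared error w.r.t. w
    w -= lr * grad                       # update every connection (synapse) weight

print(round(float(np.tanh(x @ w)), 3))   # prediction approaches the target 0.8
```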

Biological Background

The neocortex is responsible for cognition, most sensory perception, language, spatial thinking, and movement. It performs these tasks through billions of neurons, each of which forms thousands of synapses.

The neurons are stacked into cortical columns, where they receive input through their synapses. Each neuron has thousands of synaptic connections, but only about 10% of them can actually cause a neural spike. The other 90% sit on distal dendrites and handle pattern recognition: if about 10 of a neuron’s distal synapses fire together, they cause a dendritic spike, putting the neuron in a “predictive state.” A predictive neuron reacts to proximal stimulation faster than non-predictive neurons, and when it fires it also prevents neighboring (distally connected) neurons from firing. This is how the brain achieves results with only a small amount of effort.
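A rough sketch of that mechanism follows; the cell counts, segment sizes, and spike threshold are illustrative assumptions, not measured values. A neuron is put into a predictive state when roughly ten synapses on one of its distal dendrite segments are active at the same time.

```python
import numpy as np

rng = np.random.default_rng(1)
DENDRITIC_SPIKE_THRESHOLD = 10   # ~10 co-active distal synapses (illustrative)

# One neuron with five distal dendrite segments, each sampling 30 other cells.
segments = [rng.choice(1000, size=30, replace=False) for _ in range(5)]

def is_predictive(active_cells, segments):
    """The neuron enters a predictive state if any one of its segments sees
    enough active synapses to trigger a dendritic spike."""
    active = set(active_cells)
    return any(sum(cell in active for cell in seg) >= DENDRITIC_SPIKE_THRESHOLD
               for seg in segments)

# Simulated activity elsewhere in the region that overlaps segment 2 heavily.
active_cells = list(segments[2][:12]) + list(rng.choice(1000, size=20))
print(is_predictive(active_cells, segments))   # True: segment 2 spiked
```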

Neural Architecture 

Here is a neuron compared to the HTM representation: 

The HTM neurons are organized into a structure that mimics the cortical column; the “Sequence Memory” layer consists of many micro-columns of neurons. Viewing that layer from above, a Sparse Distributed Representation (SDR) can be read off: a sparse binary array in which a 1 means that at least one neuron in the corresponding micro-column is active.
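For instance (the column count, cell count, and activity rate below are invented for illustration), the SDR can be computed by collapsing each micro-column to a single bit:

```python
import numpy as np

rng = np.random.default_rng(2)

# 64 micro-columns x 8 neurons each; True where a neuron is currently active.
activity = rng.random((64, 8)) < 0.02        # sparse activity (illustrative rate)

# SDR: one bit per micro-column, set if at least one of its neurons is active.
sdr = activity.any(axis=1).astype(np.uint8)
print(sdr.sum(), "of", sdr.size, "columns active")
```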

Sequential Pattern Recognition

HTM models learn by receiving a series of SDRs, adjusting their synaptic connections accordingly, and eventually predicting the next input from any given SDR.

Making Sense of ABCD and XBCY

ABCD and XBCY are a bit abstract, so let’s talk about something more concrete.

Imagine you’re a loner who only eats chicken tenders, and you’ve only ever been to your new friend’s house twice: last week, once on Monday afternoon and once on Wednesday evening.

Monday afternoon: they served you lunch, you ate the burger, and you watched a movie on their couch. Wednesday evening: they served you dinner, you ate the spaghetti, and you had indigestion.

Now if you suddenly think of Monday afternoon, you might think of eating lunch at your friend’s place and watching a movie, while thinking of Wednesday evening would yield its own train of thought. If you think of burgers you might think of movies, and you might equate spaghetti with bad times.

But what if you just think of “your friend serving you food” in general? You might think of burgers and spaghetti, and then a movie and/or indigestion.

Thinking of ‘eating at your friend’s place’ without further details is like receiving a bursting input of B after knowing B_from_A and B_from_X. It’s the general version of an experience without specific context.

Similarly, thinking of ‘going to your friend’s place’ without thinking of afternoon or evening could mean thinking of lunch or dinner. Just as B1 (B after A) and B2 (B after X) are distinct combinations of neurons within the same arrangement of columns, you have similar experiences (movie or indigestion?) following different contexts (burger or spaghetti?). Context matters.
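In HTM terms, the sketch below (letters, column indices, and cell counts are all invented for illustration) shows how the same input B lights up the same columns every time, while the specific cells inside those columns encode whether B followed A or X; with no context at all, the columns burst.

```python
# Every presentation of a letter activates the same columns, but which cell
# fires inside each column depends on the previous letter (the context).
COLUMNS = {"A": [0, 1], "B": [2, 3], "X": [4, 5]}   # columns per input (illustrative)
CONTEXT_CELL = {"A": 1, "X": 2}                      # cell selected by each context (assumed)
CELLS_PER_COLUMN = 4

def represent(letter, prev_letter=None):
    """Return (column, active_cells) pairs; the active cell encodes the context."""
    if prev_letter is None:
        cells = list(range(CELLS_PER_COLUMN))   # bursting: every cell in the column fires
    else:
        cells = [CONTEXT_CELL[prev_letter]]     # one context-specific cell fires
    return [(col, cells) for col in COLUMNS[letter]]

print("B1 (B after A):  ", represent("B", "A"))
print("B2 (B after X):  ", represent("B", "X"))
print("B  (no context): ", represent("B"))      # the 'general' version of the experience
```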

Future Applications

HTM is a great tool for data that is temporal or sequential. It is a work in progress that evolves as new developments are made in neuroscience. The primary developer of HTM libraries is Numenta, which has built a whole open-source community around them.

Numenta 

Numenta research exists at the “intersection of biological and machine intelligence,” says Vincenzo Lomonaco, a postdoctoral researcher at the University of Bologna. He has developed a single-entry-point, easy-to-follow guide to the HTM algorithm for people who have a basic machine learning background but have never been exposed to Numenta research.

The Thousand Brains Theory

There is one theory that encompasses Numenta’s research efforts: the Thousand Brains Theory of Intelligence. It is a biologically constrained theory of intelligence, built by reverse-engineering the neocortex.

Figure 1. Thousand Brains Theory of Intelligence

HTM

The HTM Algorithm focuses on three main properties: sequence learning, continual learning, and sparse distributed representations. 

Figure 2. Comparison of RNNs and HTMs

The HTM algorithm should possess:

  1. Sequence learning: being able to model temporally correlated patterns.
  2. High-order predictions: real-world sequences contain contextual dependencies that span multiple time steps.
  3. Multiple simultaneous predictions: the algorithm outputs a distribution of possible future outcomes.
  4. Continual learning: important for processing continuous real-time perceptual streams.
  5. Online learning: the algorithm should be able to predict and learn patterns on the fly, without needing entire sequences up front (see the sketch after this list).
  6. Noise robustness and fault tolerance: the algorithm should exhibit robustness to noise in its inputs.
  7. No hyperparameter tuning: the algorithm should achieve acceptable performance on a wide range of problems without any task-specific hyperparameter tuning.
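Properties 3 through 5 boil down to a predict-then-learn loop that handles one element at a time and never revisits a stored dataset. The sketch below shows that control flow with a deliberately tiny stand-in model; the class and its methods are placeholders invented for illustration, not the Numenta API, and the toy only captures first-order transitions rather than true high-order context.

```python
from collections import defaultdict

class TinySequenceModel:
    """Toy stand-in for an HTM region: remembers which element followed which."""
    def __init__(self):
        self.successors = defaultdict(set)
        self.prev = None

    def learn(self, element):
        if self.prev is not None:
            self.successors[self.prev].add(element)   # continual, per-element update
        self.prev = element

    def predict(self):
        return self.successors.get(self.prev, set())  # all plausible next elements

def run_stream(model, stream):
    """Online loop: predict the next element, observe it, learn, move on."""
    correct = total = 0
    prediction = None
    for observed in stream:                           # one element at a time, no batches
        if prediction is not None:
            correct += observed in prediction         # any of several predictions may match
            total += 1
        model.learn(observed)
        prediction = model.predict()
    return correct / total if total else 0.0

print(run_stream(TinySequenceModel(), list("ABCDXBCYABCDXBCY")))
```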

Numenta is well positioned to lead the way toward the next wave of neuroscience-inspired AI research. Even though the HTM algorithm is quite new, early results already show how a neuroscience-grounded theory of intelligence can inform and guide the development of future AI systems.

Resources:

https://numenta.com/blog/2019/10/24/machine-learning-guide-to-htm 

https://www.techexplorist.com/liquid-machine-learning-type-neural-network-learns-job/37531/

https://medium.com/swlh/towards-deeper-learning-hierarchical-temporal-memory-24199c1470b9

https://innovationatwork.ieee.org/is-liquid-learning-the-next-revolution-in-machine-learning/

https://singularityhub.com/2021/01/31/new-liquid-ai-learns-as-it-experiences-the-world-in-real-time/

https://news.mit.edu/2021/machine-learning-adapts-0128