Tuesday, February 22, 2011

A Hypothesis of Brain Learning Based on Scale-Free Neural Networks

[Figure: a simple neural net model I wrote some time back]
I am contemplating a new framework that tries to enhance our understanding of how the human brain learns.

The basic functioning of the brain is based on Hawkins' memory-prediction framework [see the book "On Intelligence"]. However, I propose that the neural cells in the neocortex are connected as a scale-free network, rather than the static hierarchical structure outlined by Hawkins. Under this assumption, neural cells are not connected at birth through a "wiring diagram" encoded in our DNA. Rather, the connections are formed dynamically throughout our lives, especially during infancy.

Information is stored in the brain as patterns of connections among neurons. Some neurons are more capable of attracting new links than others. With these two properties, growth and preferential attachment, I believe that neurons form a scale-free network, i.e. the distribution of links per node follows a power law [see the book "Linked"]. The neural network thus inherits the fundamental properties of scale-free networks, such as robustness to random failure and vulnerability to targeted damage. (Evidence of scale-free cortical networks has recently been found by researchers.)
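To make the growth and preferential attachment idea concrete, here is a minimal sketch in the spirit of the Barabási–Albert model, not a claim about actual cortical wiring: each new node attaches to existing nodes with probability proportional to how many links they already have, and the degree distribution develops the heavy tail characteristic of a power law. The node counts and parameters are arbitrary.

```python
import random
from collections import Counter

def grow_network(n_nodes=2000, links_per_new_node=2, seed=0):
    """Grow a network by preferential attachment: each new node attaches
    to existing nodes with probability proportional to their degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core.
    edges = [(0, 1), (1, 2), (2, 0)]
    # 'ends' lists every edge endpoint, so sampling uniformly from it
    # picks a node with probability proportional to its degree.
    ends = [n for e in edges for n in e]
    for new in range(3, n_nodes):
        targets = set()
        while len(targets) < links_per_new_node:
            targets.add(rng.choice(ends))
        for t in targets:
            edges.append((new, t))
            ends.extend((new, t))
    return edges

edges = grow_network()
degree = Counter(n for e in edges for n in e)
# Histogram of degrees: a few highly connected hubs and many sparsely
# connected nodes -- the signature of a power-law distribution.
hist = Counter(degree.values())
for k in sorted(hist)[:10]:
    print(f"degree {k}: {hist[k]} nodes")
```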

Remembering

Memory is stored as relations between sequential patterns. We remember by making associations between patterns that appear together spatially and temporally. When a child first sees a triangular object, she is aware that it is an actual object because the pixels of the triangle (or rather the patterns of neuron excitations) always move together on her retina each time her eyes sample the world. These pixels become associated with each other. At the same time, the triangle is also associated with the background, the present time, its color, the sound it makes, the lighting, the people around, and everything else happening around her at that moment. All these associations become connections between the neurons representing these excitation patterns.

The association mechanism is based on Hebbian learning, which can be simplistically summarized as "cells that fire together, wire together."
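As an illustration, here is a minimal sketch of that rule, assuming binary activity patterns and a simple additive weight update; the cell count, learning rate, and the "triangle plus context" pattern are made up for the example.

```python
import numpy as np

def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen the connection between every pair of cells that are
    active at the same time: "fire together, wire together"."""
    co_firing = np.outer(activity, activity)   # 1 where both cells fire
    np.fill_diagonal(co_firing, 0)             # no self-connections
    return weights + learning_rate * co_firing

n_cells = 8
weights = np.zeros((n_cells, n_cells))

# The "triangle" pixels (cells 0-2) appear together with background and
# context cells (3-4) in this toy moment of experience.
moment = np.array([1, 1, 1, 1, 1, 0, 0, 0])
for _ in range(5):                             # the scene is seen repeatedly
    weights = hebbian_update(weights, moment)

print(weights[0])   # cell 0 is now strongly linked to cells 1-4, not 5-7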

Learning

Going back to the triangle analogy: the child is aware of the triangular object, yet she has no idea what it is, until someone tells her that this particular triangular-shaped object is called a triangle. That is when the neurons representing the triangle become associated with the name "triangle", which is itself a set of neurons representing the written symbol of the word. These neurons are also connected to other neurons that represent the sound of the word "triangle", and to any concrete facts associated with it. Learning happens whenever a new association is made.

Hawkins argues that during learning, certain cells "learn" to fire when lower-level regions learn a sequence of patterns. These cells are passed up to higher levels as abstract "names" for the detailed patterns. However, he did not lay out how these cells are selected in the first place. I think the naming is a natural result of association rather than explicit cell selection.

This explains why some cells are able to attract more links than others. The cells that represent abstract shapes, forms, and concepts have a much larger probability of being associated with other cells. Whenever something happens, whether an image appears, a melody plays, or a train of thought proceeds, the general conceptual cells will fire if these concrete events fall into their category. Each time this firing occurs, these conceptual cells make new associations.
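A toy simulation of this idea combines the Hebbian rule above with one hypothetical "conceptual" cell that fires whenever an event falls into its category: because it takes part in many more co-firing events than any concrete cell, it accumulates far more links. The category frequency and event counts here are invented purely for illustration.

```python
import random

random.seed(1)

n_concrete = 100          # cells coding specific, concrete patterns
concept_cell = "shape"    # one cell that fires for any shape-like event
links = {i: set() for i in range(n_concrete)}
links[concept_cell] = set()

for _ in range(500):                                     # a stream of experienced events
    event_cells = random.sample(range(n_concrete), 3)    # a few concrete cells fire
    if random.random() < 0.5:                            # half the events are "shape-like"
        event_cells.append(concept_cell)                 # so the conceptual cell fires too
    # Hebbian association: every pair of co-firing cells gets linked.
    for a in event_cells:
        for b in event_cells:
            if a != b:
                links[a].add(b)

avg_concrete = sum(len(links[i]) for i in range(n_concrete)) / n_concrete
print("average links of a concrete cell:", avg_concrete)
print("links of the conceptual cell:   ", len(links[concept_cell]))
```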

Classifying

According to many studies, the brain learns to classify information into categories. As information is passed from lower to higher levels of the memory hierarchy, more and more details are filtered out, forming abstractions. I think this classification is a natural result of the threshold logic built into each and every neuron. A neuron collects electrical signals from all the connections along its dendrites. Whenever the collective strength of these signals becomes larger than its built-in threshold, it fires. When it fires, it propagates its own signal along its axon to all of its downstream connections.
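Here is a minimal sketch of that threshold logic, in the spirit of a McCulloch-Pitts unit: the cell sums its weighted inputs and fires only when the sum passes its threshold, so weak or incidental details do not make it up the hierarchy. The weights and threshold values are illustrative, not measured.

```python
import numpy as np

def threshold_cell(inputs, weights, threshold):
    """Fire (output 1) only if the weighted sum of inputs exceeds the
    cell's built-in threshold; otherwise stay silent (output 0)."""
    return 1 if np.dot(inputs, weights) > threshold else 0

# A higher-level cell listening to four lower-level cells.
weights = np.array([1.0, 1.0, 1.0, 0.2])   # the last input is a weak, incidental detail

# A strong, coherent lower-level pattern drives the cell past threshold...
print(threshold_cell(np.array([1, 1, 1, 0]), weights, threshold=2.5))   # -> 1
# ...but scattered or weak details alone do not, so they are filtered out.
print(threshold_cell(np.array([1, 0, 0, 1]), weights, threshold=2.5))   # -> 0
```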

Connectionists

This is in some sense similar to the "connectionist" theory of brain learning. However, there is a fundamental difference. Rather than dividing the brain into functional modules, or sub-networks, each responsible for one particular domain, such as a name network or an image network, I believe all neurons are able to connect to any other neuron within their physical reach. I believe that neurons make connections, or associations, dynamically according to the information provided by all sensory input.

To be continued, on the hippocampus.