Memory-prediction framework

The memory-prediction framework is a theory of brain function created by Jeff Hawkins and described in his 2004 book On Intelligence. The theory concerns the role of the mammalian neocortex, and its associations with the hippocampus and the thalamus, in matching sensory inputs to stored memory patterns, and how this process leads to predictions of what will happen in the future.

Overview

The theory is motivated by the observed similarities between the brain structures (especially neocortical tissue) that are used for a wide range of behaviours available to mammals. The theory posits that the remarkably uniform physical arrangement of cortical tissue reflects a single principle or algorithm which underlies all cortical information processing. The basic processing principle is hypothesized to be a feedback/recall loop which involves both cortical and extra-cortical participation (the latter from the thalamus and the hippocampus in particular).

The memory-prediction framework provides a unified basis for thinking about the adaptive control of complex behavior. Although certain brain structures are identified as participants in the core 'algorithm' of prediction-from-memory, these details are less important than the set of principles that are proposed as the basis for all high-level cognitive processing.

The basic theory: recognition and prediction in bi-directional hierarchies

The central concept of the memory-prediction framework is that bottom-up inputs are matched in a hierarchy of recognition, and evoke a series of top-down expectations encoded as potentiations. These expectations interact with the bottom-up signals to both analyse those inputs and generate predictions of subsequent expected inputs. Each hierarchy level remembers frequently observed temporal sequences of input patterns and generates labels or 'names' for these sequences. When an input sequence matches a memorized sequence at a given layer of the hierarchy, a label or 'name' is propagated up the hierarchy - thus eliminating details at higher levels and enabling them to learn higher-order sequences. This process produces increased invariance at higher levels. Higher levels predict future input by matching partial sequences and projecting their expectations to the lower levels. However, when a mismatch between input and memorized/predicted sequences occurs, a more complete representation propagates upwards. This causes alternative 'interpretations' to be activated at higher levels, which in turn generates other predictions at lower levels.
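
To make the sequence-naming principle concrete, the following is a minimal Python sketch of a single level of such a hierarchy. It illustrates the idea only and is not Hawkins' actual algorithm; the class name SequenceNode, the exact matching rule, and the example sequence are all invented for this illustration.

    # One level of the recognition hierarchy (illustrative sketch only):
    # the node memorizes temporal sequences, emits a stable 'name' while a
    # known sequence is unfolding, and predicts the next expected input.

    class SequenceNode:
        def __init__(self):
            self.sequences = {}   # name -> tuple of input patterns
            self.buffer = []      # recent inputs awaiting a match

        def learn(self, name, patterns):
            """Memorize a frequently observed temporal sequence under a label."""
            self.sequences[name] = tuple(patterns)

        def step(self, pattern):
            """Consume one bottom-up input; return (name or None, prediction or None)."""
            self.buffer.append(pattern)
            for name, seq in self.sequences.items():
                n = len(self.buffer)
                if tuple(self.buffer) == seq[:n]:    # partial match so far
                    prediction = seq[n] if n < len(seq) else None
                    if n == len(seq):
                        self.buffer = []             # sequence complete
                    return name, prediction
            self.buffer = self.buffer[1:]            # mismatch: drop oldest input
            return None, None

    node = SequenceNode()
    node.learn("melody", ["C", "E", "G"])
    print(node.step("C"))   # ('melody', 'E') - the name goes up, the prediction down
    print(node.step("E"))   # ('melody', 'G')

While the match holds, the compact name is all a higher level sees, so its input is more invariant; on a mismatch the node returns no name, standing in for the fuller representation that propagates upwards.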

Consider, for example, the process of vision. Bottom-up information starts as low-level retinal signals (indicating the presence of simple visual elements and contrasts). At higher levels of the hierarchy, increasingly meaningful information is extracted, regarding the presence of lines, regions, motions, etc. Even further up the hierarchy, activity corresponds to the presence of specific objects - and then to behaviours of these objects. Top-down information fills in details about the recognized objects, and also about their expected behaviour as time progresses.

The sensory hierarchy induces a number of differences between its levels. As one moves up the hierarchy, representations have increased (a toy illustration follows the list):

  • Extent - for example, larger areas of the visual field, or more extensive tactile regions.
  • Temporal stability - lower-level entities change quickly, whereas higher-level percepts tend to be more stable.
  • Abstraction - through the process of successive extraction of invariant features, increasingly abstract entities are recognized.
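
The growth in extent can be shown with a toy calculation. Assuming, purely for illustration, that each unit pools a fixed number of children, the portion of the input covered by a unit grows multiplicatively with its level:

    # Illustrative only: each level pools the outputs of 'fan_in' children,
    # so a unit's effective receptive field grows with depth.
    fan_in = 4          # children pooled per unit (assumed value)
    for level in range(4):
        print(f"level {level}: extent = {fan_in ** level} input patches")
    # level 0: 1, level 1: 4, level 2: 16, level 3: 64

The same pooling accounts for temporal stability: a higher-level unit changes its output only when the whole pooled pattern changes, not when a single low-level input flickers.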

The relationship between sensory and motor processing is an important aspect of the basic theory. It is proposed that the motor areas of cortex consist of a behavioural hierarchy similar to the sensory hierarchy, with the lowest levels consisting of explicit motor commands to musculature and the highest levels corresponding to abstract prescriptions (e.g. 'resize the browser'). The sensory and motor hierarchies are tightly coupled, with behaviour giving rise to sensory expectations and sensory perceptions driving motor processes.

Finally, it is important to note that all the memories in the cortical hierarchy have to be learnt - this information is not pre-wired in the brain. Hence, the process of extracting this representation from the flow of inputs and behaviours is theorized as a process that happens continually during cognition.

Other terms

Hawkins has extensive training as an electrical engineer. Another way to describe the theory (hinted at in his book) is as a learning hierarchy of feed-forward stochastic state machines. In this view, the brain is analyzed as an encoding problem, not too dissimilar from future-predicting error-correction codes. The hierarchy is one of abstraction: the states of the higher-level machines represent more abstract conditions or events, and these states predispose the lower-level machines to perform certain transitions. The lower-level machines model limited domains of experience, or control or interpret sensors or effectors. The whole system controls the organism's behavior. Since the state machines are "feed forward", the organism responds to future events predicted from past data. Since the hierarchy is layered, the system exhibits behavioral flexibility, easily producing new sequences of behavior in response to new sensory data. Since the system learns, the new behavior adapts to changing conditions.
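
A toy Python sketch of this reading follows; all states and transition tables are invented for illustration. The point is only the hierarchical coupling: the higher machine's state selects which transition table the lower machine uses, so an abstract condition predisposes concrete transitions.

    # Higher machine: an abstract condition; lower machine: concrete events.
    high_state = "walking"
    transitions = {
        # lower-level transition tables, indexed by the higher machine's state
        "walking":  {"left_foot": "right_foot", "right_foot": "left_foot"},
        "standing": {"left_foot": "left_foot", "right_foot": "right_foot"},
    }

    def predict_next(low_state):
        """Predict the lower machine's next event, given the predisposition
        imposed from above."""
        return transitions[high_state][low_state]

    print(predict_next("left_foot"))   # 'right_foot' while high_state is 'walking'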

That is, the evolutionary purpose of the brain is to predict the future, in admittedly limited ways, so as to change it.

Neurophysiological implementation

The hierarchies described above are theorized to occur primarily in the mammalian neocortex. In particular, the neocortex is assumed to consist of a large number of columns (as surmised also by Vernon Benjamin Mountcastle from anatomical and theoretical considerations). Each column is attuned to a particular feature at a given level in a hierarchy. It receives bottom-up inputs from lower levels, and top-down inputs from higher levels. (Other columns at the same level also feed into a given column, and serve mostly to inhibit the activation of mutually exclusive representations.) When an input is recognized - that is, when acceptable agreement is obtained between the bottom-up and top-down sources - a column generates outputs which in turn propagate to both lower and higher levels.
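
The agreement test can be caricatured in a few lines of Python. The dot-product comparison and the threshold value are assumptions made for this sketch, not claims about cortical biophysics:

    def column_output(bottom_up, top_down, threshold=0.7):
        """Fire when bottom-up evidence and top-down expectation agree."""
        agreement = sum(b * t for b, t in zip(bottom_up, top_down))
        return "recognized" if agreement >= threshold else "mismatch"

    print(column_output([1, 0, 1], [0.9, 0.1, 0.8]))   # 'recognized'
    print(column_output([1, 0, 1], [0.0, 0.9, 0.1]))   # 'mismatch'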

Cortex

These processes map well to specific layers within mammalian cortex. (The cortical layers should not be confused with levels of the processing hierarchy: all the layers in a single column participate as one element in a single hierarchical level.) Bottom-up input arrives at layer 4 (L4), whence it propagates to L2 and L3 for recognition of the invariant content. Top-down activation arrives at L2 and L3 via L1 (the mostly axonal layer that distributes activation locally across columns). L2 and L3 compare bottom-up and top-down information, and generate either the invariant 'names' when a sufficient match is achieved, or the more variable signals that occur when matching fails. These signals are propagated up the hierarchy (via L5) and also down the hierarchy (via L6 and L1).
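
The data flow just described can be condensed into a schematic routing function (Python; the string signals merely stand in for neural activity, and the equality test stands in for the L2/L3 comparison):

    def column_step(bottom_up, top_down):
        l4 = bottom_up                     # L4 receives the feed-forward input
        match = (l4 == top_down)           # L2/3 compare it with the top-down
                                           # expectation distributed via L1
        signal = ("name", l4) if match else ("detail", l4)
        return {"up_via_L5": signal, "down_via_L6_and_L1": signal}

    print(column_step("edge@45deg", "edge@45deg"))
    # {'up_via_L5': ('name', 'edge@45deg'), 'down_via_L6_and_L1': ('name', 'edge@45deg')}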

Thalamus

To account for storage and recognition of sequences of patterns, a combination of two processes is suggested. The nonspecific thalamus acts as a 'delay line' - that is, L5 activates this brain area, which re-activates L1 after a slight delay. Thus, the output of one column generates L1 activity that coincides with the bottom-up input to the column that comes next in the sequence. This time ordering operates in conjunction with the higher-level identification of the sequence, which does not change in time; hence, activation of the sequence representation causes the lower-level components to be predicted one after the other. (Besides this role in sequencing, the thalamus is also active as a sensory waystation - these roles apparently involve distinct regions of this anatomically non-uniform structure.)
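
A one-step delay suffices to show the timing relation. In this Python sketch a one-slot deque stands in for the nonspecific thalamus, so the signal emitted for one element arrives back at L1 just as the next element arrives bottom-up:

    from collections import deque

    delay_line = deque([None], maxlen=1)   # one-step delay standing in for
                                           # the nonspecific thalamus
    sequence = ["do", "re", "mi"]
    for inp in sequence:
        delayed = delay_line[0]            # L1 context from the previous step
        print(f"input={inp!r}, coincident L1 context={delayed!r}")
        delay_line.append(inp)             # L5 output enters the delay line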

Hippocampus

Another anatomically diverse brain structure hypothesized to play an important role in hierarchical cognition is the hippocampus. It is well known that damage to the hippocampus impairs the formation of long-term declarative memory; individuals with such damage are unable to form new memories of an episodic nature, although they can recall earlier memories without difficulty and can also learn new skills. In the current theory, the hippocampus is thought of as the top level of the cortical hierarchy; it is specialized to retain memories of events that propagate all the way to the top. As such events fit into predictable patterns, they become memorizable at lower levels in the hierarchy. (Such movement of memories down the hierarchy is, incidentally, a general prediction of the theory.) Thus, the hippocampus continually memorizes 'unexpected' events (that is, those not predicted at lower levels); if it is damaged, the entire process of memorization through the hierarchy is compromised.
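
This division of labor, with novel events held at the top and migrating downward once they become predictable, can be sketched as follows. The promotion counter and threshold are invented; only the downward flow of memories is the point:

    cortex = set()        # events the lower hierarchy already predicts
    hippocampus = {}      # event -> number of times it has been seen

    def observe(event, promote_after=3):
        if event in cortex:
            return "predicted by cortex"
        hippocampus[event] = hippocampus.get(event, 0) + 1
        if hippocampus[event] >= promote_after:
            cortex.add(event)              # the memory moves down the hierarchy
            del hippocampus[event]
            return "now memorized in cortex"
        return "stored in hippocampus (unexpected)"

    for _ in range(3):
        print(observe("sunrise"))
    # stored ... stored ... now memorized in cortex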

Explanatory successes and predictions

The memory-prediction framework explains a number of psychologically salient aspects of cognition. For example, the ability of experts in any field to effortlessly analyze and remember complex problems within their field is a natural consequence of their formation of increasingly refined conceptual hierarchies. Also, the progression from 'perception' to 'understanding' is readily understandable as a result of the matching of top-down and bottom-up expectations. Mismatches, in contrast, generate the exquisite ability of biological cognition to detect unexpected perceptions and situations. (Deficiencies in this regard are a common characteristic of current approaches to artificial intelligence.)

Besides these subjectively satisfying explanations, the framework also makes a number of testable predictions. For example, the important role that prediction plays throughout the sensory hierarchies calls for anticipatory neural activity in certain cells throughout sensory cortex. In addition, cells that 'name' certain invariants should remain active throughout the presence of those invariants, even if the underlying inputs change. The predicted patterns of bottom-up and top-down activity - with the former being more complex when expectations are not met - may be detectable, for example by functional magnetic resonance imaging (fMRI).

Although these predictions are not highly specific to the proposed theory, they are sufficiently unambiguous to make verification or rejection of its central tenets possible. See On Intelligence for details on the predictions and findings.

Contribution and limitations

By design, the current theory builds on the work of numerous neurobiologists, and it may be argued that most of these ideas have already been proposed by researchers such as Grossberg and Mountcastle. On the other hand, the novel separation of the conceptual machinery of bidirectional processing and invariant recognition from the biological details of neural layers, columns and structures lays the foundation for abstract thinking about a wide range of cognitive processes.

The most significant limitation of this theory is its current lack of detail. For example, the concept of invariance plays a crucial role; Hawkins posits "name cells" for at least some of these invariants. (See also Neural ensemble#Encoding for grandmother neurons which perform this type of function, and mirror neurons for a somatosensory system viewpoint.) But it is far from obvious how to develop a mathematically rigorous definition of invariance that will carry the required conceptual load across the domains presented by Hawkins. Similarly, a complete theory will require credible details on both the short-term dynamics and the learning processes that enable the cortical layers to behave as described.

Machine learning models

The memory-prediction theory claims a common algorithm is employed by all regions in the neocortex. The theory has given rise to a number of software models aiming to simulate this common algorithm using a hierarchical memory structure. The year in the list below indicates when the model was last updated.

Models based on Bayesian networks

The following models use belief propagation or belief revision in singly connected Bayesian networks; a minimal sketch of the message passing follows the list.

  • Hierarchical Temporal Memory (HTM), a model, related development platform and source code from Numenta, Inc. (2008).
  • HtmLib, an alternative implementation of the HTM algorithms by Greg Kochaniak, with a number of modifications to improve recognition accuracy and speed (2008).
  • Project Neocortex, an open source project for modeling the memory-prediction framework (2008).
    • Saulius Garalevicius' research page, with research papers and programs presenting experimental results with a model of the memory-prediction framework, the basis for the Neocortex project (2007).
  • A Hierarchical Bayesian Model of Invariant Pattern Recognition in the Visual Cortex, a paper describing an earlier, pre-HTM Bayesian model by Dileep George, co-founder of Numenta, Inc. (2005). This is the first model of the memory-prediction framework to use Bayesian networks, and all the above models are based on these initial ideas. The Matlab source code of this model was freely available for download for a number of years.
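
For orientation, here is a minimal sketch of belief propagation on the smallest singly connected network, a two-node chain A -> B with evidence observed at B. The numbers are invented; only the message-passing pattern matters: the child sends a likelihood message to its parent, which combines it with the prior.

    p_a = {0: 0.7, 1: 0.3}                     # prior P(A)
    p_b_given_a = {0: {0: 0.9, 1: 0.1},        # P(B | A), indexed [a][b]
                   1: {0: 0.2, 1: 0.8}}

    evidence_b = 1
    # likelihood message from child B up to parent A: lam(a) = P(B = evidence | a)
    lam = {a: p_b_given_a[a][evidence_b] for a in p_a}
    # belief: posterior P(A | B = evidence), proportional to prior * likelihood
    unnorm = {a: p_a[a] * lam[a] for a in p_a}
    z = sum(unnorm.values())
    belief = {a: v / z for a, v in unnorm.items()}
    print(belief)   # P(A=1 | B=1) is about 0.774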

Other models

  • Implementation of MPF, a paper by Saulius Garalevicius describing a method of classification and prediction in a model that stores temporal sequences and employs unsupervised learning (2005).
  • M5, a pattern machine for Palm OS that stores pattern sequences and recalls the patterns relevant to its present environment (2007).
  • BrainGame, an open source predictor class which learns patterns and can be linked to other predictors (2005).
