A key consideration in learning algorithms is the issue of generalization, or the ability to robustly deal with novel patterns.
The sequence memory mechanism we have outlined learns by forming synapses to small samples of active neurons in streams of sparse patterns.
The properties of sparse representations naturally allow such a system to generalize.
Two randomly selected sparse patterns will have very little overlap.
Even a modest overlap (such as 20%) is extremely unlikely to occur by chance and implies that the two representations share semantic meaning.
Because dendritic thresholds are set lower than the actual number of synapses on each segment, segments will recognize novel but semantically related patterns as similar.
The system will see similarity between different sequences and make novel predictions based on analogy.
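To make the overlap argument concrete, the following is a minimal sketch in Python (ours, not the paper's implementation; the dimensions n = 2048 and w = 40 active bits, the 20-synapse sample, and the threshold of 13 are illustrative assumptions). It shows that two random sparse patterns overlap almost not at all, and that a segment whose threshold is below its synapse count still matches a corrupted version of its learned pattern.

import numpy as np

rng = np.random.default_rng(42)
n, w = 2048, 40  # illustrative SDR dimensions (assumptions, not from the text)

# Two randomly selected sparse patterns share almost no active bits.
a = rng.choice(n, size=w, replace=False)
b = rng.choice(n, size=w, replace=False)
print("chance overlap:", len(np.intersect1d(a, b)))  # typically 0-2 bits

# A segment forms synapses to a subsample of pattern a; a threshold set
# below the synapse count lets it match a noisy or related version of a.
synapses = rng.choice(a, size=20, replace=False)
threshold = 13

noisy_a = a.copy()
swap = rng.choice(np.setdiff1d(np.arange(n), a), size=8, replace=False)
noisy_a[:8] = swap  # corrupt 20% of the active bits
print("still matches:", len(np.intersect1d(synapses, noisy_a)) >= threshold)

For these dimensions the expected chance overlap of two random patterns is w^2/n = 1600/2048, less than one bit, so a 20% overlap (8 bits) is far above chance, which is the sense in which a small overlap carries significant shared meaning.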
Recently we showed that our sequence memory method can learn a predictive model of sensory-motor sequences (Cui et al., 2015).
We also believe it is likely that cortical motor sequences are generated using a variation of the same network model.
Understanding how layers of cells can perform these different functions and how they work together is the focus of our current research.
5. Materials and Methods
Here we formally describe the activation and learning rules for an HTM sequence memory network.
There are three basic aspects to the rules: initialization, computing cell states, and updating synapses on dendritic segments.
These steps are described below, along with notation and some implementation details.
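As a rough orientation before the formal rules, here is a minimal structural sketch in Python of the three phases named above (initialization, computing cell states, updating synapses). It is our simplified illustration under stated assumptions, not the paper's specification: it omits minicolumns, bursting, winner-cell selection, and segment growth, and every class name, parameter, and permanence value below is an illustrative assumption.

import numpy as np

class SequenceMemorySketch:
    def __init__(self, n_cells=64, n_segments=4, n_synapses=8,
                 theta=3, perm_connected=0.5, perm_inc=0.1, perm_dec=0.05,
                 seed=0):
        rng = np.random.default_rng(seed)
        # 1. Initialization: each cell's segments get potential synapses to
        #    random presynaptic cells with random initial permanences.
        self.pre = rng.integers(0, n_cells,
                                size=(n_cells, n_segments, n_synapses))
        self.perm = rng.uniform(0.0, 1.0,
                                size=(n_cells, n_segments, n_synapses))
        self.theta = theta
        self.perm_connected = perm_connected
        self.perm_inc, self.perm_dec = perm_inc, perm_dec

    def predict(self, active_prev):
        # 2. Computing cell states: a segment is active when enough of its
        #    connected synapses (permanence >= threshold) target previously
        #    active cells; a cell with any active segment becomes predictive.
        connected = self.perm >= self.perm_connected
        hits = connected & np.isin(self.pre, np.flatnonzero(active_prev))
        segment_active = hits.sum(axis=2) >= self.theta
        return segment_active.any(axis=1)

    def learn(self, active_prev, active_now):
        # 3. Updating synapses: on segments of currently active cells,
        #    reinforce synapses to previously active cells and weaken the
        #    rest (a Hebbian-style permanence update).
        for cell in np.flatnonzero(active_now):
            for seg in range(self.pre.shape[1]):
                match = active_prev[self.pre[cell, seg]]
                self.perm[cell, seg] += np.where(match, self.perm_inc,
                                                 -self.perm_dec)
        np.clip(self.perm, 0.0, 1.0, out=self.perm)

# Example usage with boolean activity vectors:
sm = SequenceMemorySketch()
prev = np.zeros(64, dtype=bool); prev[:5] = True
now = np.zeros(64, dtype=bool); now[10:15] = True
print("predictive cells:", sm.predict(prev).sum())
sm.learn(prev, now)

The formal activation and learning rules follow below.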