Modification to sensory motor temporal memory (Layer 4) algorithm
The current algorithm is described here.
We made the observation that if learn-on-one-cell (hysteresis) is in effect, then we can replace the context-free (first-order) sensory input on the distal dendrites with lateral connections, and the algorithm will still work. This is because the hysteresis causes the same cells to represent a particular sensory input (element) in a given spatial configuration (world). Thus, predictions will not get confused within this world, even with random movements.
So this is the resulting algorithm (see the sketch after this list):
- Start with basic temporal memory, allowing lateral distal connections
- Enable learn-on-one-cell hysteresis
- Add motor signal (efference copy) to input on distal dendrites
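Below is a minimal, runnable sketch of these three steps. It assumes a toy representation (not the nupic.research implementation, and all names are hypothetical): cells are `(column, cell)` tuples, each distal segment is a single learned pattern matched by subset test instead of synapse permanences, and a least-used-cell rule stands in for learn-on-one-cell's winner choice.

```python
from collections import defaultdict

class SensorimotorTMSketch:
    """Toy sketch of the modified algorithm. Cells are (column, cell) tuples;
    the motor signal arrives on distal dendrites as ("motor", value) bits."""

    def __init__(self, cells_per_column=8):
        self.cells_per_column = cells_per_column
        self.segments = defaultdict(list)     # cell -> learned distal patterns
        self.winner_for_column = {}           # learn-on-one-cell memory
        self.times_chosen = defaultdict(int)  # proxy for "already-used" cells
        self.active_cells = set()
        self.winner_cells = set()

    def reset_sequence(self):
        """Clear activity between passes through the same world."""
        self.active_cells = set()
        self.winner_cells = set()

    def reset_world(self):
        """Moving to a new world: clear activity and the hysteresis memory."""
        self.reset_sequence()
        self.winner_for_column.clear()

    def _pick_winner(self, col):
        """Stand-in for learn-on-one-cell's choice: take the least-used cell,
        so a fresh cell comes to represent this element in a new world."""
        winner = min(((col, i) for i in range(self.cells_per_column)),
                     key=lambda c: self.times_chosen[c])
        self.times_chosen[winner] += 1
        return winner

    def predict(self, motor_bits):
        """Cells predicted for the next input: distal segments are matched
        against the active cells (lateral input) plus the motor copy."""
        context = self.active_cells | {("motor", m) for m in motor_bits}
        return {cell for cell, segs in self.segments.items()
                if any(seg <= context for seg in segs)}

    def compute(self, active_columns, motor_bits, learn=True):
        predictive = self.predict(motor_bits)
        # Learning context: previous winner cells plus the motor signal.
        ctx = frozenset(self.winner_cells |
                        {("motor", m) for m in motor_bits})

        next_active, next_winners = set(), set()
        for col in active_columns:
            column_cells = {(col, i) for i in range(self.cells_per_column)}
            matched = predictive & column_cells
            if matched:
                next_active |= matched
                winner = next(iter(matched))
            else:
                next_active |= column_cells  # unpredicted column bursts
                if col not in self.winner_for_column:
                    self.winner_for_column[col] = self._pick_winner(col)
                winner = self.winner_for_column[col]
            self.winner_for_column[col] = winner  # hysteresis
            next_winners.add(winner)
            if learn and ctx and ctx not in self.segments[winner]:
                self.segments[winner].append(ctx)
        self.active_cells, self.winner_cells = next_active, next_winners
        return next_active
```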
In a nutshell, this modification converts the algorithm from essentially a first-order sensory-motor temporal memory to a contextual sensory-motor temporal memory, while still making good predictions in the face of random motor behavior.
Benefits:
- Simpler. It's more similar to basic temporal memory, and it reduces the complexity of the inputs on the distal dendrites.
- Matches known neuroscience better. Lateral connections between cells in layer 4 have been observed, albeit fewer than those coming from other layers. This modification accounts for that observation.
- Makes more accurate predictions. Before this modification, the algorithm would predict all first-order transitions, regardless of which world was currently being observed. Now it makes more accurate, contextual predictions. More on this below.
Drawbacks:
- Makes extra predictions in the case of self-movements. This is a minor disadvantage and may not actually be a problem. More on this below.
- Relies more on hysteresis. If hysteresis doesn't work correctly, it could break predictions. Before this modification, the algorithm wasn't as sensitive to errors in the learn-on-one-cell mechanism, because it was essentially a first-order sensory-motor temporal memory.
In the case of shared elements between worlds, the algorithm before modification would make extra predictions. For example, with two worlds `ABCD` and `DCBA`, seeing `B` and moving `+1` would predict both `C` and `A`. This is because sensory-motor predictions were first-order.
With the modification, if you're currently moving in world `DCBA` and you move `+1` from `B`, it will predict only `A`. This is because the `B` cells active in `DCBA` are different from the `B` cells active in `ABCD`, and these cells, together with the motor signal, cause the specific `A` cells to become active.
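Using the toy sketch above, here is this shared-element example worked end to end. Columns stand in for elements, and the motor command is the single bit `+1`; the setup is illustrative, not from the actual tests.

```python
tm = SensorimotorTMSketch(cells_per_column=4)
A, B, C, D = 0, 1, 2, 3   # one column per element, for simplicity

def train(tm, sequence, passes=2):
    """Walk a world left to right (movement +1) a few times."""
    tm.reset_world()
    for _ in range(passes):
        tm.reset_sequence()
        tm.compute({sequence[0]}, set())
        for col in sequence[1:]:
            tm.compute({col}, {+1})

train(tm, [A, B, C, D])   # world ABCD
train(tm, [D, C, B, A])   # world DCBA

# Replay the DCBA context, then ask what "+1 from B" predicts.
tm.reset_sequence()
tm.compute({D}, set(), learn=False)
tm.compute({C}, {+1}, learn=False)
tm.compute({B}, {+1}, learn=False)
print(tm.predict({+1}))   # {(0, 1)}: only the A cell learned in DCBA,
                          # not ABCD's C cell (2, 0)
```

Because `DCBA` was learned on fresh cells, the `B` cells active in this replay carry the `DCBA` context, and only the matching `A` cells are predicted.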
If you allow movements of `0`, or movements that go from a sensory input back to itself, then the algorithm with modification will learn extraneous connections and make extra predictions.
Imagine two worlds, `ABCD` and `DCBA`, but you allow movements of `0` (moving from `B` in either world with a movement of `0` would result in `B`). Let's say you've learned all transitions in `ABCD` and are learning `DCBA`. When on `B` for the first time in this world, all `B` cells will burst. Now, if you make a movement of `0`, the `B` cells from `ABCD` will become predicted.
Now we actually see `B`, and the predicted cells become active. These `B` cells, which actually represent the `B` in `ABCD`, will now learn transitions in the `DCBA` world. Thus, we will make extra predictions in the future.
This may not actually be a problem, if we assume that movements of value `0` don't make much sense in the real world. Either way, the worst that will happen is a few extraneous connections.
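Continuing with the same toy sketch (and the same illustrative column assignments), the trace below reproduces this failure mode: after training `ABCD` with a `0`-movement at `B` and then exploring `DCBA`, `ABCD`'s `B` cell gets re-activated inside `DCBA` and learns a transition there.

```python
tm = SensorimotorTMSketch(cells_per_column=4)
A, B, C, D = 0, 1, 2, 3

# World ABCD, trained twice, including a 0-movement at B (B -> B).
tm.reset_world()
for _ in range(2):
    tm.reset_sequence()
    tm.compute({A}, set())
    tm.compute({B}, {+1})
    tm.compute({B}, {0})      # self-movement: 0 from B lands back on B
    tm.compute({C}, {+1})
    tm.compute({D}, {+1})

# First exposure to DCBA: B is new in this world, so all B cells burst...
tm.reset_world()
tm.compute({D}, set())
tm.compute({C}, {+1})
tm.compute({B}, {+1})
print(tm.predict({0}))        # {(1, 0)}: ABCD's B cell is predicted

# ...the 0-movement then activates ABCD's B cell inside DCBA,
# and the next step wires a DCBA transition onto it.
tm.compute({B}, {0})
tm.compute({A}, {+1})

# Back in ABCD: from B, moving +1 now predicts ABCD's C cell *and*
# the A cell learned in DCBA -- an extraneous prediction.
tm.reset_sequence()
tm.compute({A}, set(), learn=False)
tm.compute({B}, {+1}, learn=False)
print(tm.predict({+1}))       # the C cell (2, 0) and the A cell (0, 1)
```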
We implemented this modification to the sensory-motor temporal memory algorithm, and examined the results of our integration and capacity tests.
The integration tests reported better predictions, as expected, with fewer extra predictions in the case of multiple worlds with shared elements. They also reported extra predictions in the case of learning with self-movements (movements of value `0`).
The capacity test results were unchanged. This was expected, since all elements in the capacity tests are unique.