The memory-prediction framework has emphasized the role of stored memories in the most basic operations of the brain. Obviously, the approach presented here makes heavy use of computer memory to model both neural spaces and the functions that are computed on them. At first glance, the role of prediction is not obvious.
It is hidden in the innocuous notion of continuity, or local constancy. Naïve physics tells us that spatial motion is continuous: large objects do not appear or disappear from one moment to the next, and acceleration is confined to a fairly small range. This continuity is what enables us to predict what will happen next. Successful prediction is just a special case of the construction of locally constant functions, namely functions that depend on time. The effects of gluing along a temporal index will look like prediction.
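To make the special case concrete, here is a minimal sketch of prediction as local constancy: the predictor simply extends the current value forward in time, so its error at each step is exactly the signal's failure to be locally constant. The function names are illustrative assumptions, not notation from the text.

```python
def predict_next(history):
    """Locally constant predictor: the best zero-order guess for the
    next sample, under the continuity assumption, is the current one."""
    return history[-1]

def prediction_errors(signal):
    """Step-by-step errors of the locally constant predictor.

    Each error measures how far the signal departs from local
    constancy between consecutive samples."""
    return [abs(signal[t + 1] - signal[t]) for t in range(len(signal) - 1)]
```

On a signal that really is continuous and slowly varying, these errors stay small, which is the sense in which local constancy "looks like" successful prediction.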
If we interpret the index i of the h_i in the logo as time, the pattern engine will work just as well. The calculation of the signal that enables feedback becomes even simpler in this case: an integrator that is reset by vetoes will do the trick.
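A minimal sketch of such an integrator follows. The names `confirmations` and `vetoes`, and the convention that the feedback signal is the integrator's current level, are my assumptions; the source only says that the integrator accumulates until a veto resets it.

```python
def integrate_with_veto(confirmations, vetoes):
    """Accumulate confirmed predictions; reset to zero on a veto.

    confirmations[t] -- True if the prediction at step t was confirmed
    vetoes[t]        -- True if a veto arrived at step t (takes priority)

    Returns the integrator's level after each step, which here plays
    the role of the feedback signal."""
    level = 0
    levels = []
    for confirmed, veto in zip(confirmations, vetoes):
        if veto:
            level = 0          # a single veto wipes out the accumulated evidence
        elif confirmed:
            level += 1         # each confirmation strengthens the feedback
        levels.append(level)
    return levels
```

The simplicity is the point: over a temporal index, "how long has this pattern held?" is all the feedback calculation needs.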
There is also a connection with Slow Feature Analysis.
Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features.
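SFA's objective can be stated in a few lines: among features of unit variance, prefer the one whose mean squared temporal derivative is smallest. The following sketch implements just that objective function, not a full SFA algorithm; the function names are mine.

```python
import math

def normalize(y):
    """Rescale a signal to zero mean and unit variance, so that
    slowness comparisons are fair (SFA's unit-variance constraint)."""
    mean = sum(y) / len(y)
    var = sum((v - mean) ** 2 for v in y) / len(y)
    return [(v - mean) / math.sqrt(var) for v in y]

def slowness(y):
    """SFA's objective: the mean squared temporal difference.
    Smaller values mean a more slowly varying feature."""
    return sum((y[t + 1] - y[t]) ** 2 for t in range(len(y) - 1)) / (len(y) - 1)
```

Comparing a slow and a fast sinusoid, both normalized to unit variance, the slow one scores lower, which is exactly the sense in which SFA extracts temporally invariant features.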
There is a very simple way to extract slowly varying features from rapidly varying input: Just take averages over fibres!
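A minimal sketch of this averaging, assuming "fibres" are parallel scalar signals sampled at the same times (the function name is an illustrative choice):

```python
def average_over_fibres(fibres):
    """Pointwise average across a bundle of parallel signals.

    Fast fluctuations that are independent across fibres tend to
    cancel in the mean, so the average varies more slowly than any
    individual fibre."""
    n = len(fibres)
    length = len(fibres[0])
    return [sum(f[t] for f in fibres) / n for t in range(length)]
```

In the extreme case where the fast components cancel exactly, the average is constant: two fibres alternating in antiphase, [1, 0, 1, 0] and [0, 1, 0, 1], average to the flat signal [0.5, 0.5, 0.5, 0.5].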