A pattern engine working as an online classifier has some peculiar properties that resemble well-known subjective properties of perception.
Consider a letter in front of the small camera window, connected by different discretization maps to the address inputs. At first, different memory units give different votes, veto is high, and the majority output is unreliable. We may gate the symbol output of the voting unit with a combination of majority, veto, or entropy that turns the output to ⊥ in this case. Small fluctuations may raise the majority and lower the veto. Suddenly a threshold is reached, feedback sets in, the majority jumps to 100%, and the input is recognized. This happens almost instantaneously.
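A minimal sketch of such a gate, in Python. The statistics and thresholds here (majority fraction, runner-up strength as "veto", vote entropy, and the cutoff values) are my own illustrative choices, not prescribed by the text:

```python
from collections import Counter
import math

BOTTOM = None  # stands in for the ⊥ output

def vote_stats(votes):
    """Leading symbol, majority fraction, runner-up fraction ('veto'),
    and Shannon entropy of the vote distribution."""
    counts = Counter(votes)
    total = len(votes)
    ranked = counts.most_common()
    majority = ranked[0][1] / total
    veto = ranked[1][1] / total if len(ranked) > 1 else 0.0
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return ranked[0][0], majority, veto, entropy

def gated_output(votes, maj_min=0.6, veto_max=0.25, ent_max=1.0):
    """Emit the majority symbol only when the vote is decisive; otherwise ⊥."""
    symbol, majority, veto, entropy = vote_stats(votes)
    if majority >= maj_min and veto <= veto_max and entropy <= ent_max:
        return symbol
    return BOTTOM
```

With these thresholds, eight of ten units voting 'A' passes the gate, while seven of ten (veto too high) still yields ⊥ — the gate is what makes the output all-or-nothing.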
There are no in-betweens because recognition activates the feedback path. This blends well with the subjective perception of very, very small printed text or severely pixelated faces. At first, all is blurred. Suddenly, the image seems to become sharper and we recognize a face. See, for example, Michael Bach’s Optical Illusions and Visual Phenomena.
After the threshold has been exceeded, slowly changing input drags along the majority decision. Once a decision has been reached, it stays stable until the feedback path is interrupted somehow. I think that the gamma cycle in the visual cortex, strongly linked to microsaccades, can provide this strobing.
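This latching behaviour is essentially hysteresis, as in a Schmitt trigger: recognition needs a high onset threshold, but once the feedback path is active the decision persists at much lower support. A toy sketch (the two thresholds are illustrative assumptions):

```python
class LatchedDecision:
    """Hysteresis loop for a recognition decision: latch at on_thresh,
    then follow ('drag along') the input until support falls below off_thresh."""

    def __init__(self, on_thresh=0.8, off_thresh=0.4):
        self.on_thresh = on_thresh
        self.off_thresh = off_thresh
        self.state = None  # ⊥: nothing recognized yet

    def step(self, symbol, majority):
        if self.state is None:
            if majority >= self.on_thresh:
                self.state = symbol      # threshold exceeded: feedback sets in
        elif majority < self.off_thresh:
            self.state = None            # feedback path interrupted
        else:
            self.state = symbol          # slowly changing input drags the decision
        return self.state
```

Driving this with a slowly morphing input shows the asymmetry: a majority of 0.55 is ignored before latching but happily sustains a decision after it.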
The orientation-dependent cells in the visual cortex that respond to edges of certain orientations should exhibit a similar behaviour. If the orientation of an edge varies slowly, the response should stay constant. The center of tuning should be draggable across the surface of the brain. If the speed of change becomes too large, feedback stops at some point. After a rapid (sigmoidal?) transition, another orientation is recognized. Activation is switched over to the next microcolumn, and this transition should be phase-locked to the local gamma cycle. Has anybody done such an experiment? At least, some connection between gamma cycle and microsaccades has been noted here.
Up to this point, I have discussed stability in a single pattern engine. Fast learning corresponds to updating the stored functions fᵢ of the logo. The functions hᵢ that define the spaces Xᵢ are fixed. They correspond to the complex wiring of axons to dendrites. Each cell in a sheet of cortical tissue sees a slightly different projection of axonal input into its own small state space. It is well known that there is not enough genetic information to determine the exact connectivity. Experiments have shown a considerable amount of randomness. My OCR examples show that a pattern engine can work with a wide range of hᵢ: X ⟶ Xᵢ.
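The tolerance for random hᵢ can be illustrated with an n-tuple-style toy engine: each unit's hᵢ is a random tuple of input positions (standing in for unspecified axonal wiring), and fᵢ is a lookup table from addresses in Xᵢ to labels. All names and sizes below are my own:

```python
import random

def make_random_maps(n_units, input_bits, tuple_size, seed=0):
    """Each h_i projects the full input X onto a small random state space X_i
    (a tuple of input positions); the randomness models unspecified wiring."""
    rng = random.Random(seed)
    return [tuple(rng.sample(range(input_bits), tuple_size))
            for _ in range(n_units)]

class PatternEngine:
    def __init__(self, maps):
        self.maps = maps
        self.f = [dict() for _ in maps]  # f_i: address in X_i -> label (fast learning)

    def _address(self, i, x):
        return tuple(x[j] for j in self.maps[i])

    def train(self, x, label):
        for i in range(len(self.maps)):
            self.f[i][self._address(i, x)] = label

    def classify(self, x):
        votes = [self.f[i].get(self._address(i, x)) for i in range(len(self.maps))]
        votes = [v for v in votes if v is not None]
        return max(set(votes), key=votes.count) if votes else None
```

Whatever random wiring the seed produces, training two distinct 16-bit patterns and presenting a one-bit-corrupted version still recovers the right label: the units whose tuples miss the corrupted bit carry the vote.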
The feedback that lies behind sudden perception and the struggle of sections in a sheaf of functions allow for a quite different kind of plasticity. In addition to the fast dynamical rule that updates the functions fᵢ, we may allow variation of the hᵢ: X ⟶ Xᵢ, too.
In a hierarchical system where sheets of pattern engines are stacked above each other, changing hᵢ without wreaking havoc is not a luxury. It is a necessity. The hᵢ of the higher levels are the outputs of lower levels. Changing f: X ⟶ Y will change one of the hᵢ in a higher stage.