Stability and Plasticity

 

A pattern engine working as an online classifier has some quite peculiar properties that resemble well-known subjective properties of perception.
Consider a letter in front of the small camera window, connected by different discretization maps to the address inputs. At first, different memory units give different votes, veto is high, and the majority output is unreliable. We may gate the symbol output of the voting unit with a combination of majority, veto, or entropy that turns the output to ⊥ in this case. Small fluctuations may raise majority and lower veto. Suddenly a threshold is reached, feedback sets in, majority jumps to 100%, and the input is recognized. This happens almost instantaneously.
There are no in-betweens, because recognition activates the feedback path. This blends well with the subjective perception of very, very small printed text or severely pixelated faces. At first, all is blurred. Suddenly, the image seems to become sharper and we recognize a face. See, for example, Michael Bach’s Optical Illusions and Visual Phenomena.
After the threshold has been exceeded, slowly changing input drags the majority decision along. Once a decision has been reached, it stays stable until the feedback path is interrupted somehow. I think that the gamma cycle in the visual cortex, strongly linked to microsaccades, can provide this strobing.
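
The gating and hysteresis described above can be sketched in a few lines of Python. This is only a toy illustration, not the actual voting unit; the threshold names `lock_at` and `release_at` are my own, and ⊥ is represented by `None`:

```python
from collections import Counter

BOTTOM = None  # stands for the undecided output ⊥

class GatedVoter:
    """Vote gating with hysteresis: the output stays ⊥ until the majority
    fraction exceeds lock_at; once a symbol is recognized, feedback keeps
    it locked as long as its share of the votes stays above release_at."""
    def __init__(self, lock_at=0.8, release_at=0.5):
        self.lock_at = lock_at
        self.release_at = release_at
        self.locked = BOTTOM

    def __call__(self, votes):
        counts = Counter(v for v in votes if v is not BOTTOM)
        if not counts:
            self.locked = BOTTOM
            return BOTTOM
        symbol, n = counts.most_common(1)[0]
        majority = n / len(votes)
        if self.locked is not BOTTOM and self.locked in counts:
            # feedback path: a reached decision is dragged along
            if counts[self.locked] / len(votes) >= self.release_at:
                return self.locked
        # no decision yet (or feedback broke off): require the high threshold
        self.locked = symbol if majority >= self.lock_at else BOTTOM
        return self.locked

voter = GatedVoter()
print(voter(['A', 'B', 'C', 'D']))   # unreliable votes: stays ⊥ (None)
print(voter(['A', 'A', 'A', 'A']))   # threshold reached: 'A' is recognized
print(voter(['A', 'A', 'B', 'C']))   # majority fell, but feedback holds 'A'
```

The asymmetry between `lock_at` and `release_at` is what produces the sudden, all-or-nothing transition: there is no input for which the output hovers between ⊥ and a symbol.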

The orientation-dependent cells in the visual cortex that respond to edges of a certain orientation should exhibit a similar behaviour. If the orientation of an edge varies slowly, the response should stay constant. The center of tuning should be draggable across the surface of the brain. If the speed of change becomes too large, feedback stops at some point. After a rapid (sigmoidal?) transition, another orientation is recognized. Activation is switched over to the next microcolumn, and this transition should be phase-locked to the local gamma cycle. Has anybody done such an experiment? At least, some connection between gamma cycle and microsaccades has been noted here.

Up to this point, I have discussed stability in a single pattern engine. Fast learning corresponds to updating the stored functions fi of the logo. The functions hi that define the spaces Xi are fixed. They correspond to the complex wiring of axons to dendrites. Each cell in a sheet of cortical tissue sees a slightly different projection of axonal input into its own small state space. It is well known that there is not enough genetic information to determine the exact connectivity. Experiments have shown a considerable amount of randomness. My OCR examples show that a pattern engine can work with a wide range of hi: X ⟶ Xi.
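
A toy version of such a pattern engine can make the division of labour concrete: the hi are fixed random projections, and fast learning touches only the tables fi. The class name and parameters below are illustrative, not the engine from my OCR examples:

```python
import random

class PatternEngine:
    """A toy n-tuple style engine: each memory unit i owns a fixed random
    map hi (a tuple of input positions, standing in for the partly random
    wiring of axons to dendrites) and a learned table fi: address -> symbol.
    Fast learning updates only the fi; the hi stay fixed."""
    def __init__(self, n_units=20, tuple_size=4, input_size=16, seed=0):
        rng = random.Random(seed)
        self.his = [tuple(rng.sample(range(input_size), tuple_size))
                    for _ in range(n_units)]
        self.fis = [dict() for _ in range(n_units)]

    def _address(self, hi, x):
        # hi: X -> Xi, a projection of the input into a small state space
        return tuple(x[j] for j in hi)

    def learn(self, x, symbol):
        for hi, fi in zip(self.his, self.fis):
            fi[self._address(hi, x)] = symbol

    def classify(self, x):
        votes = [fi.get(self._address(hi, x))
                 for hi, fi in zip(self.his, self.fis)]
        known = [v for v in votes if v is not None]
        return max(set(known), key=known.count) if known else None

A = [1] * 8 + [0] * 8
B = [0] * 8 + [1] * 8
eng = PatternEngine()
eng.learn(A, 'A')
eng.learn(B, 'B')
noisy = A[:]
noisy[0] = 0                    # flip one input bit
print(eng.classify(noisy))      # most units still vote 'A'
```

Any reasonable choice of seed gives a different family of hi, but the engine still classifies: that is the insensitivity to the exact connectivity mentioned above.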

The feedback that lies behind sudden perception, and the struggle of sections in a sheaf of functions, allow for a quite different kind of plasticity. In addition to the fast dynamical rule that updates the functions fi, we may allow variation of the hi: X ⟶ Xi, too.

  • If we remove a single hi: X ⟶ Xi, the pattern engine will still work.
  • If we add a new hi: X ⟶ Xi and a new memory unit to store fi, the feedback path will fill up the memory unit with useful information. Recognition will become easier and faster.
  • We may even change the encoding of data. If we apply a hash function to an address, reversible or not, the contents of the memory unit become garbage. The feedback path will overwrite useless information. Soon, the memory unit will again contribute to recognition.
    Such changes of encoding occur frequently in living brains; see the PLoS ONE paper “Repeated Stimulus Exposure Alters the Way Sound Is Encoded in the Human Brain”. In fact, dynamic changes of neural encoding are an integral part of neural dynamics. Any attempt to derive a “signal”, or to “decode” a neural message from a few neurons, is doomed to fail.
  • If an Xi is too small, it will not be able to give useful contributions. The memory entries are constantly overwritten. This local signal may induce the growth of new synaptic connections. The space Xi is enlarged.
  • If an Xi is too large, the amount of memory that is necessary to store fi may not be available. The space Xi may be split by projections into smaller spaces, just like X is split by the hi.
  • We may modify an Xi by the formation of a product with the space Y of another pattern engine. Veto will disappear because we will form a covering space.
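
The re-encoding case in the list above can be played through concretely: change the address encoding of one memory unit, observe that its stored content becomes unreachable garbage, and let the feedback path refill it. A minimal sketch with made-up names (`codes`, `feedback_write`); the per-unit code tag stands in for an arbitrary hash of the address:

```python
import random

rng = random.Random(1)
N, T, U = 16, 4, 10                 # input bits, tuple size, memory units
his = [tuple(rng.sample(range(N), T)) for _ in range(U)]
fis = [dict() for _ in range(U)]
codes = [None] * U                  # per-unit address encoding (None = identity)

def address(i, x):
    a = tuple(x[j] for j in his[i])
    return (codes[i], a)            # a changed code makes old keys unreachable

def classify(x):
    votes = [fis[i].get(address(i, x)) for i in range(U)]
    known = [v for v in votes if v is not None]
    return max(set(known), key=known.count) if known else None

def feedback_write(x, symbol):
    # the feedback path: a recognized symbol is written back into every unit
    for i in range(U):
        fis[i][address(i, x)] = symbol

A = [1] * 8 + [0] * 8
feedback_write(A, 'A')              # initial learning

codes[0] = 'rehashed'               # change the encoding of unit 0
assert fis[0].get(address(0, A)) is None    # its old content is useless now

symbol = classify(A)                # the other nine units still recognize A
feedback_write(A, symbol)           # feedback overwrites the garbage
assert fis[0][address(0, A)] == 'A' # unit 0 contributes to recognition again
```

The same loop covers removal and addition of an hi: a removed unit simply stops voting, and a freshly added unit starts out empty and is filled by `feedback_write` exactly like the rehashed one.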

In a hierarchical system where sheets of pattern engines are stacked above each other, changing the hi without wreaking havoc is not a luxury. It is a necessity. The hi of the higher levels are the outputs of lower levels. Changing f: X ⟶ Y will change one of the hi in a higher stage.

Date Posted: 22 Aug 2009 @ 03:36 PM
Last Modified: 26 Feb 2012 @ 12:37 PM
Posted By: Hardy