Hamming Distance shows the erratic behavior of the usual Hamming distance in the space of small bitmaps. This space is a product of single-bit spaces that are just too small. The Hamming distance on Prod(i=1,n,Xi) behaves much better if the spaces Xi are not too small.
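A minimal sketch of the point, under my own reading of it: in the product metric d(x, y) = sum over i of d_i(x_i, y_i), single-bit factors force every component distance to be 0 or 1, so any perturbation moves a point by a full unit per factor. Larger factors admit graded component distances. The function names and the choice of {0, ..., 7} factors below are illustrative, not from the post.

```python
# Toy product metric d(x, y) = sum_i d_i(x_i, y_i).
def product_distance(x, y, d):
    return sum(d(a, b) for a, b in zip(x, y))

# Single-bit factors: each d_i is all-or-nothing, so the sum is the
# usual Hamming distance and any flip is a full-unit jump.
bit = lambda a, b: int(a != b)

# Larger factors, e.g. X_i = {0, ..., 7} with a graded metric: a small
# nudge in one component moves the point only a small fraction of the
# factor's diameter.
graded = lambda a, b: abs(a - b)

print(product_distance([0, 1, 0, 1], [1, 1, 0, 1], bit))     # 1 -- the maximum a single factor allows
print(product_distance([3, 5, 1, 7], [4, 5, 1, 7], graded))  # 1 -- a small step out of a diameter of 7
```

With graded factors there is room between "identical" and "maximally different" in every coordinate, which is what the single-bit product lacks.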
Feature Detectors illustrates how cells that are tuned to edges at various orientations may arise in the visual cortex. Although the model is crude, it does not require carefully crafted connection topologies and delicate adjustments. All that is required is some concrete implementation of a Pattern Engine that will grow random germs of functions under the influence of sensory data.
A puzzling problem is how neural ensembles provide a uniform, high-resolution visual representation in spite of irregularities in the RFs of individual cells. This problem was approached by simultaneously mapping the RFs of hundreds of primate retinal ganglion cells. As observed in previous studies, RFs exhibited irregular shapes that deviated from standard Gaussian models. Surprisingly, these irregularities were coordinated at a fine spatial scale: RFs interlocked with their neighbors, filling in gaps and avoiding large variations in overlap.
Consider random germs in a pattern engine. Each germ will grow fast initially, until lateral inhibition (veto) stops the growth. A slight variation may either increase or decrease the veto. The veto may also vanish; in that case the germ will grow a little further. We end up with areas of constancy whose borders are interlocked in complex patterns. I believe that the so-called feature detectors in early vision arise in this way. Their erratic complexity has puzzled researchers for a long time.
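A toy simulation of this kind of growth, assuming a much cruder mechanism than the actual pattern engine: germs planted at random grid cells expand breadth-first into free cells, and a cell already claimed by another germ acts as the veto. The resulting areas of constancy interlock along irregular borders.

```python
import random
from collections import deque

def grow_germs(width, height, n_germs, seed=0):
    """Grow random germs on a grid until neighboring germs veto growth.

    Toy model only: 'growth' is breadth-first expansion into free cells,
    and the 'veto' is a cell already claimed by a different germ.
    """
    rng = random.Random(seed)
    grid = [[None] * width for _ in range(height)]
    frontier = deque()
    for g in range(n_germs):
        x, y = rng.randrange(width), rng.randrange(height)
        grid[y][x] = g
        frontier.append((x, y))
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height and grid[ny][nx] is None:
                grid[ny][nx] = grid[y][x]   # free cell: grow a little further
                frontier.append((nx, ny))
    return grid

areas = grow_germs(40, 12, 8)
for row in areas:                           # print the interlocked territories
    print("".join(str(cell) for cell in row))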
Our data reveal neural mechanisms in musicians that are able to detect errors prior to the execution of erroneous movements. The underlying mechanism probably relies on predictive control processes that compare the predicted outcome of an action with the action goal.
In a pattern engine that glues along a temporal index, the entropy (veto) signal indicates that we shouldn’t generalize. Warning: Singularity ahead!
I’ve just read the Wikipedia entry on neural darwinism.
The last part of the theory attempts to explain how we experience spatiotemporal consistency in our interaction with environmental stimuli. Edelman proposes a model of reentrant signaling whereby a disjunctive, multimodal sampling of the same stimulus event correlated in time leads to self-organizing intelligence. Put another way, multiple neuronal groups can be used to sample a given stimulus set in parallel and communicate between these disjunctive groups with incurred latency.
The index i in (hi, Xi) may be time itself. In this case, the calculation of majority and veto in the Pattern Engine can be done by a simple integrator. When the integrated majority vote is high enough, the result is fed back into storage. A veto or high feedback activity will quench the feedback path.
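A sketch of such an integrator; the function name, the leak rate, and the threshold are my own assumptions, not part of the post. Votes accumulate in a leaky integrator; crossing the threshold feeds the result back, unless a veto or already-active feedback quenches the feedback path.

```python
def integrate(votes, vetoes, leak=0.5, threshold=1.5):
    """Leaky integrator for majority votes along a temporal index.

    `leak` and `threshold` are illustrative parameters.  Feedback fires
    when the integrated majority vote is high enough; a veto, or feedback
    that is already active, quenches the feedback path for that step.
    """
    level = 0.0
    feedback = []
    for vote, veto in zip(votes, vetoes):
        level = leak * level + vote                    # integrate with decay
        quenched = veto or (feedback and feedback[-1]) # veto or high feedback
        feedback.append(level >= threshold and not quenched)
    return feedback

print(integrate([1, 1, 1, 1], [False, False, False, False]))
# -> [False, True, False, True]: fires once the vote builds up,
#    then its own activity quenches the next step
```

A veto at any step suppresses feedback for that step even when the integrated level is high, which is the "don't generalize" signal from above.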
Update: We may start with some spaces Xi and increase their number by:
The effects of gluing along a temporal index will look like prediction.
The theory of neural darwinism has been criticised on the ground that there is selection but no reproduction in this system (Wikipedia). This is food for thought.
Unfortunately, there is some uncanny clash of terminology.
I will simply list the worst offenders:
Of course, the poetic, colorful language of mathematics exposes some gems, too.
About a year ago I stumbled upon the book ‚Die Zukunft der Intelligenz‘ (the German edition of Jeff Hawkins’ On Intelligence) on a bookshelf in my local library. I, too, was disappointed with the state of artificial intelligence and the many approaches that have led, again and again, from initial euphoria into dead ends.
Suddenly I realized that my work might be new,
that there is no scholarly consensus on how the brain works,
and that the simple programs and ideas I was playing with might be the missing links.
Next: The Logo