Kohonen Maps

 

A Kohonen map, also called a self-organizing map (SOM), is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space.

This abstract description of Kohonen maps tries to capture the essentials while leaving out details. Note that my own approach is covered by these terms, too. The use of several maps to provide both high resolution and the gluing data for local patches, however, seems to be new.
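For readers who have not met the update rule before, here is a minimal sketch of one training step of an ordinary Kohonen map in Python/NumPy. The array shapes, decay schedules, and parameter values are illustrative choices of mine, not part of the construction discussed below.

```python
import numpy as np

def som_step(weights, x, t, n_steps, sigma0=3.0, lr0=0.5):
    """One training step of a standard Kohonen map.

    weights : (H, W, d) array of weight vectors on an H x W grid
    x       : (d,) input sample
    t, n_steps : current step and total number of steps (for decay)
    """
    H, W, _ = weights.shape
    # Best-matching unit: the grid node whose weight vector is closest to x.
    dist = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dist), (H, W))

    # Gaussian neighborhood on the grid, shrinking over time.
    sigma = sigma0 * np.exp(-t / n_steps)
    lr = lr0 * np.exp(-t / n_steps)
    gy, gx = np.mgrid[0:H, 0:W]
    grid_dist2 = (gy - bmu[0]) ** 2 + (gx - bmu[1]) ** 2
    h = np.exp(-grid_dist2 / (2.0 * sigma ** 2))

    # Pull every node toward x, weighted by its neighborhood value.
    weights += lr * h[:, :, None] * (x - weights)
    return weights
```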

Self-Organizing Map

Kohonen’s approach was inspired by the discovery of detailed topographical maps on the surface of the brain. The two-dimensional organization is a prevailing feature of the brain.
I propose a branched covering of 2D space instead. In the image above, the coloured areas should try to avoid the border by branching out into the covering. The dynamical rule that governs the motion will be the same as in an ordinary Kohonen map. The second dynamical rule, minimization of entropy production, governs the branching that modifies the underlying space.

An analysis of such a dynamical system, where the trajectories taken will modify the state space itself, is far beyond my mathematical means. Of course, we may turn to a more conventional situation by embedding the system in a much larger space, where the structure of the covering becomes a set of dynamical variables. Parameterizing the set of all covering spaces is not easy. Curvature and connectivity may change during the evolution of the dynamical system. Shudder.

Simulation, however, remains relatively simple. Replacing the grid of weight vectors, with its cellular-automaton neighborhood, by such a covering is computationally cheap, as sketched below.
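To make that claim concrete, here is a sketch, under my own assumptions, of how the grid neighborhood could be swapped for an arbitrary adjacency structure such as a branched covering: neighborhood distance becomes a hop count on a graph. The covering itself and the entropy-production rule that would reshape it are not modeled here, and the names and parameters (adj, sigma, lr) are illustrative.

```python
import numpy as np
from collections import deque

def graph_neighborhood(adj, bmu, sigma):
    """Neighborhood weights from BFS hop counts on an arbitrary graph.

    adj : dict mapping node id -> list of neighbor ids (for example a
          branched covering of the plane instead of a rectangular grid)
    bmu : id of the best-matching unit
    """
    hops = {bmu: 0}
    queue = deque([bmu])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return {u: np.exp(-d ** 2 / (2.0 * sigma ** 2)) for u, d in hops.items()}

def som_step_on_graph(weights, adj, x, sigma=2.0, lr=0.3):
    """One Kohonen update where the nodes live on a graph, not a grid."""
    bmu = min(weights, key=lambda u: np.linalg.norm(weights[u] - x))
    h = graph_neighborhood(adj, bmu, sigma)
    for u, hu in h.items():
        weights[u] += lr * hu * (x - weights[u])
    return weights
```

The only change relative to the grid version is how the neighborhood weights are computed, which is why the replacement is cheap.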
I’ve not done experiments on this idea because I do not believe in the set ℝ. The approach presented here does not make use of vectors. Dimension as a measure of size is replaced by log(# of points). The global structure of a vector space and the superposition principle are not used at all. The map (h_i): X → X_1 × ⋯ × X_n, with a vector-space norm on the left and the Hamming distance on the right, preserves the local structure of X by construction. The global structure is initially unknown.
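As a toy reading of this construction (my own assumption, not the definition used here), each X_i could be taken to be the index set of a small map, with h_i sending a point to the index of its best-matching unit; the Hamming distance then counts the coordinates in which two codes differ.

```python
import numpy as np

def encode(x, maps):
    """One possible reading of (h_i): X -> X_1 x ... x X_n.
    `maps` is a list of (k_i, d) weight arrays; each coordinate of the
    code is the index of the best-matching unit in one map."""
    return tuple(int(np.argmin(np.linalg.norm(m - x, axis=1))) for m in maps)

def hamming(a, b):
    """Hamming distance on the product: the number of coordinates
    in which two codes differ."""
    return sum(ai != bi for ai, bi in zip(a, b))

# Two points that are close in the vector-space norm on the left tend to
# fall into the same cell of most maps, so their codes differ in few
# coordinates: local structure is preserved by construction.
```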

Date Posted: 04 Apr 2009 @ 01:22 AM
Last Modified: 01 Jun 2010 @ 07:40 PM
Posted By: Hardy