This is just an unordered list of pointers to various documents on the web. I am trying to grasp where my approach fits in. Or does not fit in; I may be chasing down a blind alley with no heed for the signs.
Two main goals have been proposed for algorithms in unsupervised learning: density estimation and dimensionality reduction.
Our algorithm addresses a longstanding problem at the intersection of geometry and statistics: to compute a low dimensional embedding of high dimensional data assumed to lie on a nonlinear manifold. Many types of high dimensional data can be characterized in this way—for example, images generated by different views of the same three dimensional object. The use of manifolds to represent continuous percepts is also a recurring theme in computational neuroscience [Seung and Lee (2000)].
Two canonical forms of dimensionality reduction are the methods of principal component analysis (PCA) [Jolliffe (1986)] and multidimensional scaling (MDS) [Cox and Cox (1994)].
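For concreteness, here is a minimal sketch of PCA via the standard SVD route (my own toy version, not Jolliffe's formulation): center the data, then project onto the top right-singular vectors.

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal axes."""
    Xc = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in the top-k subspace
```

MDS would instead start from a matrix of pairwise distances; for Euclidean distances the classical variant recovers the same subspace as PCA.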
Indeed, the LLE algorithm operates entirely without recourse to measures of distance or relation between faraway data points.
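The locality claim is visible in the algorithm itself: every computation involves only each point's nearest neighbors. A rough numpy sketch of the standard three LLE steps (neighbors, local reconstruction weights, bottom eigenvectors); this is a toy version for illustration, not Roweis and Saul's reference implementation, and the regularization constant is an arbitrary choice of mine:

```python
import numpy as np

def lle(X, n_neighbors, n_components):
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D[i])[1:n_neighbors + 1]        # skip the point itself
        Z = X[idx] - X[i]                                # neighbors centered on x_i
        C = Z @ Z.T                                      # local Gram matrix
        C += np.eye(n_neighbors) * 1e-3 * np.trace(C)    # regularize (ad hoc)
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, idx] = w / w.sum()                          # weights sum to one
    # global embedding from the bottom eigenvectors of (I-W)^T (I-W)
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]                   # drop the constant eigenvector
```

Note that the distance matrix D is used only to find neighbors; no cost term ever compares faraway points.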
The map h: X → X′, with the (normalized) Hamming metric on the product X′ = X_1 × … × X_n, preserves the local metric structure. Larger distances, however, are unimportant: the closed ball B_1 of radius 1 is the entire space X′. If the object of interest is not the manifold itself but a classifier that factors through it, we might try to skip the manifold construction. The classifier is the only global object we will need.
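The remark about B_1 is just the observation that the normalized Hamming distance is bounded by 1, so the closed unit ball contains every point. A two-line check:

```python
def hamming(x, y):
    """Normalized Hamming distance: fraction of coordinates that differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y)) / len(x)
```

Even two tuples differing in every coordinate are at distance exactly 1, hence inside the closed ball B_1 around any point.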
Abstract. We present a minimal spiking network that can polychronize, i.e., exhibit reproducible time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids. The network consists of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP); a ready-to-use MATLAB program and C++ program code is included. It exhibits sleep-like oscillations, gamma (40 Hz) rhythms, conversion of firing rates to spike-timings, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity. To our surprise, the number of co-existing polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system. We speculate on the significance of polychrony to the theory of neuronal group selection (TNGS, Neural Darwinism), cognitive neural computations, binding and gamma rhythm, mechanisms of attention, and consciousness as “attention to memories”.
The model is a large system of difference-differential equations. Its state space depends on the connectivity of the delay lines, so it is already unimaginably complex from a geometric point of view. The theory of dynamical systems does not give much insight. Statistical mechanics doesn't cut it either.
Polychronous groups that are growing, moving, and competing with each other may be viewed as sections of a sheaf over a suitable topological space. Making these vague ideas precise is well beyond my mathematical means.
MCL is short for the Markov Cluster Algorithm, a fast and scalable unsupervised clustering algorithm for graphs based on simulation of (stochastic) flow in graphs. The algorithm was invented/discovered by Stijn van Dongen (…) at the Centre for Mathematics and Computer Science (also known as CWI) in the Netherlands. The PhD thesis Graph clustering by flow simulation is centered around this algorithm, the main topics being the mathematical theory behind it, its position in cluster analysis and graph clustering, issues concerning scalability, implementation, and benchmarking, and performance criteria for graph clustering in general.
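The flow simulation alternates two operations on a column-stochastic matrix: expansion (matrix powers, which spread flow along walks) and inflation (entrywise powers followed by renormalization, which strengthen already-strong flows). A minimal dense sketch, assuming the usual defaults of expansion 2 and inflation 2 (parameter names and the fixed iteration count are mine, not van Dongen's):

```python
import numpy as np

def mcl(A, expansion=2, inflation=2.0, iters=50):
    """Toy Markov Cluster iteration on an adjacency matrix A."""
    M = A + np.eye(A.shape[0])            # add self-loops
    M = M / M.sum(axis=0)                 # column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)  # expand: spread flow
        M = M ** inflation                        # inflate: favor strong flows
        M = M / M.sum(axis=0)                     # renormalize columns
    return M
```

In the converged matrix, the surviving nonzero rows act as cluster attractors; reading off which attractor each column's mass lands on recovers the clustering. The real mcl program works on sparse matrices with pruning, which is where the scalability comes from.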
The human brain can adapt to changing demands even in adulthood, but MIT neuroscientists have now found evidence of it changing with unsuspected speed. Their findings suggest that the brain has a network of silent connections that underlie its plasticity.
“So the visual cortex changes its response almost immediately to sensory deprivation and to new input,” Kanwisher explained. “Our study shows the stunning ability of the brain to adapt to moment-to-moment changes in experience even in adulthood.”
The working hypothesis of this project assumes that the cortex adapts itself in order to make the responses of its neurons vary slowly in time. This is motivated by a simple observation: while the environment varies on a relatively slow timescale, the sensory input (in our case, the response of receptors on the retina) consists of raw direct measurements that are very sensitive even to small transformations of the environment or of the state of the observer. For example, a small translation or rotation of an object in the visual scene can lead to a dramatic change of the light intensity at a particular position on the retina. The sensory signal thus varies on a faster timescale than the environment. The working hypothesis implies that the cortex actively extracts slow signals out of its fast input in order to recover information about the environment and to build up a consistent internal representation. This principle is called the slowness principle.
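A minimal linear illustration of the slowness principle (my own toy, not this project's actual model): mix a slow signal and a fast one into two raw "sensor" channels, whiten, and then pick the unit-variance linear combination whose temporal derivative has the smallest variance. That combination recovers the slow signal.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 500)
slow = np.sin(t)                                   # slow "environment" variable
fast = np.sin(29 * t)                              # fast nuisance signal
X = np.stack([slow + fast, slow - fast], axis=1)   # raw "sensory" channels

# whiten the input
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
d, E = np.linalg.eigh(cov)
Z = Xc @ E / np.sqrt(d)

# slowest direction: smallest eigenvector of the derivative covariance
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / len(dZ)
w = np.linalg.eigh(dcov)[1][:, 0]
slow_est = Z @ w                                   # recovered slow component
```

This is essentially linear slow feature analysis; the full method applies the same two steps in a nonlinearly expanded feature space.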