Vernier acuity is also called hyperacuity, because its resolution is 5–10× finer than that of ordinary visual acuity. Hyperacuity is the secret behind the precision of a vernier caliper.
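The caliper analogy is worth unpacking. Here is a small sketch, assuming the common metric design (ten vernier divisions spanning nine main-scale millimetres), of how coincidence detection between two scales yields a tenfold gain in reading precision:

```python
# Vernier readout: the main scale has 1 mm graduations; the vernier
# scale has 10 divisions, each 0.9 mm wide. The vernier line that best
# aligns with a main-scale line encodes the tenths of a millimetre.

def vernier_reading(true_length_mm):
    main = int(true_length_mm)      # whole millimetres from the main scale
    frac = true_length_mm - main    # the fraction we want to recover
    # Vernier line k sits at offset frac - 0.1*k from the nearest
    # main-scale line; the best-aligned line marks the tenths digit.
    best = min(range(10), key=lambda k: abs(frac - 0.1 * k))
    return round(main + 0.1 * best, 1)

print(vernier_reading(12.34))  # reads 12.3: a tenth of a main division
```

The eye never measures the 0.04 mm residual directly; it only judges which pair of lines coincides, a task it performs with hyperacuity.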
Sometimes our visual system extracts fine detail from impossibly bad images. Of course, this has been an evolutionary advantage. What are the mechanisms behind this phenomenon?
Microsaccades keep the retinal image in constant motion, and there are several slightly different maps into neural space (the hi in the logo). Together they push up the apparent resolution. Have you ever noticed the pixelated, anonymized faces on TV? If you blur the square tiles by squinting, and the camera or the subject moves slightly, you will suddenly recognize the face. Sophisticated image-processing algorithms, many of them protected by patents, will do the trick, too.
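The blur-and-jitter trick can be reproduced numerically. Below is a minimal one-dimensional shift-and-add sketch (my toy numbers; real super-resolution algorithms must first estimate the shifts): several coarse frames taken at different sub-pixel offsets are interleaved back onto a fine grid, and the fused estimate beats any single frame.

```python
import numpy as np

# High-resolution "scene": a bright bar on a dark background.
FACTOR = 4
hi = np.zeros(32)
hi[10:18] = 1.0

def coarse_frame(scene, shift):
    """One low-res frame: shift the scene, then block-average
    FACTOR samples into each coarse pixel (the 'mosaic tile')."""
    return np.roll(scene, shift).reshape(-1, FACTOR).mean(axis=1)

# Four frames at different sub-pixel offsets -- the role played by
# microsaccades, or by slight camera/subject motion on TV.
shifts = [0, 1, 2, 3]
frames = [coarse_frame(hi, s) for s in shifts]

# Shift-and-add: upsample each frame, undo its known offset, average.
recon = np.mean(
    [np.roll(np.repeat(f, FACTOR), -s) for s, f in zip(shifts, frames)],
    axis=0)

single = np.repeat(frames[0], FACTOR)   # best guess from one frame alone
err_single = np.abs(single - hi).sum()
err_fused = np.abs(recon - hi).sum()
print(err_single, err_fused)  # the fused estimate is closer to the scene
```

The fused result is still blurred by the coarse pixels, but it is sampled on the fine grid, so edges are localized far better than one frame allows.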
There is an excellent book (available online for private use) by David J.C. MacKay, titled Information Theory, Inference, and Learning Algorithms. It avoids the narrow focus that is all too common in the field, and it offers many exercises.
46.3 Deconvolution in humans
A huge fraction of our brain is devoted to vision. One of the neglected features of our visual system is that the raw image falling on the retina is severely blurred: while most people can see with a resolution of about 1 arcminute (one sixtieth of a degree) under any daylight conditions, bright or dim, the image on our retina is blurred through a point spread function of width as large as 5 arcminutes (Wald and Griffin, 1947; Howarth and Bradley, 1986). It is amazing that we are able to resolve pixels that are twenty-five times smaller in area than the blob produced on our retina by any point source. Isaac Newton was aware of this conundrum. It's hard to make a lens that does not have chromatic aberration, and our cornea and lens, like a lens made of ordinary glass, refract blue light more strongly than red.

One of the main functions of early visual processing must be to deconvolve this chromatic aberration. Neuroscientists sometimes conjecture that the reason why retinal ganglion cells and cells in the lateral geniculate nucleus (the main brain area to which retinal ganglion cells project) have centre-surround receptive fields with colour opponency (long wavelength in the centre and medium wavelength in the surround, for example) is in order to perform 'feature extraction' or 'edge detection', but I think this view is mistaken. The reason we have centre-surround filters at the first stage of visual processing (in the fovea at least) is for the huge task of deconvolution of chromatic aberration.

I speculate that the McCollough effect, an extremely long-lasting association of colours with orientation (McCollough, 1965; MacKay and MacKay, 1974), is produced by the adaptation mechanism that tunes our chromatic-aberration-deconvolution circuits. Our deconvolution circuits need to be rapidly tuneable, because the point spread function of our eye changes with our pupil diameter, which can change within seconds; and indeed the McCollough effect can be induced within 30 seconds. At the same time, the effect is long-lasting when an eye is covered, because it's in our interests that our deconvolution circuits should stay well-tuned while we sleep, so that we can see sharply the instant we wake up.
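MacKay's deconvolution story can be sketched in a few lines. The example below uses one-dimensional Wiener deconvolution of a Gaussian point spread function (my choice of method and numbers, not anything from the book); tellingly, the equivalent spatial filter comes out centre-surround, a positive peak flanked by negative lobes.

```python
import numpy as np

n = 256
x = np.arange(n)

# Gaussian point spread function (the blur), normalized to unit area.
sigma = 3.0
psf = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
psf /= psf.sum()

# Wiener deconvolution filter: H* / (|H|^2 + noise-to-signal ratio).
H = np.fft.fft(np.fft.ifftshift(psf))
nsr = 1e-2
W = np.conj(H) / (np.abs(H) ** 2 + nsr)

# Blur a sharp test signal, then deconvolve it.
signal = np.zeros(n)
signal[100:110] = 1.0
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * W))
print(np.abs(restored - signal).sum() < np.abs(blurred - signal).sum())

# The equivalent spatial filter: a positive centre with negative
# flanks -- qualitatively the centre-surround shape of retinal
# ganglion cell receptive fields.
w_spatial = np.fft.fftshift(np.real(np.fft.ifft(W)))
print(w_spatial[n // 2] > 0,
      w_spatial[n // 2 + 2 : n // 2 + 10].min() < 0)
```

The intuition: the Wiener filter boosts the mid frequencies that the blur attenuated, and a filter whose gain peaks away from DC must have an oscillating, centre-surround impulse response.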
The so-called feature detectors, like the deconvolution circuitry, are pattern engines themselves. Rapid tuneability and long-lasting modification in the absence of matching input should then come as no surprise.
There is hyperacuity in hearing, too. When we listen to speech, we detect fine spectral detail along with fine temporal detail. Auditory processing takes place mostly in the time domain; the spectral domain is limited to the active filters of cochlear preprocessing. Transitions between vowels and consonants are full of detail. If the local structures are preserved, speech remains recognizable even when the large-scale spectra are entirely different. How do commercial systems deal with whispered words?
This spectrogram was computed in a rather crude way: the input signal was fed into an array of second-order bandpass filters, and the brightness encodes log(stored energy).
By the way: these are the syllables ann – all – ack, repeated three times.
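The crude recipe translates directly into code. In the sketch below the centre frequencies, the resonator radius, and the energy-store decay are my guesses, not the values used for the figure:

```python
import numpy as np

def crude_spectrogram(x, fs, n_bands=32, f_lo=100.0, f_hi=4000.0, r=0.995):
    """Feed x through a bank of 2nd-order resonators (one per band),
    leaky-integrate the squared output, and return log(stored energy)."""
    freqs = np.geomspace(f_lo, f_hi, n_bands)  # log-spaced centre frequencies
    decay = 0.999                              # energy-store leak per sample
    out = np.zeros((n_bands, len(x)))
    for b, f in enumerate(freqs):
        w = 2 * np.pi * f / fs
        a1, a2 = 2 * r * np.cos(w), -r * r     # resonator coefficients
        y1 = y2 = energy = 0.0
        for n, xn in enumerate(x):
            y = xn + a1 * y1 + a2 * y2         # 2nd-order bandpass (resonator)
            y2, y1 = y1, y
            energy = decay * energy + y * y    # stored energy
            out[b, n] = np.log(energy + 1e-12) # brightness
    return freqs, out

# A 440 Hz tone should light up the band nearest 440 Hz.
fs = 8000
t = np.arange(fs // 4) / fs
freqs, spec = crude_spectrogram(np.sin(2 * np.pi * 440 * t), fs)
brightest = freqs[np.argmax(spec[:, -1])]
print(round(brightest))  # the centre frequency nearest 440 Hz
```

Crude indeed: no windowing, no overlap, just resonators and leaky energy stores — which is arguably closer to what the cochlea does than an FFT is.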
And finally, there’s hyperacuity in higher-order cortical processing, too.
Let’s listen to the words of a master on language and semantic cognition.
… I wanted to express everything. I thought, for example, that if I needed a sunset then I should find the exact word for a sunset — or the exact, or rather, the most surprising metaphor. Now I have come to the conclusion — and this conclusion may sound sad — the conclusion is that I no longer believe in expression. I only believe in allusion. Because after all, what are words? Words are symbols for shared memories. If I use a word, then you should have experience of what the word stands for. If not, the word means nothing to you. And I think we can only allude. We can only try to make the reader imagine. I think that the reader, if he’s quick enough, can be satisfied with merely hinting at something.
The pervasiveness of hyperacuity, as well as its connection to learning processes and to the peculiar encoding used by the brain, was noted before:
Hyperacuity is found in all visual tasks (as in color vision, for example; see fig. …), and, indeed, in all other perceptual modalities (tellingly, a paper titled The ubiquity of hyperacuity (Altes, 1988) appears in a journal on acoustics, not vision). Because so many disparate kinds of physical systems exhibit hyperacuity, the principle that governs this pervasive phenomenon must be computational. This principle — channel coding — is very general (Snippe and Koenderink, 1992): it is at work throughout the brain, even in places far removed from the perceptual front end (Edelman, 1995).
(Shimon Edelman, Computing the Mind, 2008, p. 95.) The term hyperacuity fills three lines of the book's index.
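Channel coding itself fits in a dozen lines. In this toy sketch (my numbers throughout), overlapping Gaussian tuning curves spaced one unit apart encode a stimulus position, and a simple centroid decoder recovers it to a small fraction of the channel spacing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Channels: Gaussian tuning curves centred 1.0 apart.
centres = np.arange(0.0, 10.0, 1.0)
width = 1.2

def responses(stimulus, noise=0.02):
    """Noisy channel activations for a given stimulus position."""
    r = np.exp(-0.5 * ((stimulus - centres) / width) ** 2)
    return r + rng.normal(0.0, noise, size=centres.shape)

def decode(r):
    """Centroid (centre-of-mass) decoder across the channels."""
    r = np.clip(r, 0.0, None)
    return (r * centres).sum() / r.sum()

# Decode a stimulus that falls between channels.
true_pos = 4.37
errors = [abs(decode(responses(true_pos)) - true_pos) for _ in range(200)]
print(np.mean(errors))  # a small fraction of the 1.0 channel spacing
```

No single channel can tell 4.3 from 4.4; the ensemble can, because the stimulus is encoded in the ratios of many coarse, overlapping responses — the same computational move behind vernier acuity.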