The measurement of a continuous variable, e.g. a length, results in a decimal fraction. Some digits are read off from the scale’s labels, but the last fractional digit must be estimated. We are not very good at this. Our visual system can, however, detect coincidence much more precisely. This is just another example of hyperacuity.
The main scale divides the unit interval into ten equal parts; the vernier scale uses a division into nine parts, or a multiple thereof. Together, the two scales give a tenfold increase in resolution.
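As a concrete illustration, here is a toy vernier readout in code. The step sizes (1/10 and 1/9) follow the text; the function name and the coincidence search are my own sketch, not a prescribed procedure.

```python
# Toy vernier readout: a main scale with step 1/10 and a vernier scale
# with step 1/9. Names and structure are illustrative.
MAIN = 1 / 10   # main-scale division
VERN = 1 / 9    # vernier-scale division

def read_vernier(p):
    """Recover p from two coarse observations: the main-scale mark just
    below p, and the index of the vernier mark closest to a main mark."""
    coarse = int(p / MAIN) * MAIN              # read off the main scale
    def miss(k):                               # distance of vernier mark k
        d = (p + k * VERN) % MAIN              # (at p + k/9) to the nearest
        return min(d, MAIN - d)                # main-scale mark
    k = min(range(10), key=miss)               # the coinciding mark
    fine = (9 - k) * (VERN - MAIN)             # (9 - k)/90: the extra digit
    return coarse + fine

print(read_vernier(0.237))   # ≈ 0.2333, i.e. correct to within 1/90
```

Neither scale alone resolves better than one division; the index of the coinciding mark supplies the missing digit.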
A more general principle behind this ingenious apparatus applies to discretization maps in general (continuous functions into discrete, finite spaces). If the resolution of such a map is too low, we can use a second, slightly different map. Using both maps together increases the resolution. The vernier scale exploits the phenomenon of visual hyperacuity, and at the same time it’s the key to its explanation!
The Chinese Remainder Theorem is just an algebraic version of this construction. More importantly, if we want to calculate some function on a large space, we can reduce time and storage requirements, sometimes dramatically, if the function is regular enough.
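The vernier idea in algebraic form can be sketched like this: residues modulo 9 and 10, two coarse “readings”, jointly determine a number modulo 90. The brute-force search is only for illustration.

```python
def crt(r9, r10):
    """Reconstruct x mod 90 from the two coarse maps x mod 9, x mod 10.
    Since gcd(9, 10) = 1, the solution is unique in 0..89."""
    for x in range(90):       # brute force; fine for a toy modulus
        if x % 9 == r9 and x % 10 == r10:
            return x

x = 73
print(crt(x % 9, x % 10))     # → 73
```

Each residue map alone has too little resolution (9 or 10 values); their product distinguishes 90.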
Let’s return to our example x ∈ X. The 16 functions h_{1}…h_{16} map it into the product space. The values of the coordinate functions on the product are shown below.
For each coordinate function there is a small open neighborhood U_{i} of x on which it is constant. The product function is constant on the intersection of these neighborhoods. Note that the value of a single coordinate function already allows us to recognize the class of x. This indicates that our x is an interior point of the set of all ‚a‘. The image of f_{i}∘h_{i} falls into the diagonal. (The quality of fit may be measured either by the Hamming distance from the diagonal or by the Shannon entropy.)
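The neighborhood claim can be made concrete with a hypothetical choice of coordinate functions; nothing here is the text’s actual h_{i}. Suppose each h_i samples a fixed 4-tuple of coordinates of a binary vector. Then flipping a coordinate that no h_i samples leaves every coordinate value, and hence the product, unchanged:

```python
import random

random.seed(2)
N, BITS = 16, 64                 # 16 hypothetical coordinate functions
TUPLES = [random.sample(range(BITS), 4) for _ in range(N)]

def h(i, x):
    """i-th coordinate function: a fixed 4-tuple of the bits of x."""
    return tuple(x[j] for j in TUPLES[i])

x = tuple(random.getrandbits(1) for _ in range(BITS))
sampled = {j for t in TUPLES for j in t}
unseen = [j for j in range(BITS) if j not in sampled]

# Flipping a bit that no coordinate function samples moves x without
# changing any h_i(x): such moves stay inside the neighborhood on which
# the product function is constant.
if unseen:
    y = list(x); y[unseen[0]] ^= 1; y = tuple(y)
    assert all(h(i, x) == h(i, y) for i in range(N))

# Flipping a bit that h_0 does sample changes h_0's value.
z = list(x); z[TUPLES[0][0]] ^= 1; z = tuple(z)
assert h(0, x) != h(0, z)
```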
We’ll now store the known value of f(x) in the 16 storage cells addressed by the h_{i}(x). The other storage cells remain undefined. Let’s denote this special memory content by the symbol ⊥. All functions g must satisfy the condition g(⊥) = ⊥ (a topological interpretation). Storing sections of a sheaf is quite easy!
If we present our image x again, it will be classified by μ as ‚a‘, with a majority of 100%. In the same way we may now store additional reference points. Let’s assume that there will be no conflicts and that the small initial domains of definition do not overlap. Each reference point will give at most 16 entries in our storage cells.
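The storing and voting steps can be sketched minimally in code. The coordinate maps used here (random 4-tuples of bits) and the input encoding are illustrative assumptions, not the text’s actual h_{i}; ⊥ is modeled as None, and unwritten cells read as ⊥.

```python
import random

random.seed(0)
N, BITS = 16, 64                 # 16 coordinate maps on 64-bit inputs
# hypothetical maps h_i: each samples a fixed 4-tuple of coordinates
TUPLES = [random.sample(range(BITS), 4) for _ in range(N)]

def h(i, x):
    """Address of the i-th storage cell for input x."""
    return (i,) + tuple(x[j] for j in TUPLES[i])

cells = {}                       # unwritten cells read as ⊥ (None)

def store(x, label):
    """Write the known value f(x) into the 16 cells addressed by h_i(x)."""
    for i in range(N):
        cells[h(i, x)] = label

def classify(x):
    """Majority vote over the 16 cells; None counts as an abstention."""
    hits = [v for v in (cells.get(h(i, x)) for i in range(N)) if v is not None]
    if not hits:
        return None, 0.0         # far from every reference point: ⊥
    best = max(set(hits), key=hits.count)
    return best, hits.count(best) / N

a = tuple(random.getrandbits(1) for _ in range(BITS))
store(a, 'a')
print(classify(a))               # → ('a', 1.0): a 100% majority
```

Each reference point writes at most 16 cells, and presenting it again reads back exactly those cells.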
We may now move x from its initial position x_{1} to x_{2}. The majority stays at 100% in the immediate vicinity of x_{1}. If we move farther along the path, it drops, step by step, down to zero. When we approach the reference point x_{2}, membership rises again to 100%.
Points of X in the vicinity of a reference point are correctly classified. Points far away are not classified at all. If we want to classify them, we must learn about the structure of the space X and the paths that connect them to our examples. In the case of letter-shapes there should be few paths connecting different letters. Different specimens of ‚a‘ should be strongly interconnected.
A point of X with a majority vote of 62% ‚a‘ and 38% ‚⊥‘ (abstention) is, of course, an ‚a‘. If we feed the result back into the storage cells addressed by the h_{i}, we boost the majority at this point up to 100%. At the same time, the domain of definition grows. Some points of X that were previously unclassified will now get a small vote for ‚a‘. Eventually, the membership functions will form a partition of unity.
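The feedback step can be sketched under illustrative assumptions (hypothetical coordinate maps sampling random 4-tuples of bits, with None standing in for ⊥): a point with a clear majority among its non-⊥ votes writes that label back into all 16 of its cells.

```python
import random

random.seed(1)
N, BITS = 16, 64
# hypothetical coordinate maps: each samples a fixed 4-tuple of bits
TUPLES = [random.sample(range(BITS), 4) for _ in range(N)]

def h(i, x):
    return (i,) + tuple(x[j] for j in TUPLES[i])

cells = {}                                 # unwritten cells read as ⊥ (None)

def store(x, label):
    for i in range(N):
        cells[h(i, x)] = label

def vote(x, label):
    """Fraction of the 16 cells of x that vote for `label`."""
    return sum(cells.get(h(i, x)) == label for i in range(N)) / N

def feed_back(x):
    """Write a clear majority among the non-⊥ votes of x back into all
    16 cells addressed by x, boosting its majority toward 100%."""
    hits = [v for v in (cells.get(h(i, x)) for i in range(N)) if v is not None]
    if not hits:
        return
    best = max(set(hits), key=hits.count)
    if hits.count(best) > len(hits) / 2:
        store(x, best)

a = tuple(random.getrandbits(1) for _ in range(BITS))
store(a, 'a')                              # one reference point
y = list(a); y[0] ^= 1; y[1] ^= 1          # a nearby point: two bits flipped
y = tuple(y)
before = vote(y, 'a')                      # partial vote, some abstentions
feed_back(y)                               # domain of definition grows
assert vote(y, 'a') >= before
```

After feedback, cells of y that previously read ⊥ hold ‚a‘, so points near y gain small votes of their own.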