Motivated by the remarkable fluidity of memory, the way in which items are pulled spontaneously and effortlessly from our memory by vague similarities to what is currently occupying our attention, Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory.
The book, which is self-contained, begins with background material from mathematics, computers, and neurophysiology; this is followed by a step-by-step development of the memory model. The concluding chapter describes an autonomous system that builds from experience an internal model of the world and bases its operation on that internal model. Close attention is paid to the engineering of the memory, including comparisons to ordinary computer memories.
Sparse Distributed Memory provides an overall perspective on neural systems. The model it describes can aid in understanding human memory and learning, and a system based on it sheds light on outstanding problems in philosophy and artificial intelligence. Applications of the memory are expected to be found in the creation of adaptive systems for signal processing, speech, vision, motor control, and (in general) robots. Perhaps the most exciting aspect of the memory, in its implications for research in neural networks, is that its realization with neuronlike components resembles the cortex of the cerebellum.
Pentti Kanerva is a scientist at the Research Institute for Advanced Computer Science at the NASA Ames Research Center and a visiting scholar at the Stanford Center for the Study of Language and Information. A Bradford Book.
A book that totally broadens one's horizons on the view of intelligence. It describes a very simple and easy-to-follow model, unlike most other machine learning models, which are extremely complex. The memory model the author describes in the book is extremely simple, which I believe is closer to the truth of how our memory works.
In short, this book is like a pearl hidden in the sea.
There are several differences between human memory and computer memory. Human memory is associative: right now I'm typing on a computer and thinking of computers, then Microsoft, Bill Gates, Capitalism, Noam Chomsky, Formal Languages, and so on. Other people might have different trains of thought. Human memory doesn't get "full", doesn't forget things in "chunks", learns a new concept by making connections to pre-existing concepts, and so on. Computer memory is flat: it doesn't care whether you store random junk or Shakespeare's complete works. Pentti Kanerva's Sparse Distributed Memory tries to model and explain this intrinsically "semantic" nature of our memory.
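To make the contrast concrete, here is a minimal sketch (my own, not from the book) of flat, exact-address lookup versus associative lookup, with Hamming distance on bit-string keys standing in for similarity:

```python
# Flat memory: retrieval needs the exact key.
flat = {0b10110: "computer", 0b01100: "Microsoft"}
print(flat.get(0b10111))  # None: one flipped bit and the item is lost

# Associative memory: retrieve the stored item whose key is *closest*
# to the cue, so a noisy or partial cue still finds its neighbor.
def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def recall(cue: int, memory: dict):
    return min(memory.items(), key=lambda kv: hamming(cue, kv[0]))[1]

print(recall(0b10111, flat))  # "computer": the nearest key wins
```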
The space is {0,1}^n with n on the order of 10,000. A concept might be a set of strings in that space separated by a Hamming distance of less than some threshold, and the distance between two memory items represents how related they are. Several geometric interpretations in this space (circles, triangles, orthogonality) lead to some very interesting insights. The author bridges this very simple model to a basic neuron model by working through problems like the Best Match Problem. Storage and retrieval of data, and their robustness, are also discussed in this model, where addresses and data live in the same space.
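To give a feel for the mechanics, here is a toy sparse distributed memory along these lines: a cue activates every randomly placed "hard location" within a Hamming radius, writing adds or subtracts one from per-bit counters at the active locations, and reading sums the counters and thresholds at zero. The parameters below (dimension, number of hard locations, radius) are my own illustrative choices, not the book's:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256          # dimension of {0,1}^N (kept small here; the real model is far larger)
M = 2000         # number of randomly placed hard locations
RADIUS = 112     # activate locations within this Hamming distance of the cue

hard_addresses = rng.integers(0, 2, size=(M, N), dtype=np.int8)
counters = np.zeros((M, N), dtype=np.int32)  # one counter vector per location

def active(addr):
    """Boolean mask of hard locations within RADIUS (Hamming) of addr."""
    dists = np.count_nonzero(hard_addresses != addr, axis=1)
    return dists <= RADIUS

def write(addr, data):
    """Store data at every active location: +1 for a 1-bit, -1 for a 0-bit."""
    counters[active(addr)] += np.where(data == 1, 1, -1).astype(np.int32)

def read(addr):
    """Sum counters over active locations and threshold at zero."""
    return (counters[active(addr)].sum(axis=0) > 0).astype(np.int8)

# Autoassociative use: store a pattern at its own address, then recover it
# from a noisy cue with some bits flipped.
pattern = rng.integers(0, 2, size=N, dtype=np.int8)
write(pattern, pattern)
noisy = pattern.copy()
noisy[rng.choice(N, 20, replace=False)] ^= 1     # flip 20 of 256 bits
print(np.count_nonzero(read(noisy) != pattern))  # typically 0: pattern recovered
```

Because addresses and data share the same space, storing a pattern at its own address makes the memory autoassociative, which is exactly why a degraded cue can pull back the original item.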
Math in this book is quite easy to follow and the text is to the point. Unlike many in the field of AI, the author does not yield to the temptation of ranting endlessly; this book is 150 pages. This review (written in a few minutes) doesn't do justice to the book. If you've ever wondered how Turing machines, with their endless tape and transition table, are equivalent to, say, the lambda calculus, or more simply how code and data are delineated in various computation models, you should take a look at this book.