This is a book whose time has come (again). The first edition (published by McGraw-Hill in 1964) was written in 1962, and it celebrated a number of approaches to developing an automata theory that could provide insights into the processing of information in brainlike machines, making it accessible to readers with no more than a college freshman's knowledge of mathematics. The book introduced many readers to aspects of cybernetics, the study of computation and control in animal and machine. But by the mid-1960s, many workers abandoned the integrated study of brains and machines to pursue artificial intelligence (AI) as an end in itself: the programming of computers to exhibit some aspects of human intelligence, but with the emphasis on achieving some benchmark of performance rather than on capturing the mechanisms by which humans were themselves intelligent. Some workers tried to use concepts from AI to model human cognition using computer programs, but were so dominated by the metaphor "the mind is a computer" that many argued that the mind must share with the computers of the 1960s the property of being serial, of executing a series of operations one at a time. As the 1960s became the 1970s, this trend continued. Meanwhile, experimental neuroscience saw an explosion of new data on the anatomy and physiology of neural circuitry, but little of this research placed these circuits in the context of overall behavior, and little was informed by theoretical concepts beyond feedback mechanisms and feature detectors.
Michael A. Arbib is the Fletcher Jones Professor of Computer Science, as well as a Professor of Biological Sciences, Biomedical Engineering, Electrical Engineering, Neuroscience and Psychology at the University of Southern California.
A lost gem among the wonderful flood of books merging biology, mathematics, and computers that followed Norbert Wiener's "Cybernetics" in the 1960s. I only discovered it because the author is a professor at my university.
I suspect that the very reason I gave this book 4 stars will be the reason that many will give it 2 or avoid it altogether: it's heavy on mathematical notation. Assuming you've got some set theory and a hearty constitution for exploring forests of equations, this is a stellar little handbook covering automata, neural nets, perceptrons, communication theory, and error-correction: the heart of the early attempts at mathematical models of the brain. Computational neuroscience has moved into many other areas today, but this is still a great place to start if you're interested in the subject matter.
The clincher for me (and the fourth star above) was Arbib's clear and concise presentation of recursive logics and Gödel's Incompleteness Theorem. Perhaps other books have filled this role since, but I've never found such an accessible presentation of it. I was literally trembling page-by-page as I entered the world of recursively enumerable sets and arithmetical logics somewhere over the airspace of Dallas, TX, emerging in a daze of sheer amazement at the airport.
I credit this book with making me comfortable with a great deal of set theory and functional notation that has recently come in handy in tackling modern algebra and the works of René Thom.
My biggest criticism is that this book does not convey the author's excitement at all. It reads exactly like a textbook. Profound and interesting conclusions are followed by sentences like, "The reader may be interested in learning more among the literature [see ref. 1 and 2]." Also, while I appreciated the mathematical rigor, I believe well-placed analogies and colloquialisms would have served this book well. I suppose the style was a product of both the era and the author's country of origin (Australia). In any case, this book still christened one of the more memorable plane rides I've ever had.
Will not pretend to have followed all of the proofs, but the prose and diagrams in this book are excellent. Hyper-relevant to 2025, despite being written over 50 years ago. A beautiful kernel connecting three branches of computation.
On the proofs and notation: I feel (perhaps biased by my time) that some systems and problems are helped and some are hurt by compact mathematical notation. Set theory and neural networks are elegantly compressible; Turing machines, not so much.