Advance praise for Computers Ltd:

'An enlightening and entertaining explanation, written by a profound computer scientist and master expositor. A must read for inquisitive minds.' Michael Rabin, Professor of Computer Science, Harvard University.

Computers are incredible. They are one of the most important inventions of the 20th century, dramatically and irrevocably changing the way we live. That is the good news. The bad news is that there are still major limitations to computers: serious problems that not even the biggest, most powerful machines can solve. The consequences of such limitations can be serious, yet too often these limits get overlooked in the quest for bigger, better, and more powerful computers.

In Computers Ltd, David Harel, best-selling author of Algorithmics, explains and illustrates one of the most fundamental, yet under-exposed, facets of computers - their inherent limitations. Looking at bad news that is proven, lasting, and robust, discussing limitations that no amount of hardware, software, talent, or resources can overcome, the book presents a disturbing and provocative view of computing at the start of the 21st century. Along the way he shows just how far from perfect computers are, while shattering some of the many claims made for these machines. Though we may strive for bigger and better things in computing, we need to be realistic: computers are not omnipotent - far from it. Moreover, the problem is real and is here to stay.

From reviews of Algorithmics:
'This book is a veritable tour de force. Harel writes with uncommon verve, clarity, and imagination... This is science writing at its best.' Times Higher Education Supplement
'This is the book I would most like to have written.' Professor Darrell Ince, Open University
A FRANK OUTLINE OF CERTAIN LIMITATIONS OF COMPUTERS
Israeli computer scientist David Harel wrote in the Preamble to this 2000 book, “computers are… without doubt the most important invention of the 20th century, having dramatically and irrevocably changed the way we live, and mostly for the better. But that is the good news, and good news is what most computer books are about. This book concentrates on the… negative side of things…
“The goal of the book is to explain and illustrate one of the most important and fundamental facets of the world of computing - its inherent limitations… It concentrates on bad news that is proven, lasting and robust, regardless of our hardware, software, talents, or patience… starting in the 1990s… many researchers have been working hard to better understand … its inherent weaknesses. The motivation for this quest is four-fold: *To satisfy intellectual curiosity… *To discourage futile efforts… *To encourage development of new paradigms… *To make possible the otherwise impossible… we concentrate on precisely defined computational problems that come complete with clear-cut objectives about whether or not they can be solved satisfactorily.” (Pg. vii-x)
Later, he adds, “This book is … about showing that very often things CANNOT be improved in these ways… That certain tasks are simply impossible… the bad news has nothing to do with the need for complicated and lengthy outputs. The desire to generate even a simple ‘Yes’ or ‘No’ is enough to yield real nightmares.” (Pg. 16)
He explains, “Turing machines are capable of solving any effectively solvable algorithmic problem!... any algorithmic problem for which we can find an algorithm that can be programmed in some programming language… running on some computer… even one that has not yet been built (but, in principle, can be built) and even one that requires unbounded amounts of time and memory space for ever-larger inputs - is also solvable by a Turing machine!” (Pg. 40)
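To make the quoted claim concrete, here is a minimal sketch of a Turing machine in Python: a finite transition table plus an unbounded tape. The toy "flip every bit" machine is my own illustration, not an example from the book.

```python
# A minimal Turing machine simulator: finite control plus an unbounded tape.
# The toy transition table below (flip every bit, then halt at the blank)
# is just an illustration; any algorithm can in principle be encoded this way.

def run_turing_machine(transitions, tape, start="q0", blank="_"):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))   # sparse tape, extendable in both directions
    state, head = start, 0
    while state != "halt":
        symbol = tape.get(head, blank)
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

flip_bits = {
    ("q0", "0"): ("q0", "1", "R"),    # rewrite 0 as 1, move right
    ("q0", "1"): ("q0", "0", "R"),    # rewrite 1 as 0, move right
    ("q0", "_"): ("halt", "_", "R"),  # blank marks the end of the input
}

print(run_turing_machine(flip_bits, "0110"))  # -> "1001_"
```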
He summarizes, “the world of algorithmic/computational problems is divided into the computable, or decidable vs. the noncomputable, or undecidable, and that among themselves the problems in the latter class exhibit various degrees of hardness… So our hopes for computer omnipotence are shattered. We now know that not all algorithmic problems are solvable by computers, even with unlimited access to resources like time and memory space.” (Pg. 58)
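The classic reason those hopes are shattered is the halting problem, and the diagonal argument behind it can even be written down as (hypothetical) code. In the sketch below, `halts` is the imaginary perfect decider, not a real function; the point is precisely that no correct implementation of it can exist.

```python
# Sketch of Turing's diagonal argument. Suppose, hypothetically, someone
# handed us a function halts(program, argument) that always answers correctly.

def halts(program, argument):
    ...  # hypothetical perfect halting decider; it cannot actually exist

def paradox(program):
    # Do the opposite of whatever halts() predicts about running a
    # program on its own source.
    if halts(program, program):
        while True:          # halts() said "halts", so loop forever
            pass
    return "done"            # halts() said "loops", so halt immediately

# What should halts(paradox, paradox) answer? Either answer is wrong:
# if it says "halts", paradox loops; if it says "loops", paradox halts.
# The contradiction shows no such halts() can be written.
```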
Later, he adds, “Is it true that most problems arising in common applications CAN be solved efficiently? Unfortunately, the answer is no. Not at all. It’s just that we often tend to equate ‘everyday’ and ‘common’ with situations that we know how to tackle. In actuality, [for] a growing number of problems arising in real applications … [we] can’t even resort to approximation algorithms.” (Pg. 117)
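For contrast with the problems that cannot even be approximated, here is what an approximation algorithm looks like when one does exist: the standard greedy 2-approximation for minimum vertex cover, an NP-hard problem. This is my example, not the book's.

```python
# Minimum vertex cover is NP-hard, but repeatedly taking both endpoints of
# an uncovered edge is guaranteed to produce a cover at most twice the
# optimal size: a provable guarantee, even though finding the optimum
# efficiently is believed impossible.

def vertex_cover_2_approx(edges):
    cover, remaining = set(), list(edges)
    while remaining:
        u, v = remaining.pop()
        if u not in cover and v not in cover:
            cover |= {u, v}            # take both endpoints of this edge
    return cover

# Optimal cover of this path graph is {2, 3}; the approximation may
# return all four vertices, staying within the factor-of-2 bound.
print(vertex_cover_2_approx([(1, 2), (2, 3), (3, 4)]))
```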
He observes, “Quantum computing is a brand new approach to computation, based on quantum mechanics… So far, a few surprisingly efficient quantum algorithms have been discovered for problems not known to be tractable in the ‘classical’ sense. However, to work they require the construction of a special QUANTUM COMPUTER, something that as of now is still very much nonexistent…” (Pg. 121)
He outlines, “A quantum computer… is to be based on some kind of finite-state element, analogous to the classical 2-state bit. The quantum … ‘qubit’ can be envisioned physically in a number of ways… What we DON’T have in a quantum system is the simple deterministic notion of a qubit being in one basis state or another. Rather, its notion of being or not being is indeterminate: all we can say about the status of a qubit is that it is in both of the states simultaneously, each with a certain ‘probability’… [But] the ‘probabilities’ can be negative, or even imaginary… and the resulting combination state is called a ‘superposition.’ Once we ‘take a look’ at a qubit, i.e., make a measurement, it suddenly decides where to be; we see it in one basis state or the other, the probabilities disappear, and the superposition is forgotten. This kind of ‘forced discreteness’ is what leads to the adjective ‘quantum.’” (Pg. 144-145)
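The arithmetic behind this passage fits in a few lines. A qubit is a pair of complex amplitudes (the "probabilities" that can be negative or imaginary), and measurement probabilities are their squared magnitudes, the Born rule. A minimal numpy sketch of my own, not from the book:

```python
import numpy as np

# A qubit is a pair of complex amplitudes (alpha, beta) with
# |alpha|**2 + |beta|**2 == 1. Measurement probabilities are the squared
# magnitudes of these possibly negative or imaginary amplitudes.

rng = np.random.default_rng()

qubit = np.array([1, 1j]) / np.sqrt(2)      # a superposition of |0> and |1>

def measure(state):
    probs = np.abs(state) ** 2              # Born rule: here [0.5, 0.5]
    outcome = rng.choice([0, 1], p=probs)
    collapsed = np.zeros(2, dtype=complex)
    collapsed[outcome] = 1                  # the superposition is "forgotten"
    return outcome, collapsed

print(measure(qubit))   # 0 or 1, each with probability 1/2, plus the new state
```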
He continues, “Entangled qubits, a term that comes with a precise mathematical rendition, represent an intrinsically non-separable ‘mish-mash’ of the original qubits. They have the weird property of instant communication: observing one and thus fixing its state causes the other to lock in the dual state simultaneously, no matter how far away they are from each other. Entanglement turns out to be a fundamental and indispensable notion in quantum computation, but unfortunately further discussion of its technicalities and the way it is exploited in the computations themselves is beyond the scope of this book.” (Pg. 146)
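The inseparability can be shown numerically. The Bell state below (my illustration, assuming the same numpy setup as the previous sketch) assigns four amplitudes to the joint outcomes 00, 01, 10, 11, and it cannot be factored into two independent one-qubit states; sampling it only ever produces the correlated outcomes Harel describes.

```python
import numpy as np

# Two entangled qubits in the Bell state (|00> + |11>)/sqrt(2). The joint
# state cannot be written as a product of two one-qubit states: that
# non-separability is the "mish-mash" in the quoted passage.

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2                   # [0.5, 0, 0, 0.5]

rng = np.random.default_rng()
samples = rng.choice(["00", "01", "10", "11"], p=probs, size=10)
print(samples)   # only ever "00" or "11": fixing one qubit fixes the other
```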
He notes, “At the time of this writing (…early 2000), the largest quantum ‘computer’ that has actually been built consists of a mere seven qubits… Why can’t we scale up?... There are severe technical problems around the actual building of a quantum computer. First, experimental physicists have not managed to put even a small number of qubits (say, 20) together and control them in some reasonable way. The difficulties seem beyond present-day laboratory techniques. A particularly troubling issue is DECOHERENCE: even if you could gather a good number of qubits and cause them to behave nicely among themselves, things that reside close to a quantum system have the pushy habit of affecting it. The quantum behavior of anything surrounding a quantum computer… can mess up the delicate setup of constructive and destructive interference within the quantum computation… The computer thus has to be relentlessly isolated from its environment. But it also has to read inputs and produce an output, and its computational process might have to be controlled by some external elements… Shor’s factoring algorithm constitutes a major advance by any measure. However, at the moment it must be relegated to the status of shelfware, and it is probably destined to remain that way for quite some time. Intractability hasn’t been beaten yet.” (Pg. 152-153)
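It is worth seeing why Shor's algorithm matters: its quantum core finds the period r of a^x mod N, and once r is known, ordinary classical arithmetic finishes the factoring. In this sketch of mine (the book gives no code), a brute-force loop stands in for the quantum period-finding step.

```python
from math import gcd

# The quantum heart of Shor's algorithm finds the period r of a^x mod N.
# Here a brute-force loop stands in for that quantum step; the rest is
# ordinary classical arithmetic.

def find_period(a, n):                 # stand-in for the quantum subroutine
    x, value = 1, a % n
    while value != 1:
        x, value = x + 1, (value * a) % n
    return x

def shor_classical_part(n, a=2):
    r = find_period(a, n)
    if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
        return gcd(pow(a, r // 2) - 1, n), gcd(pow(a, r // 2) + 1, n)
    return None                        # unlucky choice of a; retry with another

print(shor_classical_part(15))         # -> (3, 5): the period of 2 mod 15 is 4
```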
He explains, “How… do good chess programs operate?... one of their methods is to use heuristics, rules of thumb. A typical heuristic search involves intuitive rules… instructing it to ignore certain portions of the sea of possibilities… using heuristics is like tossing coins… it is tempting to view coin tossing as a ‘blind’ heuristic, a sort of intuitionless rule of thumb. But there is a major difference. With probabilistic algorithms, analysis replaces intuition. By considering carefully defined sets of ignorable possibilities, and using randomization to decide which to actually ignore, we are able to analyze the probability of success rigorously, making precise statements about the algorithm’s performance. This is often not true for algorithms that use heuristics.” (Pg. 202-203)
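A standard instance of the kind of probabilistic algorithm Harel means (my example, not the book's, though fittingly co-invented by Michael Rabin, quoted on the cover) is the Miller-Rabin primality test. Its error probability is rigorously bounded, which is exactly what separates it from a rule of thumb.

```python
import random

# Miller-Rabin primality test. Each random witness catches a composite with
# probability at least 3/4, so after k rounds the chance of a wrong
# "probably prime" answer is at most (1/4)**k: a rigorous bound, not a
# heuristic guess.

def is_probably_prime(n, rounds=20):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1           # write n - 1 as 2**s * d with d odd
    for _ in range(rounds):
        a = random.randrange(2, n - 1) if n > 4 else 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a is a witness: n is certainly composite
    return True                        # no witness found in any round

print(is_probably_prime(2**61 - 1))    # a Mersenne prime -> True
```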
This book will be of great interest to those studying computer science, quantum computing, and related topics.
Computers Ltd explores the real mathematical limits of what current computational techniques can accomplish, in a way that seems fairly friendly to the layman. It was a nice recap of the points touched upon in my CS degree, and although I would have appreciated more depth in some areas, I think the book achieved its goals well.
I would recommend reading this if you want a clear refresher on some old topics, as I did, or an introduction to some CS fundamentals. Honestly, I think material like this would be worth covering in high schools as part of improving the computer literacy of coming generations, since recent advances are moving some of what computers can do out of the realm of the obvious and into the realm of the mystical.
I think people who have never programmed will be quite lost in the middle of it, where the author discusses Big O notation and time complexity, but it was still really wonderful to revisit CS concepts and see them nicely tied together. The later chapters on cryptography and AI were less technical (and more fun, haha). Definitely revisiting this once I've reviewed more of the core concepts.
This book is perfect for smart folks seeking an intro to computational complexity theory, and for people with a background in it from years past who are looking to refresh their memory. For those who've already studied the subject, the introductory material and a few parts here and there will drag on. In particular, the first two chapters get a bit repetitive at times - we get it, it doesn't matter how fast you make your computer if the problem is inherently intractable. We got it the first time you mentioned it! On the whole, though, the pluses far outweigh the minuses, and even people who've been through an undergrad degree will find things they might not have covered. I don't recall covering Rice's theorem, computational equivalence, or zero-knowledge protocols in school, for instance. It's a slim book too, weighing in at just over two hundred pages, and the author manages to cram a surprising amount into them without glossing over too much or going into too much detail, which makes it a pretty quick read. All in all, if you're interested in the field, this book is definitely worth an investment of your time.
The first robot stands dizzied by the cars and trucks passing by, waiting for the situation to stabilize so that it can use its deep and contemplative deduction abilities to devise a plan for crossing.
The second, a one-legged pogo-stick robot, is in the midst of traffic, frantically jumping up and down, to and fro, again and again barely avoiding getting hit, but making no progress whatsoever towards the other side.
In the third approach, many gallant and loyal robots wait on the side, to be sent out to try again, one after the other, just like infantry charging out of their trenches in WWI.
The fourth robot sits at the side of the road, nods slowly and says, "Yes, I know; and that reminds me of another story…"
A good introduction to the achievements and limits of the Theory of Computation. It tries to be accessible to lay people, and seems to reach that objective without sacrificing much in terms of precision.