Most aspects of our private and social lives -- our safety, the integrity of the financial system, the functioning of utilities and other services, and national security -- now depend on computing. But how can we know that this computing is trustworthy? In Mechanizing Proof, Donald MacKenzie addresses this key issue by investigating the interrelations of computing, risk, and mathematical proof over the last half century from the perspectives of history and sociology. His discussion draws on the technical literature of computer science and artificial intelligence and on extensive interviews with participants. MacKenzie argues that our culture now contains two ideals of proof: proof as traditionally conducted by human mathematicians, and formal, mechanized proof. He describes the systems constructed by those committed to the latter ideal and the many questions those systems raise about the nature of proof. He looks at the primary social influence on the development of automated proof -- the need to predict the behavior of the computer systems upon which human life and security depend -- and explores the involvement of powerful organizations such as the National Security Agency. He concludes that in mechanizing proof, and in pursuing dependable computer systems, we do not obviate the need for trust in our collective human judgment.
Donald Angus MacKenzie FBA FRSE FAcSS (b. 1950) is a Professor of Sociology at the University of Edinburgh, Scotland.
His work constitutes a crucial contribution to the field of science and technology studies. He has also developed research in the field of social studies of finance. He has undertaken widely cited work on the history of statistics, eugenics, nuclear weapons, computing and finance, among other things.
MacKenzie's job title says he is a sociologist, and in the preface he namedrops fashionable continentals and gives a quite convincing spiel about why sociologists should be interested in mechanized logic. Then the rest of the book never mentions any sociology again; it is 100% history, and quite nerdy history at that. MacKenzie understands the things he writes about, and even includes proof sketches for some key lemmas of the four-color theorem. I also like that he is always very reasonable. The preface starts out with a mild-mannered plea not to go all Science Wars on him, and in the body of the text, which touches on a number of vicious debates and disagreements, he always seems fair to both sides.
The book covers quite a long time period, roughly 1950-2000. I began to study computer science soon after the point where MacKenzie leaves off, so I personally found it interesting to see that early history eventually linking up with what I think of as modern times. Quite a few of the topics he describes were already familiar to me from textbooks, but it's a different perspective to read a narrative rather than the "timeless" theorem-lemma-theorem view of a math textbook or a perfunctory string of citations in a related-work section. The book doesn't have any overarching theme or point; each chapter is a stand-alone description of some particular episode or topic in the history of mechanized proof, so it's not possible to summarize it, but I do feel generally better informed about the history of the field after reading it.
I was particularly interested to learn that the military (mostly the NSA) funded formal verification research in the 1970s and '80s in the hope of creating hacker-proof systems (cf. the Orange Book). A lot of what are now basic definitions and algorithms originated with military funding, and the vision was strikingly similar to current work. Their plans from 1982 sound exactly like what people in the field describe today, but it wasn't until the 2010s that systems like seL4 started to realize it.
And yet, maybe we should breathe a sigh of relief that the DoD failed, because their approach seems like a twisted nightmare version of what formal verification is today. Everything was managed in a top-down, hierarchical manner, with timelines set years in advance. Currently all the major theorem provers (Coq, Isabelle, Z3, ...) are open source and developed by international teams, but back then the source code was classified and export-controlled as a kind of munition. The people working on verifying the operating systems and so on could only read the source code on a need-to-know basis, and had to work in classified secure labs, which feels perverse if you are used to thinking of mathematical proof as a form of communication and a meeting of minds. And while the field today is dominated by a bunch of very eccentric characters, mainly employed as university faculty, back then the proofs and tools were developed by Raytheon-style military contractors. If the DoD really had managed to produce their secure OS kernel and the rest, maybe we would now think of formal verification as soul-destroying bureaucratic busywork where you labor 9-to-5 to tick all the boxes on a government RFQ.
I appreciate MacKenzie's scholarship, but a lot of this book felt like a meandering slew of interview summaries and two-page biographies that were not strongly tied to an overarching narrative. Exceptions include the four-color theorem chapter and the final chapter.
Also, I wish this book had been written much closer to the present day. The nature of proof assistants has changed drastically since 2004; some "serious mathematicians" like Terry Tao now use assistants such as Lean to check their work, and I would enjoy learning about this development.
Finally, I think it's worth noting that this book focuses more on history and sociology than it does on technical or philosophical matters. This fact is made clear in the introduction, but I didn't pick that up from the cover or the back summary.