This is a deeply personal book about alternatives to AI as a way of engaging with computers. It draws on the author's research in end-user programming, as well as a wide range of scholarship both related to and remote from AI. I really enjoyed the analysis of AI and the discussion of alternative user interfaces and forms of programming. That said, some chapters are less developed than others (e.g. the one on WEIRD was rather ordinary), and the logic between chapters - and of the author's overall argument - was not always as clear as it could have been. Still, a very satisfying read.
Conclusion: The book Moral Codes presents what the author believes it means to design alternatives to AI. It raises many interesting questions and ideas to consider and discuss, but it lacks a clear, well-developed argument for why and how these alternatives to AI matter.
A few quick items: At one point, the author says his colleagues think him both stupid and wise for writing this book (since the world of AI is changing so rapidly). Unfortunately, I would argue that he often comes across as naïve and unprepared, as much of the content already feels outdated.
Overall, the author tends to provide ideas to support his claims rather than concrete examples. As any professor would know, an idea by itself is not enough to prove a claim true.
The title Moral Codes is eventually defined, but it takes a long time to get there. Much of the book feels more like a monologue of the author's personal thoughts about AI than a well-organized argument.
The author makes a number of bold claims and uses odd or imprecise word choices. For example:

1. “There is no logic or reasoning in an LLM” (p. 69). None at all? Really?

2. In Chapter 5, he says gamification is when we try to make real life easier to “deal with.” Is “deal with” really the best phrase? Why not say “make it more fun”?

3. Chapter 6 states, “AI algorithms can be effective in other areas of life, but only in parts of society that are predictably structured.” But “effective” is a poor word choice here. Maybe he means accurate? Something can be effective without being accurate. I’m surprised, given his technical background, that he doesn’t seem to understand this distinction.
Many chapters—while interesting—don’t clearly relate to the central point of the book: that the world needs less AI and better programming languages.
The author's treatment of WEIRD also feels out of place and unprofessional. In Chapter 13 especially, he seems to fall into the genetic fallacy: assuming that because something was created by certain people or in a particular context, it must therefore be flawed or unusable.
Two stars, since the book did bring up interesting discussion points.
I was excited about the main thesis: following the long-running debate between direct manipulation and AI agents, this book argues for the minority position of direct manipulation, suggesting that design should focus on helping users master programming languages rather than on having systems do things for them. Unfortunately, the author does not have enough of a philosophical background to flesh out this thesis, or to spell out why these are two ways of approaching the same terrain. The book references a lot of philosophy but does not keep its definitions straight.

The biggest disappointment is that the definition of "programming language" remains the conventional one. The author claims to be taking the definition beyond its bounds by seeing a spreadsheet as a programming language, but he does not entertain how future systems might go beyond even this. Ultimately, he literally means that users should learn to code what they want from a system. This is open to obvious criticisms that the book ignores, most centrally, within HCI, Lucy Suchman's "Plans and Situated Actions": most of what we're up to is not planned, and we often don't know our own intentions. Any "programming language" that could rival agentic AI would need to be coupled to our sensitivity to the world; it would need to be fully embodied. Blackwell makes this point, citing Hayles, to criticize AI, but he does not then trace the implications for his own view.

A lot of the book feels like flippant dismissal of positions that have not been fully engaged with - GenAI is portrayed as a pointless machine for lies and plagiarism.