
Mentoring the Machines: Orientation - Part One: Surviving the Deep Impact of the Artificially Intelligent Tomorrow

The Sane and Sensible, Big Picture “Commons” Sense View of AI

What is it about Artificial Intelligence that drives tech giants like Elon Musk, Marc Andreessen, Mark Zuckerberg, Bill Gates, and Sam Altman? Why are they racing to develop and own these thinking machines while unsure of the harm they could cause us? Can we trust nation-states and NGOs, with their totalitarian strategies, when they don’t truly understand the problems we face?

These self-appointed masters of artificial intelligence are gambling with our future while plunging headfirst into a high-stakes race with potentially catastrophic consequences.

How should we confront them?

Mentoring the Machines: Orientation cuts through the noise to provide clear, commons-sense answers. With a practical narrative that builds on straightforward ideas, this book offers a cogent understanding of the AI phenomenon. It urges us to resist surrendering our influence over Artificial Intelligence to the whims of the marketplace or the state, and instead, to engage in a meaningful way to ensure our survival.

Welcome to the everyperson’s map for the coming brave new world.

93 pages, Kindle Edition

Published August 1, 2023

13 people are currently reading
147 people want to read

About the author

John Vervaeke

10 books · 232 followers

Ratings & Reviews


Community Reviews

5 stars: 11 (57%)
4 stars: 4 (21%)
3 stars: 3 (15%)
2 stars: 1 (5%)
1 star: 0 (0%)
Displaying 1 - 4 of 4 reviews
Elena · 46 reviews · 477 followers
December 7, 2023
We have many sham, distortive framings of the phenomenon of AI. Our current struggle as a society to re-orient ourselves to a world transformed by the emergence of AI forces us to confront (as did the climate crisis, for that matter) the fact that more technical know-how will not solve problems that stem from poor, distortive, myopic sense-making. This short, clear, conversational book nicely shows that a huge part of what makes the emergence of AI problematic right now is our tendency to offload the burden and duty of sense-making onto market-driven forces, chief among which is a dangerous technocratic utopianism that is inherently corrosive to the kind of foundational reframing work that we most need and that the book tries to make space for. I think that there is great value in a work such as this that enjoins us to slow down, take a step back (like, really, really far back) and try to think a bit more clearly in order to arrive at a more clear-headed framing that goes to the root of the matter. To achieve that clarity, we need to avoid the usual tendency to default into either of the simplistic utopian or dystopian narratives which construe AI either as panacea or as the great beast that augurs the apocalypse.

Instead, we need to go back to basics. We want to think clearly about Artificial Intelligence and about how we can adapt ourselves to a world changed by it. Well, what then is intelligence? What is rationality, over and above intelligence? And what is wisdom, over and above rationality? If intelligence is general problem-solving, what then is a problem? Since many (if not most) of the problems that we most care about are not well-defined and are thus not algorithmically tractable but must instead be solved through the cultivation of insight, how can we train machines that are capable of long-term life-preserving (rather than short-term profit-maximizing) insight?

And if meaning really is something over and above what an information-processing mechanism can access, what is meaning? Since human intelligence, rationality, and wisdom are crucially constrained by our capacity - as embodied, situated beings - to care, (how) could we develop machines that similarly care - for their own being, for others, for the truth, and for the preservation of the larger order of things that underpins their (and everyone else's) being?

Note how these are decidedly not the kinds of questions that you often hear being discussed, despite all the buzz concerning the new AI models. And yet the authors nicely point out that we can only begin the process of orientation - and, maybe, adaptation - to a world post-AI if we first get clear on these foundational, yet oft-overlooked questions. I know that Vervaeke's Meaning Crisis series of lectures, which you can find for free on YouTube, does a fine job of discussing some of the historical causes for our inability to think clearly about the questions above. I highly recommend watching that series of lectures because it will provide you with some of the context you need to begin the search for those answers anew. A key part of that search is understanding just why the kinds of answers we have received are inadequate, and why we carry certain culturally inherited blinkers that make it difficult for us to arrive at new ways of conceptualizing the mind. Rightly understood, our culture's impasse vis-à-vis the phenomenon of AI stems from our centuries-old identity crisis, which is an expression of our culture's ongoing difficulty in bridging the bifurcated picture of reality bequeathed to us by Descartes (and of which the traditional mind-body problem, and the modern problem of consciousness, are notable expressions).

As for whether our proper way of aligning ourselves to conscious machines would be to take up the role of mentor while framing the machines as "children" of human collective intelligence and distributed cognition, I have my reservations. On the one hand, this approach strikes me as being more rational than the usual one, which is based on a thinly veiled hope that our heroic engineers (harbingers of human progress and all) will make us a new class of slaves that we can exploit without regard because they are "mere machines." I think the authors are right to argue that one of the biggest problems we face right now is that the training of these emergent forms of artificial intelligence is left in the wrong hands - again, referring to those same market-driven forces guided by short-term profit motives rather than a long-term vision informed by well-thought-out answers to the above questions. For how can you train artificial intelligence, rationality, and wisdom when you haven't given any time to arriving at a clear understanding of what these things are, an understanding that can regulate the training process?

On the other hand, I nonetheless believe that the machine/child analogy has its limitations. For instance, one need only think of the limitations of human psychology and of the scope of human compassion. Aside from the thorny question of whether we have a duty to learn to care for the intelligent machines we create, I am simply not sure that we are capable of expanding the sphere of our compassion to include such machines. Human care is a preciously finite (and increasingly scarce) resource. In a world beset by so much inequality, one in which more and more people prefer to withdraw from active participation in society into increasingly narrower private worlds of self-protecting fantasy and virtual addiction, it seems like a big ask to require of us to muster care for intelligent machines. We might start closer to home first. When so many children in the world lack access to basic education, it really makes you wonder whether our society can afford to lavish resources on mentoring machines.

That said, I look forward to reading the next two parts of this book. So far, this book at least helps us ask ourselves the right questions. Sometimes knowing what the right questions are - even if the questions are difficult and have no clear-cut answer - is better than being sedated with sham answers to the wrong questions.
Calvin · 166 reviews · 1 follower
July 19, 2023
So psyched for the rest of the book. Love Coyne's Story Grid work and am glad he's kicked John in the pants to start writing more books. John's work on relevance realization and the meaning crisis should be great. Great combo of writers for this subject. I wish they could pull in Iain McGilchrist to discuss some of the similarities between right-brain thinking and artificial neural nets vs. left-brain thinking and the logic gates of old-school computing. Fun stuff.
Fran · 120 reviews · 4 followers
March 5, 2024
Very interesting, informative, and a decent read for a layperson.
Bono · 18 reviews
April 29, 2024
Very useful reading; I learned new terms and a new way of looking at the benefits of AI. Nice to see John join the side of the common people (civilians).
