2084 and the AI Revolution, Updated and Expanded Edition

Paperback

26 people are currently reading
91 people want to read

About the author

John C. Lennox

71 books · 940 followers
John Carson Lennox is Professor of Mathematics in the University of Oxford, Fellow in Mathematics and the Philosophy of Science, and Pastoral Advisor at Green Templeton College, Oxford. He is also an Adjunct Lecturer at Wycliffe Hall, Oxford University and at the Oxford Centre for Christian Apologetics and is a Senior Fellow of the Trinity Forum. In addition, he teaches for the Oxford Strategic Leadership Programme at the Executive Education Centre, Said Business School, Oxford University.

He studied at the Royal School Armagh, Northern Ireland and was Exhibitioner and Senior Scholar at Emmanuel College, Cambridge University from which he took his MA, MMath and PhD. He worked for many years in the Mathematics Institute at the University of Wales in Cardiff which awarded him a DSc for his research. He also holds an MA and DPhil from Oxford University and an MA in Bioethics from the University of Surrey. He was a Senior Alexander Von Humboldt Fellow at the Universities of Würzburg and Freiburg in Germany. He has lectured extensively in North America, Eastern and Western Europe and Australasia on mathematics, the philosophy of science and the intellectual defence of Christianity.

He has written a number of books on the interface between science, philosophy and theology. These include God’s Undertaker: Has Science Buried God? (2009), God and Stephen Hawking, a response to The Grand Design (2011), Gunning for God, on the new atheism (2011), and Seven Days that Divide the World, on the early chapters of Genesis (2011). In addition to over seventy published mathematical papers, he is the co-author of two research-level texts in algebra in the Oxford Mathematical Monographs series.

Ratings & Reviews


Community Reviews

5 stars: 17 (37%)
4 stars: 19 (42%)
3 stars: 9 (20%)
2 stars: 0 (0%)
1 star: 0 (0%)
Displaying 1 - 14 of 14 reviews
Johnny
Author of 10 books · 144 followers
April 6, 2025
An updated version of Oxford professor John C. Lennox’s 2020 book of the same title, 2084 and the AI Revolution: How Artificial Intelligence Informs Our Future is a marvelous book for those who want to understand (but not program) artificial intelligence, and to consider what current and anticipated advances in AI (or, more broadly, Artificial General Intelligence, or AGI) might mean philosophically and theologically. Lennox comes at the subject from a distinctly Christian perspective, but he doesn’t appeal to distinctly Christian authorities until he has recounted the state of the art and the positions of experts who may or may not be Christian. The first (roughly) half of the book is devoted to the history of AI, its positives and negatives as we already know them, and general social and ethical considerations; the second half evaluates the first portion from a biblical perspective.

Let me be clear, as Lennox himself makes clear: this is not a Luddite approach that wants to halt technological advancement, nor is it a spiritualist approach that sees all technology as bad. Rather, it is an honest attempt to grapple with the implications of what is already in use and what is anticipated in the future. Lennox is insistent “…that my commitment to the biblical worldview does not make me a Luddite but, on the contrary, makes me deeply thankful to God for technological developments, especially those that bring hope to people in this damaged world who would otherwise have none…” (p. 240).

2084 and the AI Revolution cites the usefulness of AI for assisting with hearing and those who suffer from Parkinson’s Disease (pp. 25, 69), but offers warnings (along with even atheist and agnostic scientists and technologists) concerning: a) the lack of transparency where human developers don’t know for certain how the AI is weighing decisions; b) collecting disinformation and misinformation as part of the data on which decisions are based and deepfakes are created; c) use of gathered information for both social surveillance and surveillance capitalism; and d) possible dangers of autonomous weapon systems and vehicles when the goals they are set to accomplish don’t mesh with those of humans (citing Stephen Hawking on p. 59). I love some of his citations, including this profound observation from Clifford Stoll (famous for tracing a global hacker in The Cuckoo’s Egg): “Data is not information, information is not knowledge, knowledge is not understanding, and understanding is not wisdom.” (cited on p. 239)

Lennox is more skeptical with regard to the idea of singularity and transhumanism. He thinks many of the expressed goals in this regard (super-intelligence and immortality) are thinly veiled idolatry. He approaches arguments for humans who are like gods with both vigorous logic and biblical insight such that he discounts the utopian (or possibly, dystopian) visions of Homo deus and points readers to the Christian hope.

I found 2084 and the AI Revolution to be a marvelous addition to the earlier books like Stuart Russell’s Human Compatible and Brian Christian’s The Alignment Problem. For readers who want a volume that challenges one to more than just where to invest in the next big AI company, this is a unique blend of science, philosophy, and theology.
Steve
630 reviews · 24 followers
June 9, 2025
"2084 and the AI Revolution, Updated and Expanded Edition" by John C. Lennox (2024), narrated by the author himself, is a compelling exploration of artificial intelligence (AI) through a Christian lens, offering a thoughtful blend of scientific inquiry, philosophical reflection, and theological insight. Lennox, an Oxford mathematician and renowned apologist, brings his characteristic clarity and intellectual rigor to this audiobook, making complex topics accessible to listeners without a technical background. His warm, engaging narration enhances the experience, infusing the material with conviction and a touch of wit, though at times, his academic tone may feel dense for casual listeners.

The audiobook delves into the multifaceted implications of AI, from its current capabilities to its potential future impacts. Lennox begins by demystifying AI, distinguishing between narrow AI - used in applications like medical diagnostics and social media algorithms - and the speculative realm of Artificial General Intelligence (AGI), which aspires to rival human intelligence. He provides a balanced overview of AI’s benefits, such as advancements in healthcare and automation, while cautioning against its risks, including surveillance, job displacement, and ethical dilemmas like the value alignment problem, where AI’s goals may diverge from human values. His discussion of recent developments, including large language models like ChatGPT, feels timely and relevant, grounding the listener in the rapidly evolving tech landscape.

What sets 2084 apart is Lennox’s integration of a biblical worldview. He argues that Christianity offers unique, evidence-based perspectives on humanity’s quest for super-intelligence, addressing profound questions about human identity, consciousness, and morality. He challenges the trans-humanist vision of merging humans with machines to achieve immortality, positing that such aspirations overlook the intrinsic value of humanity as depicted in Scripture. Lennox’s critique of materialistic philosophies, often referencing thinkers like Yuval Noah Harari, is incisive, though it occasionally assumes familiarity with these works, which may leave you wanting more context.

The audiobook’s structure is well-organized, with chapters exploring AI’s societal impacts - communication, medicine, manufacturing, and surveillance - before delving into philosophical and theological reflections. Lennox’s use of analogies, such as information’s independence from physical media, effectively illustrates the limitations of materialist views of consciousness.

However, the audiobook isn’t without flaws. Even though the first 70% seemed to be pretty objective, the last 30% seemed to roll down a hill as the author started injecting his own religious beliefs and political leanings into the narrative (such as his political beliefs about the Russia and Ukraine war, and who is to blame). That last 30% may be too annoying for listeners, and may seem like preaching - almost like Oscar winners going up to the podium to receive their award and, after thanking their writers/directors/significant others, start spouting their political message when they should really just get off the stage! Lennox’s heavy reliance on biblical arguments may alienate non-Christian listeners, and his critiques of secular works can feel like extended tangents, disrupting the flow.

In addition, while the 2024 edition is updated, listeners might find the pace of AI advancements outstrips the book’s scope, as Lennox himself acknowledges the need for potential future updates. For example, he mentions Cortana AI in the first 20% of the audiobook; however, Microsoft already retired Cortana AI by the time the audiobook was published. The standalone Cortana app in Windows was deprecated in August 2023; support for Cortana in Teams mobile, Microsoft Teams display, Microsoft Teams Rooms, Outlook mobile, and Microsoft 365 mobile ended in fall 2023 - and the voice search and Play My Emails features in Outlook mobile were retired in June 2024.

"2084 and the AI Revolution" by John C. Lennox is, overall, a thought-provoking listen that balances optimism with caution, urging listeners to reflect on what it means to be human in an AI-driven world.
Daniel
115 reviews
January 3, 2026
My expectations for this volume were exceeded. It provides an adequate mix of science, philosophy, and theology that I appreciate. It distinguishes between AI and AGI (artificial general intelligence), calming some concerns, but raising other major ones. Lennox researches the topic very well and keeps a biblically-informed realistic perspective.
118 reviews
March 14, 2025
Excellent, and much-needed, contribution to the discussion of AI particularly with a focus on Christian ethics. Highly recommend to both Christians and non-Christians alike.
Garrett Cooper
45 reviews
January 3, 2026
John Lennox is obviously brilliant. An Oxford mathematician, he has devoted his brain to many areas of study. You can tell he has spent a LOT of time thinking and researching about AI, including the philosophy behind it as well as the technical side of it. I think he strikes a good balance between sincere warning and alarmism. He also brings in a lot of scripture toward the end to assist a Christian in seeing AI through a biblical lens. I wouldn’t necessarily recommend this book unless AI is something you are thinking about a lot (although maybe we all SHOULD be(?))
Dann Zinke
177 reviews
July 26, 2025
Interesting, but longer than it needed to be. It aims to be a survey of the current state of AI, but given how rapidly the AI landscape is changing Lennox's descriptions are usually too vague to be helpful. Also a bunch of his conclusions are guided by pre-mil eschatology, so if you're on board with that, great.
Logan Almy
82 reviews · 3 followers
November 6, 2025
Sections are excellent, but it jumps the shark when it comes to eschatology. 🙄
Chris Wray
513 reviews · 16 followers
February 5, 2026
In this updated version of a book he wrote in 2020, John Lennox approaches the topic of AI with the enthusiastic, thoughtful pugnaciousness that characterises all of his writing. As with most apologetic writing, he provides food for thought for the general reader, but only those who are already Christians are likely to engage with his arguments all the way to their conclusion. Finally, while he embarks on several tangents (including engaging with specific technical applications of AI as well as the writing of both Dan Brown and Yuval Noah Harari), he manages to keep the book on track overall. His central objective is to outline the nature of AI, its limitations, and how we should understand this in light of the Christian worldview and what is revealed to us in scripture about the nature of God, the world and ourselves. On this core objective, he delivers with aplomb.

Lennox points out that we often emphasise the notion of ‘intelligence’ when considering AI, and underplay the ‘artificial’, reflecting a misunderstanding of the nature of our humanity that is baked into naturalistic materialism. As Lennox explains pithily, we tend to anthropomorphise AI, which serves both to misunderstand the technology and to risk misunderstanding what it means to be human as we start to think of humanity in computational terms. On the contrary, we are not simply organic computers that process algorithms. Lennox then raises the three questions posed by Immanuel Kant in his Critique of Pure Reason as the three most important questions for any human being to consider: “What can I know? What can I hope for? And what must I do? They are the key existential questions we all ask as we seek meaning in life. This book is an attempt to address them in the context of AI.” These are worldview-level questions, and Lennox states them in slightly different terms when he writes that, “We humans are insatiably curious. We have been asking big questions since the dawn of history – about knowledge, origin, and destiny. Their importance is obvious. Our answer to the first shapes our concepts of who we are, and our answer to the second gives us goals to live for. Taken together, our responses to these questions frame our worldview, the (meta) narrative or ideology that directs our lives and shapes their meaning, the framework of which we are often barely aware. These are not easy questions, as we see from the many and contradictory answers on offer. Yet, by and large, we humans have not let that hinder us. Over the centuries, some answers have been proposed by science, some by philosophy, some based on religion, others on politics, and many on a mixture of all of these and more.”

At this point, I am happy to state that I agree with Lennox’s eventual conclusion that Artificial General Intelligence (in other words, an AI that has a genuinely human level of intelligence) is impossible. The reason for reaching that conclusion is inexorably related to worldview, and specifically our understanding of what it means to be a human being. But that is to get ahead of ourselves, and he begins by defining the nature and scope of AI as it currently exists. As Lennox explains, “at the heart of AI are models - that is, mathematical constructs that approximate aspects of real-world systems and enable us to identify patterns, make predictions, analyze outcomes, and make decisions that normally require human intelligence…AI has also been defined as the theory and development of computer systems that can perform tasks normally requiring human intelligence. The term "AI" is often applied to the machines themselves.” Unpacking this further, Lennox goes on to say that, “At the moment the most typical functional AI system is a computer equipped with a database and an algorithm designed to do one and only one thing that would normally take human intelligence to carry out. The term artificial (from the Latin words for skill and make) signals the fact that this is not natural intelligence, nor is it innate intelligence, but rather it is simulated intelligence. This has led to it being called narrow AI, artificial narrow intelligence, or weak AI. On the more speculative side, however, there is considerable interest in the ambitious quest - the holy grail of computer science - to build systems that can replicate all that human intelligence can do and more. This is called general AI, artificial general intelligence (AGI), or strong AI. Some think that we will be able to create general AI that will surpass human intelligence within a relatively short time, certainly by 2084. At the moment, however, only narrow AIs exist…Beyond AGI lies artificial superintelligence (ASI). Some hold that ASI, or even AGI, if we ever get there, will function as a benevolent god; others, as a totalitarian despot, such that the issue of who or what is in control, the so-called control problem, becomes an important consideration. In our contemporary technopoly, technology rules in a general sense. But will it one day rule in a more particular sense through an AGI or an ASI? And, if so, how can we prepare ourselves for it?”

At this point, it may be helpful to summarise. Narrow AI (the AI that currently exists) is an increasingly complex algorithmic model that can perform tasks of computation and analysis that would previously have required the application of human intelligence. However, this is quite different from asserting that the models reflect intelligence in themselves, and I think Lennox is correct to label them as simulated intelligence. He reinforces this point with a helpful analogy: “AI ‘intelligence’ is not real. C. S. Lewis gives an analogy that can help us understand the significance of the adjective artificial in artificial intelligence. It has to do with the distinction between making and begetting. A carpenter makes a chair, but he begets a child. The chair is an artifact – that is, it is made by his art or skill and therefore will reflect some of his tastes, but the child is begotten in his image and possesses all the characteristics of his life. There is an impassable gulf between the two that suggests there will always be an impassable gulf between any artifacts, including machines, of whatever sophistication produced by our skill, and their creators – us humans.”

Lennox points out that, “This again raises the question of how we define intelligence. Earlier, we gave a rough definition, and giving a more precise definition is like defining time. To paraphrase Augustine, we all know what time is until we are asked to define it. There is a similar tendency to reify intelligence - that is, to conceive of it as a "thing" and then attempt to locate and measure it. Intelligence, however, is not so much a thing as an abstract concept referring to certain human capacities. One way of working towards a definition is to list words and ideas we associate with intelligence, such as perception, imagination, capacity for abstraction, memory, reason, common sense, creativity, intuition, insight, experience, and problem-solving. Added to this, we can consider the spatial, linguistic, musical, and emotional dimensions of intelligence. Yet trying to define any of these tends to be filled with ambiguities.”

This is important because “Some aspects of intelligence lend themselves to computational simulation, others not so readily. For example, AI systems for facial recognition can be developed in the area of perception, but awareness, seeing and insight, qualia, and the possession of an inner life are way out of reach for the foreseeable future, since these abilities are connected with consciousness, of which our current understanding is negligible…What really matters is competence in completing a prescribed task, not consciousness of what that task happens to be. The machine may not be conscious in the same way we are, but it is programmed to respond cognitively in the ways that we do. In short, it acts like a human being, it does not think or feel like a human being. Therefore, intelligence can be thought of informally as the capacity to solve problems, whereas consciousness is the capacity to have subjective feelings and experiences (qualia).”

This is a critical distinction, because even the most complex and capable AI does not and cannot have experiences or feelings - in other words, it does not have consciousness. As Lennox goes on to assert, this is not simply a limitation of computing power or complexity, but reflects the fact that human nature is not explicable in terms of naturalistic materialism. He reinforces this fact by explaining that “Machine learning (ML), the motor that drives AI, is a branch of computational statistics dedicated to designing algorithms that can use new data to construct analytical models without explicitly programming the solution. An ML system collects data, identifies patterns, and makes decisions based on those patterns. The algorithms involved in ML differ from earlier classical-type algorithms in that they are no longer a set of steps that lead to precise results…Rather, they are a set of steps designed to improve imprecise results. ML is essentially a tool of prediction in the statistical sense - it uses information you already possess to deliver information you do not yet have…in many contemporary advanced AI systems the human element in the operation of the system is limited or almost nonexistent. In much of the early work in AI, humans explicitly devised an algorithm to solve a particular problem. Yet in more recent AIs they do not. Instead, they devise a general learning algorithm that then "learns" a solution to the problem…To summarize, what ML does much better than humans do is to pick out particular patterns from a vast mass of data - often far more data than any human mind can retain.” In all this, Lennox’s summary still applies, that the “human involvement is conscious. The machine itself is not.”

Another key distinction between AI systems and humans is in the area of ethics, and specifically in our status as moral agents. To be sure, AI can and should have ethics programmed into it, but as Lennox points out, “the ethics of a system will depend on the values of its programmers, which in turn will be determined by their ethical perspective.”

He continues by highlighting an important distinction between such programmed ethics and true moral agency: “acting according to externally imposed ethical principles that are embedded in its programming (as, for example, in an autonomous vehicle) is not the same as human moral agency. Human moral agency is usually associated with an internal capacity to distinguish between right and wrong and to reason about them, the awareness of moral obligations, the freedom to act either in accord with or against them, and an understanding of what has been done. It assumes the existence of a consciousness that machines per se do not possess. They have no inner life.” Moral agency is a critical (and unique) aspect of our humanity, and one of the blockers to the creation of a true AGI. Lennox drives this point home by quoting Danny Crookes, Professor of Computer Engineering at Queen's University Belfast: “We are still a long, long way from creating real human-like intelligence. People have been fooled by the impact of data-driven computing into thinking that we are approaching the level of human intelligence. But in my opinion, we are nowhere near it. Indeed, it might be argued that progress in real AI in recent years has actually slowed down. There is probably less research into real AI now than before because most of the funding is geared essentially to advertising! Researchers follow the money…There are huge challenges in our understanding of the human reasoning process…two fundamental problems yet to be cracked: (1) Even if we knew the rules of human reasoning, how do we abstract from a physical situation to a more abstract formulation so that we can apply the general rules of reasoning? (2) How can a computer build up and hold an internal mental model of the real world? Think of how a blind person visualises the world and reasons about it. Humans have the general-purpose ability to visualise things and to reason about scenarios of objects and processes that exist only in our minds. This general-purpose capability, which humans all have, is phenomenal; it is a key requirement for real intelligence, but it is fundamentally lacking in AI systems. There are reasons to doubt if we will ever get there…we need to be careful about even assuming that humanity has the intellectual capability to create an intelligence rivalling human intelligence, let alone superseding it, no matter how much time we have.” Lennox draws out one aspect of Crookes's statement when he highlights that, “The second point made by Crookes is important in understanding the limitations of AI. It is that human beings have the unique cognitive ability to construct mental models. Machines, not having minds, or consciousness, simply do not have this capacity. Mathematician Hannah Fry makes a wry and apt comment: ‘For the time being, worrying about evil AI is a bit like worrying about over-crowding on Mars. Maybe one day we'll get to the point where computer intelligence surpasses human intelligence, but we're nowhere near it yet. Frankly, we're still quite a long way away from creating hedgehog-level intelligence. So far, no one's even managed to get past worm.’”

In other words, “Humans have the mental capacity to look at problems in different ways or to reframe them. And that ability to frame a problem is central to teasing out the difference between the machine and human roles in AI…Each stage involved the creation of a mental model to frame the problem and imagine how various scenarios might play out. And only humans can do that…Frames come at different scales. From a philosophical perspective, paradigms are like large frames, and worldviews like even larger ones. Paradigm shifts or changes of worldviews are major events, whereas framing occurs much more frequently at all kinds of lower levels in everyday life. Framing helps focus our minds and create new options for action by giving us different perspectives.”

So far, Lennox has provided much food for thought for the general reader, prompting us to clarify the nature of the technology we are discussing, its benefits, any associated risks, and whether it raises ethical concerns. From here, he considers some of these questions in light of an explicitly Christian worldview. First, it is clear that Lennox is definitely not an anti-technology Luddite, stating that he is “deeply thankful to God for technological developments, especially those that bring hope to people in this damaged world who would otherwise have none - giving hearing to the deaf, sight to the blind, limbs to the limbless; eradicating killer diseases; and benefiting from a host of other things that represent magnificent work in the spirit of a Creator who has made humans in his image to be creative themselves.”

The Christian understanding of humanity is that we have been made in the image of God, with an inherent dignity and value that is unique, and a status as responsible moral agents. Moreover, we have been created to be sub-creators, and AI is just one more way in which that works itself out, as we fulfil the God-given mandate to tend, subdue and fill creation. Unfortunately, Genesis 3 follows Genesis 1 and 2, and so we also need to contend with the fact that we are fallen and sinful. In that regard Lennox comments that, since the Fall, we have been fleeing from God, “a flight that bears within it all the seeds of dystopia. For there has lurked in the human heart the suspicion that God, if he exists at all, is innately hostile to us. He does not wish our happiness, well-being, or even protracted existence. Human history shows that we have used our autonomy to get out of control…That is exactly what drives some of the fears around AI. What if our creations get out of control? Would a superintelligent homo deus do to the rest of us what we have done to God?"

As we consider the fact that AI does not have any inherent morality of its own, but simply reflects the morality of its programmers, Lennox calls us to “revisit one of the most significant sources of values that has proved its worth in giving men and women a sense of dignity and providing the foundation for human rights legislation, and much else…liberal democracy depends on the fact that all humans share an undefined "Factor X" on which their equal dignity and rights are grounded…I would want to say that Factor X has actually been defined: it is being made in the image of God.” The most interesting and important questions for us to answer remain the same as they always have: Who are we? Why are we here? Where are we heading? The answers that Christianity offers to these questions are very different to, and much more compelling than, those offered by the naturalistic materialism that dominates in Western culture today. In fact, I would contend that our culture can only maintain a commitment to concepts like freedom, equality and human rights by borrowing these from Christianity - but that is for another book (review).

In terms of our consideration of AI, Lennox offers the following conclusion: “The quest for upgrading humans, creating superintelligence and godhood, is very ancient and, in its contemporary form, dressed up in the language of advanced computer technology, very alluring. The project sounds like the culmination of billions of years of development, initially blind and natural and finally directed by the human mind to which those evolutionary processes gave rise…Yet at its heart, it delivers a flawed narrative that is neither true to the past nor to the nature of reality. Indeed, its narrative is the reverse of what actually is the case. Superintelligence and godhood are not the end products of the trajectory of the history of human ingenuity. If there is a God who created and upholds the universe and who made us in his image, then a superintelligence, God himself, has always existed. He is not an end product. He is the producer…Jesus' mind was not uploaded onto silicon; he ascended bodily into heaven and so, one day, will those who follow him. This claim clashes head-on with the dominant earthbound, atheistic naturalism of the Western academy that teaches that this world is all that there is…The promises of AGI are firmly rooted in this world, and in that sense they are parochial and small compared with the mind-boggling implications of the resurrection and ascension of Jesus.”
Wendy Blankinship
210 reviews
December 24, 2024
First… the original book was published in 2020 and ran roughly 4 hours on Audible. Lennox redid the book; it came out in October 2024 and is now 17 hours long on Audible. Truly incredible that so much happened in four years to warrant such a big update.

This was fascinating… I had never read any John Lennox before and I thoroughly enjoyed learning about him and his ideas. Not gonna lie… much of this was over my head and I could definitely use a second read-through, but overall I learned a lot about AI and its possible effects on humanity.
Stephen Wallace
15 reviews
December 30, 2025
I found this book really difficult to rate. The first two thirds is excellent. Lennox gives a really good summary of different approaches to AI (not just the more sensationalist views that get media attention).
He introduces ethics in a considered way.
Historically it’s pre-Trump. As a Christian mathematician, I took from it the healthy reminder that the real and more imminent danger is not some future existential threat from AI, but the more immediate threat from power-hungry men using it to their advantage, whether Muskites or Muscovites or political systems as seen in China.
The book, though, loses its way. It becomes more a critique of Yuval Harari than of AI. But more so, it takes on a very literal view of biblical prophecy and tries to force it into a modern setting. These approaches seem to think writing from the first century had as its primary object the 21st century. I still remember the huge disappointment at EU expansion as Revelation ran out of toes!
Personally I find it much more compelling to stick to Biblical arguments about human nature than speculating about the beast in Revelation (who if Lennox was writing now may need to include Trump on the list!)
Summary - first two-thirds well worth reading. Last few chapters not great.
925 reviews · 10 followers
November 30, 2025
This is quite a long and deep dive into AI by the brilliant John Lennox, Oxford mathematician and faithful follower of Jesus. He takes a look at the good possibilities of AI and also the darkness of AI with a fairly balanced view.

I learned a lot about AI from reading this book, a lot that I hadn’t even considered up to this point. Naturally, the powers that be are trying to shape it to their own ends for both good and evil.

Mr Lennox spends about the last third of the book looking at the Christian faith in light of AI and has an excellent presentation of the gospel in the process which is why I love this guy.

Good reading, although slow at times, in order to understand the spirit of the age of AI.
Matt Robertson
50 reviews · 5 followers
March 26, 2025
What a very odd book. Lennox addresses several important topics, most (but not all) of them related to AI, and often (but not always) addressing them biblically. The book is somewhat helpful but could have been 1/3 the length and much more focused. Unfortunately there aren’t many alternative books available on the topic yet, so I’d still recommend reading it.
16 reviews · 1 follower
December 11, 2025
Very interesting. I learned a lot. Ironically, I had to use ChatGPT to have some concepts explained to me. I directed it to explain them at a high school sophomore level!
