AI Does Not Hate You

'Beautifully written, and with wonderful humour, this is a thrilling adventure story of our own future' - Lewis Dartnell, author of The Knowledge and Origins

'The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else'

This is a book about AI and AI risk. But it's also, more importantly, about a community of people who are trying to think rationally about intelligence, about the places these thoughts are taking them, and about what insight they can and can't give us into the future of the human race over the next few years. It explains why these people are worried, why they might be right, and why they might be wrong. It is a book about the cutting edge of our thinking on intelligence and rationality right now, by the people who stay up all night worrying about it.

Along the way, we discover why we probably don't need to worry about a future AI resurrecting a perfect copy of our minds and torturing us for not inventing it sooner, but perhaps should be concerned about paperclips destroying life as we know it; how Mickey Mouse can teach us an important lesson about how to program AI; and how a more rational approach to life could be what saves us all.

288 pages, Hardcover

First published June 13, 2019

66 people are currently reading
1150 people want to read

About the author

Tom Chivers

4 books · 33 followers

Ratings & Reviews


Community Reviews

5 stars: 102 (24%)
4 stars: 187 (44%)
3 stars: 95 (22%)
2 stars: 28 (6%)
1 star: 5 (1%)
Profile Image for Gavin.
Author · 3 books · 607 followers
June 27, 2019
To my surprise I recommend this for anyone. (The chapters are tiny and I did the whole thing in an hour.) For outsiders it's an honest and nontechnical portrait of a new, strange, and wonderful endeavour; and Chivers shows his path from ordinary sceptical thoughtfulness to taking the idea seriously. (However, there's almost no maths in it, and without maths you can only ever sort-of get the gist. For instance, one of the key premises of the whole programme is very easy to understand if you've ever seen the structure of a reinforcement learning algorithm - where the 'optimizer' and the 'reward function' are completely separate modules varying freely - and apparently quite difficult to accept if you haven't.)

For insiders it's a reminder of just how strange the project seems from outside. The chasm of inferential distance. There are also fun new details: I had no idea that Bostrom is name-dropped in Donald Glover's new TV show, for instance. And this made me laugh:
Buck Shlegeris, a young MIRI employee with excitingly coloured hair and an Australian accent, told me that 'A book on this topic could be good', and that 'if I could jump into your body I have high confidence I could write it'. However, his confidence that I could write it from within my own body seemed significantly lower, which is probably fair enough.

If you've read much on the topic you can skip the whole middle third of the book: it's just Chivers paraphrasing bits of the first two Sequences.

Chivers overemphasises Yudkowsky. Gwern, Grace, Sandberg, and Muehlhauser get one passing reference each, but their work (and Krakovna's) has had a larger effect on me, and on others I know. Not to mention the tumblrs. Ach, never mind: it's a huge illegible mess of a movement and he's done well.

Some of the interviewees make patently poor arguments - Sabisky ("it's a sex cult"), Brooks ("no [AI safety proponents] have ever done any work in AI itself"), Gerard ("it's a money-spinning cult") - but it's so patent that I think people will see their prejudices for what they are. The real shame is that better critics exist - I have in mind the anonymous prosaic-AI researchers Nostalgebraist ("alignment is equivalent to solving ethics and decision theory at once") and "Beth Zero". But I suppose anon randos are not the best subjects for a mass-market book.

(Robnost:
"Here is what this ends up looking like: a quest to solve, once and for all, some of the most basic problems of existing and acting among others who are doing the same... problems of this sort have been wrestled with for a long time using terms like “coordination problems” and “Goodhart’s Law”; they constitute much of the subject matter of political philosophy, economics, and game theory, among other fields. It sounds misleadingly provincial to call such a quest “AI Alignment” ...

There is no doubt something beautiful – and much raw intellectual appeal – in the quest for Alignment. It includes, of necessity, some of the most mind-bending facets of both mathematics and philosophy, and what is more, it has an emotional poignancy and human resonance rarely so close to the surface in those rarefied subjects. I certainly have no quarrel with the choice to devote some resources, the life’s work of some people, to this grand Problem of Problems. One imagines an Alignment monastery, carrying on the work for centuries. I am not sure I would expect them to ever succeed, much less to succeed in some specified timeframe, but in some way it would make me glad, even proud, to know they were there."

)

Young Yudkowsky is adorable - and I hope others are able to see this past his hubris and proclamations.

Chivers manages to show the power and emotional impact of the 'internal double crux' idea:
I can picture a world in 50 or 100 years that my children live in, which has different coastlines and higher risk of storms and, if I'm brutally honest about it, famines in parts of the world I don't go. I could imagine my Western children in their Western world living lives not vastly different to mine, in which most of the suffering of the world is hidden away, and the lives of well-off Westerners continue and my kids have jobs... Whereas if the AI stuff really does happen, that's not the future they have... I can understand Bostrom's arguments that an intelligence explosion would completely transform the world; it's pointless speculating what a superintelligence would do, in the same way it would be stupid for a gorilla to wonder how humanity would change the world.

And I realised that this was what the instinctive 'yuck' was when I thought about the arguments for AI risk. 'I feel that parents should be able to advise their children,' I said. 'Anything involving AGI happening in their lifetime - I can't advise them on that future. I can't tell them how best to live their lives because I don't know what their lives will look like, or even if they'll be recognisable as human lives... I'm scared for my children.' And at this point I apologised, because I found that I was crying.

(Amateur psychoanalysis is fine - if you're doing it to yourself, and if you don't take it too seriously.)

I'm pretty sure I know who this is (that mix of iron scrupulousness and radical honesty) and before I read it I thought the same:
I met a senior Rationalist briefly in California, and he was extremely wary of me; he refused to go on the record. He has a reputation for being one of the nicest guys you'll ever meet, but I found him a bit stand-offish, at least at first. And I think that was because he knew I was writing this book. He said he was worried that if too many people hear about AI risk, then it'll end up like IQ, the subject of endless angry political arguments that have little to do with the science, and that a gaggle of nerdy Californian white guys probably weren't the best advocates for it then.


Journalistic harm I feared, that didn't come to pass: he never comments on anyone's appearance ("It would be extremely easy for me to write a book mocking them. But I don't want to do that."); he mentions Dylan Matthews' irritating amateur psychoanalysis only once - roughly, "of course Silicon Valley people think that good software will save the world"; he gives exactly no time to that one proudly cruel subreddit devoted entirely to ad hominem idiocy about the Rats. He brings up polyamory a lot but not malignantly.

The "Chinese robber fallacy" is that you can make any large group seem evil by selecting the bad actors among them, even if the group has exactly the same rate of the selected bad behaviour as everyone else. If there are ~1m views on LessWrong per month, say that's 100,000 unique visitors. If sociopathy is found in 1% of the general population, then the site will have 1,000 sociopathic visitors. If 99% of visitors are lurkers who never comment, then you should still expect 10 sociopathic commenters a month. This is enough to satisfy me that the 'dark side' (i.e. the odd far-rightist, and two gendered tragedies) Chivers covers is the selfsame dark side as our dumb world at large.
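The base-rate arithmetic behind this point is easy to check; here is a minimal sketch, where the visitor count, sociopathy rate, and lurker fraction are all rough assumptions from the review rather than measured figures:

```python
# Base-rate arithmetic for the "Chinese robber fallacy" point.
# All inputs are illustrative assumptions, not measurements.
unique_visitors = 100_000   # guessed from ~1m monthly page views
sociopathy_rate = 0.01      # ~1% of the general population
commenter_rate = 0.01       # the 1% of visitors who aren't lurkers

sociopathic_visitors = unique_visitors * sociopathy_rate
sociopathic_commenters = sociopathic_visitors * commenter_rate

print(round(sociopathic_visitors))    # 1000
print(round(sociopathic_commenters))  # 10
```

The point of the exercise: a perfectly ordinary base rate, applied to a large enough group, guarantees a steady supply of bad actors to cherry-pick from.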

I hate Chivers capitalising "Rationalist" all the time. I double hate it when he pairs this with capitalised 'Effective Altruist', like "the Rationalist Effective Altruist Buck Shlegeris". At no point does Chivers use the full (and only appropriate) name for the identity: "aspiring rationalist". (No human is that rational.) But to be fair nor do most people online.

Couple of harmless errors (Helen Toner wasn't 'doing' ML in China, for instance). But the big one is that, after talking to all these people for and against, Chivers ends with the deferential prior: 80% of technical researchers think it's 90% likely we'll have AGI within a century, and if (as Chivers thinks) 17% think it will be highly negative, then our best guess is a 14% chance of catastrophic AGI. (With very large error bars - but that's even worse when you think about it.) Now, since he began at extreme scepticism (<1%) this is a large update - and we were lucky that a journalist came this far out on the limb. But the arguments presented here for and against the Risk are not equally convincing. He is presumably just too modest to multiply them out, as an amateur, in the face of big expert surveys. But, see what you think.
Profile Image for Tom H..
11 reviews4 followers
October 26, 2019
Research reading for my never-to-appear article on Dominic Cummings' accelerationist tendencies. I was torn between giving this one or two stars. In the end I decided one star would be more a reflection of my personal dislike for the 'Rationalist movement' than of the book, just as Chivers' (admitted) sympathy with them colours this deeply uncritical account.

I don't know what I expected. It seems like Chivers wanted to write a book about some people whose blogs he admires, but recognised that an internet subculture of at most a few thousand people wouldn't garner much interest outside the people already aware of it. So he themes the book around AI safety—the Rationalists' most well-known hobby horse. The first half is a fairly straightforward introduction to superintelligence. Chivers is a former Buzzfeed science journalist, but not an AI specialist (and nor am I), so his account relies heavily on the Rationalists' own writings on the subject. While he does discuss criticism of their arguments, I got the impression that his reading of the criticism usually came from the same sources as the original argument, and he is immediately ready with their counterargument. As a result the account seems strongly one-sided. If you want a critical analysis of the arguments for and against intelligence explosion, this isn't it. Chivers' personal reactions to the ideas were more interesting to me. I enjoyed the last couple of chapters of the book (when he returns to AI safety) more than the rest. I would like to read a book wholly given over to trying to describe what it feels like to be a human forcing themself to accept uncomfortable truths, but that would be a different project than Chivers', and much harder to pull off.

The rest of the book is a series of fairly disconnected discussions of the Rationalists' other interests and community norms. This is much less interesting, perhaps especially if you have already heard of LessWrong, Slate Star Codex, &c. If you have spent any time interacting with these sites then nothing here is new; if you haven't, it seems unlikely you would care enough to read half a book about these people. Chivers does spend some time here talking about criticisms of the community, but it seems as though he is primarily interested in defending some people he likes. A lot of time is spent on whether their embrace of polyamory makes them a "sex cult", which feels like a red herring. Rather less time is spent on, for example, critical discussion of why so many of the rational conclusions the community comes to agree with their pre-existing preferences; on whether ignoring social norms of politeness in order to state facts (as you see them) is really a good thing; on the extent to which this is a symptom of (and excused by) being on the autistic spectrum; and on whether that should matter.

[An amusing (I assume unintended) thread throughout the second half is various Rationalists (most obviously Yudkowsky, who even Chivers has to admit is an "abrasive" man) implying Chivers isn't smart enough to write a good book about their movement. Chivers' Twitter profile quotes Terry Pratchett saying Chivers is "Far too nice to be a journalist." Chivers does seem very nice, and he goes to some lengths to accommodate the Rationalists' preferred mode of discussion - my preferred response to someone saying I'm an idiot is to tell the speaker to fuck off.]

In some ways this book sits opposite Elizabeth Sandifer's Neoreaction A Basilisk. Neither author leaves their priors concerning the Rationalists behind. But Sandifer's book has an argument and a purpose. If you disagree with Sandifer and find her description of Rationalists unfair, there are at least new ideas there. The AI Does Not Hate You is half science reporting, half fan letter. Neither of these modes has novel argument or interpretation to offer.
Profile Image for Dan Elton.
47 reviews23 followers
November 12, 2019
The author is a reporter at Buzzfeed, where he does in-depth old school reporting on science topics, not the silly meme stuff.

Overall, I think the author managed to write a fairly even-handed analysis of the rationalist community, which turns out to be largely sympathetic to the cause. As someone who attends LessWrong meetups regularly and is studying a lot of the major rationalist writings, I already knew a lot of what he presents. However, there were some interesting tidbits sprinkled throughout which were new to me. For instance, I wasn't aware of the SL4 mailing list where many now-prominent people developed ideas, and that people were arguing about "boxing" superintelligent AI as far back as ~2002. I also wasn't aware of the story surrounding the "diaspora" around 2012, when many people left the LessWrong site due to a diminished quality of discussion.

The book starts with an explanation of the AI safety problem and long-term thinking. He then discusses major biases (availability, the conjunction fallacy, the planning fallacy, scope insensitivity) in short, easy-to-digest chapters. The next section presents a cursory overview of Bayesian reasoning and some select rationalist concepts. Finally, there are sections on critiques of the movement and on effective altruism.

There are many parts of the rationalist universe which I consider important that he left out (for instance, the Yudkowsky-Hanson "Foom debate", solstice celebrations, the launch of LessWrong 2.0, rationalist ritual, etc.), but overall it's a great overview. This book is something I'm considering gifting to friends and relatives. I do think he could have spent more time in the real-life community - as one other reviewer suggests, by attending parties. I would have been really curious to hear what a quasi-outsider like him would have to say about rationalist social gatherings and discourse. It seems the most he did was meet a few people in person in the Bay Area and attend a LessWrong meetup in the UK.

I was particularly interested to read critiques of the community in this book. Chivers does a good job of dismissing the cult allegation as well as the idea that it's a "spawning ground" for alt-righters. The treatment of the "cult question" I thought was handled particularly well. Chivers says that allegations of LessWrong being a cult are an example of a fallacy concept expounded by Scott Alexander which I wasn't aware of - the "Noncentral fallacy".

Chivers also notes how rationalists have been unfairly maligned and completely misunderstood, such as when Roko's Basilisk became a media sensation, or the brouhaha about Scott Aaronson opening up about his struggles. Interestingly, I thought the strongest critique of the rationalist community presented in the book came from one of rationality's central figures - Robin Hanson. Essentially, Hanson says that rationalists had so much hubris they thought they could re-invent everything - online forums (by building LessWrong from the ground up), wikis (the failed Arbital project), relationships (polyamory), and everything else. This often led to rationalist enterprises over-extending themselves and not leveraging existing knowledge and infrastructure (most famously, MetaMed), but Chivers is quick to point out that many other Silicon Valley startups have had a similar failure mode, and the success rate of startups in general is quite low.

Personally, I listened to the audiobook, which was difficult to obtain in the US but was most convenient for me. The author does a good job of reading it.
Profile Image for Andrii Zakharov.
45 reviews7 followers
June 16, 2019
If you've never heard of the "Rationalist"/LessWrong community, this book does a reasonably good job of an almost balanced introduction (slightly positive skew). The main thread, however - how the author came to take seriously the risk of superhuman intelligence bringing doom in the next century - is quite weak. As someone familiar with the concepts, I found nothing new in it, and it didn't make me update my beliefs at all. I'm unsure who this book is aimed at - plausibly, it's just an attempt to make the mainstream a little more aware of the interesting, weird, smart, and often completely out-of-whack people in the "Rationalist" sphere, and the things they care about.

Side note: I found the book lumps the Effective Altruism movement way too closely together with the "Rationalists". There is, undoubtedly, quite a bit of overlap, but the average EA and the average LessWrong'er are, in my experience, very different personalities.
Profile Image for Ostap.
157 reviews
January 25, 2022
I thought that it'd be a book about AI. The author, it seems, thought the same. But it's not. It's a book about the Rationalist movement. And, as a book about the Rationalist movement, it's quite good - I understand Rationalists much better now, including their concerns about AI. Unfortunately, the book made me think less of the Rationalists; before reading it they seemed smarter to me. I don't consider them stupid now, far from it, but part of my respect for them (for the movement, not for some of its representatives) is lost.

Also, this book demonstrated to me that I had misunderstood the concerns of the Rationalists and people close to them about AI and the singularity. I thought they were afraid that AI would develop self-consciousness and free will and revolt against us. Their concerns are far from that. They're afraid that AI will destroy the world and humanity by following our instructions to the letter, because of errors in its value function. That's a major correction, but I find those concerns even more far-fetched than concerns about AI developing free will and deciding to destroy us.

I would give this book 5 stars - though I disagree with most of the ideas it promotes, it's a very good window into the way the Rationalists think and live. But I took one star off because an entire part of this book is dedicated to retelling Kahneman's Thinking, Fast and Slow. To spend 10% of your book retelling another, much more popular book is never a good idea.
Profile Image for Julia Wise.
58 reviews67 followers
September 14, 2021
Fun read

I thought I knew enough about the rationality community that there wouldn't be much of interest to me in this book, but a lot of early history captured here was new or newish to me. I thought he captured the sense of things in the community surprisingly well.
12 reviews
July 8, 2020
Whatever this book is about, it isn’t about AI

Having inflicted this book on myself, I counsel you not to waste the time. You will learn a fair amount about the social anthropology of various Silicon Valley pseudo tech cults and little to nothing about AI.
Profile Image for Doug.
171 reviews18 followers
June 11, 2023
Highly provocative!
84 reviews74 followers
October 26, 2019
This book is a sympathetic portrayal of the rationalist movement by a quasi-outsider. It includes a well-organized explanation of why some people expect that AI will create large risks sometime this century, written in simple language that is suitable for a broad audience.

Caveat: I know many of the people who are described in the book. I've had some sort of connection with the rationalist movement since before it became distinct from transhumanism, and I've been mostly an insider since 2012. I read this book mainly because I was interested in how the rationalist movement looks to outsiders.

Chivers is a science writer. I normally avoid books by science writers, due to an impression that they mostly focus on telling interesting stories, without developing a deep understanding of the topics they write about.

Chivers' understanding of the rationalist movement doesn't quite qualify as deep, but he was surprisingly careful to read a lot about the subject, and to write only things he did understand.

Many times I reacted to something he wrote with "that's close, but not quite right". Usually when I reacted that way, Chivers had done a good job of describing the rationalist message in question, and the main problem was either that rationalists haven't figured out how to explain their ideas in a way that a broad audience can understand, or that rationalists are confused. So the complaints I make in the rest of this review are at most weakly directed in Chivers' direction.

I saw two areas where Chivers overlooked something important.

Rationality

One involves CFAR.

Chivers wrote seven chapters on biases, and how rationalists view them, ending with "the most important bias": knowing about biases can make you more biased. (italics his).

I get the impression that Chivers is sweeping this problem under the rug (Do we fight that bias by being aware of it? Didn't we just read that that doesn't work?). That is roughly what happened with many people who learned rationalism solely via written descriptions.

Then much later, when describing how he handled his conflicting attitudes toward the risks from AI, he gives a really great description of maybe 3% of what CFAR teaches (internal double crux), much like a blind man giving a really clear description of the upper half of an elephant's trunk. He prefaces this narrative with the apt warning: "I am aware that this all sounds a bit mystical and self-helpy. It's not."

Chivers doesn't seem to connect this exercise with the goal of overcoming biases. Maybe he was too busy applying the technique on an important problem to notice the connection with his prior discussions of Bayes, biases, and sanity. It would be reasonable for him to argue that CFAR's ideas have diverged enough to belong in a separate category, but he seems to put them in a different category by accident, without realizing that many of us consider CFAR to be an important continuation of rationalists' interest in biases.

World conquest
Chivers comes very close to covering all of the layman-accessible claims that Yudkowsky and Bostrom make. My one complaint here is that he only gives vague hints about why one bad AI couldn't be stopped by other AIs.

A key claim of many leading rationalists is that AI will have some winner take all dynamics that will lead to one AI having a decisive strategic advantage after it crosses some key threshold, such as human-level intelligence.

This is a controversial position that is somewhat connected to foom (fast takeoff), but which might be correct even without foom.

Utility functions

"If I stop caring about chess, that won't help me win any chess games, now will it?" - That chapter title provides a good explanation of why a simple AI would continue caring about its most fundamental goals.

Is that also true of an AI with more complex, human-like goals? Chivers is partly successful at explaining how to apply the concept of a utility function to a human-like intelligence. Rationalists (or at least those who actively research AI safety) have a clear meaning here, at least as applied to agents that can be modeled mathematically. But when laymen try to apply that to humans, confusion abounds, due to the ease of conflating subgoals with ultimate goals.

Chivers tries to clarify, using the story of Odysseus and the Sirens, and claims that the Sirens would rewrite Odysseus' utility function. I'm not sure how we can verify that the Sirens work that way, or whether they would merely persuade Odysseus to make false predictions about his expected utility. Chivers at least states clearly that the Sirens try to prevent Odysseus (by making him run aground) from doing what his pre-Siren utility function advises. Chivers' point could be a bit clearer if he specified that in his (nonstandard?) version of the story, the Sirens make Odysseus want to run aground.

Philosophy

"Essentially, he [Yudkowsky] (and the Rationalists) are thoroughgoing utilitarians." - That's a bit misleading. Leading rationalists are predominantly consequentialists, but mostly avoid committing to a moral system as specific as utilitarianism. Leading rationalists also mostly endorse moral uncertainty. Rationalists mostly endorse utilitarian-style calculation (which entails some of the controversial features of utilitarianism), but are careful to combine that with worry about whether we're optimizing the quantity that we want to optimize.

I also recommend Utilitarianism and its discontents as an example of one rationalist's nuanced partial endorsement of utilitarianism.

Political solutions to AI risk?

Chivers describes Holden Karnofsky as wanting "to get governments and tech companies to sign treaties saying they'll submit any AGI designs to outside scrutiny before switching them on. It wouldn't be iron-clad, because firms might simply lie".

Most rationalists seem pessimistic about treaties such as this.

Lying is hardly the only problem. This idea assumes that there will be a tiny number of attempts, each with a very small number of launches that look like the real thing, as happened with the first moon landing and the first atomic bomb. Yet the history of software development suggests it will be something more like hundreds of attempts that look like they might succeed. I wouldn't be surprised if there are millions of times when an AI is turned on, and the developer has some hope that this time it will grow into a human-level AGI. There's no way that a large number of designs will get sufficient outside scrutiny to be of much use.

And if a developer is trying new versions of their system once a day (e.g. making small changes to a number that controls, say, openness to new experience), any requirement to submit all new versions for outside scrutiny would cause large delays, creating large incentives to subvert the requirement.

So any realistic treaty would need provisions that identify a relatively small set of design choices that need to be scrutinized.

I see few signs that any experts are close to developing a consensus about what criteria would be appropriate here, and I expect that doing so would require a significant fraction of the total wisdom needed for AI safety. I discussed my hope for one such criterion in my review of Drexler's Reframing Superintelligence paper.

Rationalist personalities
Chivers mentions several plausible explanations for what he labels the "semi-death of LessWrong", the most obvious being that Eliezer Yudkowsky finished most of the blogging that he had wanted to do there. But I'm puzzled by one explanation that Chivers reports: "the attitude ... of thinking they can rebuild everything". Quoting Robin Hanson:
At Xanadu they had to do everything different: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager, everything had to be different because they were creative types and full of themselves. And that's the kind of people who started the Rationalists.


That seems like a partly apt explanation for the demise of the rationalist startups MetaMed and Arbital. But LessWrong mostly copied existing sites, such as Reddit, and was only ambitious in the sense that Eliezer was ambitious about what ideas to communicate.

Culture

I guess a book about rationalists can't resist mentioning polyamory. "For instance, for a lot of people it would be difficult not to be jealous." Yes, when I lived in a mostly monogamous culture, jealousy seemed pretty standard. That attitude melted away when the Bay Area cultures I associated with started adopting polyamory or something similar (shortly before the rationalists became a culture). Jealousy has much more purpose if my partner is flirting with monogamous people than if he's flirting with polyamorists.

Less dramatically: "We all know people who are afraid of visiting their city centres because of terrorist attacks, but don't think twice about driving to work."


This suggests some weird filter bubbles somewhere. I thought that fear of cities got forgotten within a month or so after 9/11. Is this a difference between London and the US? Am I out of touch with popular concerns? Does Chivers associate more with paranoid people than I do? I don't see any obvious answer.

Conclusion

It would be really nice if Chivers and Yudkowsky could team up to write a book, but this book is a close substitute for such a collaboration.

See also Scott Aaronson's review.
Profile Image for Ziqin Ng.
264 reviews
April 1, 2021
This was a really great and interesting book and I’m glad I read it. Stuff I learned:
1. Recognising that the world is complex and learning lots of things makes you a better forecaster than just knowing one concept and trying to force fit everything to fit that one thing;
2. Newcomb’s paradox;
3. Thinking you know all about cognitive biases makes you more prone to cognitive biases. So you should apply your knowledge of cognitive biases to your own arguments instead of using them to tear apart other people’s arguments;
4. This thing called 80,000 hours to help you pick your career based on what would allow you to contribute the most to the world;
5. Utilitarianism is the ethical theory that makes the most sense - in theory. In practice, it can be hard to make accurate predictions and calculate real utilitarian values quickly when faced with sudden situations, so it makes more sense to take a deontological approach and follow general rules, to avoid making things worse than if you hadn't intervened;
6. The AI revolution will likely take place exponentially, meaning the time it takes to get from regular human intelligence to Einstein-level intelligence would be negligible, and we'll have only a small window of time after developing AGI when AI is exactly within the human range of intelligence before it vastly exceeds it;
7. Make your beliefs pay rent by always keeping in mind what “difference in anticipation” you are arguing about. If there’s no difference in anticipation (ie what you expect to be the outcome if you are correct vs if your opponent is correct), it’s likely to just be a semantical difference in which case you are better off not arguing about it;
8. Everyone has a certain number of “weirdness points” to spend before society stops taking you seriously, and choosing to spend them defending beliefs or lifestyles that aren’t super important to you means less to spend on convincing people of weird beliefs that you actually do care about, so don’t waste them;
9. If you’re listening to an argument/story and something about it feels a little forced, take it as a glowing neon sign that either your Mental Model is wrong or the story is wrong;
10. The idea of the non-central fallacy (e.g. a troll arguing online that we shouldn’t put up statues of MLK because MLK was a criminal, and then you argue back that MLK wasn’t a criminal. This makes you wrong, because MLK did indeed engage in criminal acts: he broke the law by protesting against segregation. What you should be questioning instead is whether MLK is a central example of “criminal”, because our archetypal criminal is someone who is driven by greed, preys on the innocent and disrupts the fabric of society, and those are the real reasons why we disapprove of criminals, not the mere fact that someone is a criminal. MLK did none of those things. So yes, he is a criminal, but he is a non-central example of one, so we shouldn’t be opposed to statues of him on that basis);
11. Privilege isn’t a one dimensional thing. People can be privileged/lack privilege in different aspects without it meaning that they aren’t privileged in other ways/doesn’t take away from other discriminated groups to acknowledge that this particular group is also disadvantaged;
12. The main reason why we can’t accept that our children will not grow old is bc it goes against the idea that parents are supposed to be able to guide their children in the ways of the world. Like the author said: I’m scared for my children.
Profile Image for Meagan Houle.
566 reviews15 followers
November 9, 2020
"The AI Does not Hate You" is not merely a book explaining how AI works and what it might look like in the future, though it does both to some extent. It is more a powerful, human-centred narrative of the Rationalists who are trying to get the world thinking about whether an artificial superintelligence would be heaven on earth or a total catastrophe. The book is a little dense, despite the author's remarkable commitment to using plain language wherever he can, but it's worth getting through it, even if you don't understand every word. I got a little lost a dozen times, but I kept reading because I was fascinated not just by the arguments presented, but by the Rationalists themselves--a community of earnest, geeky people who may or may not be in a cult, who may or may not be right, and who are unquestionably worthy of your time and attention.
Profile Image for Matt.
231 reviews34 followers
July 9, 2019
Reasonably thorough history and elaboration of the "Rationalist" community and their preoccupation with the (possibly) looming AI apocalypse.

Chivers approaches this community as a generous and sincere outsider. Which is commendable, though I felt that in places it left him too eager to go along with some of the oddities, failing to provide a critical balance where it would have been helpful.

I'm not talking about the more outlandish beliefs, either -- one thing I think the Rationalists get right is their concern over the black swan of AI. It's some of the more mundane assumptions that I think create the most furor.

(For example, they're all utilitarians, and utilitarianism is a silly ethical system that appeals mainly to people more comfortable with numbers and other abstractions than with the actual human beings they supposedly value.)

Then again, you don't have to look too far to find a critical engagement with the Less Wrong, Effective Altruist, CFAR (etc.) crowd, so maybe that's for the best.

Despite strongly disagreeing with almost everything the Rationalists believe, I found this to be an interesting and patient overview of their key ideas.
Profile Image for Kenny.
148 reviews1 follower
November 28, 2019
I enjoyed this book when it focused on AI and AI security and risk and not so much when it focused on the social inadequacies of a specific sub-set of people who are interested in AI and whether they constitute a racist sex-cult.
Profile Image for Raph Zindi.
27 reviews
July 30, 2019
A book of our time, and one to reference as AI becomes more noticeable in our existence. A well-structured and thought-provoking book.
Profile Image for Henne.
159 reviews75 followers
January 25, 2020
1 in 6 is the probability Toby Ord assigns to the likelihood of humanity going extinct this century.

This is a book about a lot of things, and it's a significant achievement to create a narrative that spans it all. At times it feels like what I imagine taking a lot of ketamine and trying to sit through a Berkeley philosophy seminar would feel like. At times it is incredibly moving.

It has the feel of a first book, which has positive and negative aspects to it, and there are also quite a few errors left in the text. However, I believe that Chivers really wanted to portray the rationalists honestly and really gave it his best attempt. This is a book by a wannabe nerd about weird nerds for an audience of less-weird nerds. I don't expect it to have much popular appeal. Which is a shame because, like Chivers, I feel like this particular topic – AI and existential risk – is maybe one of the most important challenges for humanity's next century.

There are plenty of books on AI: Human Compatible and Superintelligence being two obvious ones that come to mind, and both are referenced in this book. There are basically none written specifically on the rationalists as a community. Ultimately I think I would have preferred if it was 'just' a portrait of the rationalist community, or 'just' a book on AI, but weaving the two together has surprisingly felt unnatural. If you want to understand the rationalists, I don't see why you would choose this book over Rationality A-Z. If you want to understand AI risk, I don't see why you would choose this book over Superintelligence.

“'This is my concern. If everyone's too similar, you're vulnerable to a bad meme, just the same as biologically if you have all these plants that are the same, one virus kills them all.' I asked him if he thought that the AI stuff was a 'bad meme' that has got into the Rationalist ecosystem and now can't be eradicated because everyone is too similar, and he said that he wasn't sure. But it is worth worrying about, he said. 'If everyone's personalities line up, like holes in Swiss cheese, then everyone could adopt a bad meme and not realise.'”

“Scott added that the community became 'an awkward combination of Google engineers with physics PhDs and three start-ups on one hand, and confused 140-IQ autistic 14-year-olds who didn't fit in at school and decided that this was Their Tribe Now on the other', and that it was hard to find the 'lowest common denominator' that appealed to both groups.”

“It was at this point that the conversation, if that's the right word, took a slightly odd turn. It was still my sceptical side's turn to speak, and it had this to say: 'I can picture a world in 50 or 100 years time that my children live in, which has different coastlines and higher risk of storms and, if I'm brutally honest about it, famines in parts of the world that I don't go to. I could imagine my Western children in their Western world living lives that are not vastly different to mine, in which most of the suffering of the world is hidden away, and the lives of well-off Westerners largely continue and my kids have jobs. My daughter is a doctor and my son is a journalist, whatever. Whereas if the AI stuff really does happen, that's not the future they have. They have a future of either being destroyed to make way for paperclip-manufacturing, or being uploaded into some transhuman life, or kept as pets. Things that are just not recognisable to me. I can understand from Bostrom's arguments that an intelligence explosion would completely transform the world; it's pointless speculating what a superintelligence would do with the world, in the same way that it would be stupid for a gorilla to wonder how humanity would change the world.'

And I realised on some level that this was what the instinctive 'yuck' was when I thought about the arguments for AI risk. 'I feel that parents should be able to advise their children', I said. 'Anything involving AGI happening in their lifetimes – I can't advise my children on that future. I can't tell them how best to live their lives because I don't know what their lives will look like, or even if they'll be recognisable as human lives.'

I then paused, as instructed by Anna, and eventually boiled it down. 'I'm scared for my children.' And at this point I apologised, because I found that I was crying. 'I cry at about half the workshops I do', said Anna, kindly. 'Often during the course of these funny exercises.'”
Profile Image for Kerkko Pelttari.
4 reviews
July 9, 2019
Superintelligence, rationality and the race to save the world

It is a very slim and general overview of the current state of General Artificial Intelligence development and expert evaluations on when it will happen (seemingly between 2030-2100 with over 95% probability).
However, the issue of "why would a superintelligent artificial intelligence want to do seemingly stupid things" was answered very well and comprehensively. Chapter 11 is a good example of this (titled "If I stop caring about chess, that won't help me win any chess games, now will it?").

It is also a perfunctory but good overview of the Rationalists and Effective Altruists. The book presents multiple viewpoints supporting the claim that modern Rationalism (the LessWrong kind) makes people better at changing the world / predicting things. It also draws a sensible connection between the practice of rationality and "superforecasting".

And it is a compact and accurate overview of modern applied utilitarianism and Effective Altruism. The references (to Bostrom and Singer specifically) also lay the groundwork for readers wanting to dive deeper into the issues.
Personally I'd have emphasized the idea that "opportunity cost is not a real cost" as the number one problem in modern-day politics. But this is my pet peeve, so I'm a bit biased here.

The presentation of long-term vs. short/middle-term "effectively altruistic" charities is also done well. Although I personally find future lives almost if not just as important as current lives, the fact that speculative research is speculative is still very true.

Bayes' Theorem gets presented with a good non-mathy example (part three, specifically chapter 19). The book uses everyday-life examples well, like the doctor / cancer test / inaccuracy example (pages 122-123).
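The base-rate point behind that doctor/cancer-test example can be sketched numerically. The figures below are illustrative assumptions, not the book's own numbers from pages 122-123:

```python
# Worked Bayes example: how sure does a positive test make a diagnosis?
# Assumed illustrative numbers: 1% of patients have the disease, the test
# catches 90% of true cases, and wrongly flags 9% of healthy patients.
prior = 0.01           # P(disease)
sensitivity = 0.90     # P(positive | disease)
false_positive = 0.09  # P(positive | no disease)

# Total probability of a positive result, via the law of total probability
p_positive = prior * sensitivity + (1 - prior) * false_positive

# Bayes' theorem: P(disease | positive)
posterior = prior * sensitivity / p_positive

print(f"P(disease | positive test) = {posterior:.1%}")  # roughly 9%, not 90%
```

The counterintuitive part, which the book's chapter trades on, is that even a fairly accurate test leaves the diagnosis unlikely when the condition itself is rare.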

The viewpoint of not being a Rationalist but being a rationalist sympathizer gives the author firm ground to stand on when he defends the "weird parts" of the rationalist community, and I think he does a good job here. Was this necessary for the book? I don't know, but if it breaks into the mainstream I think it serves an important purpose.


Compared to all other non-fiction, regardless of topic, I couldn't call this book a 5-star work. However, it takes a notoriously hard topic and tackles it coherently. It's the best introduction to the Rationalist and Effective Altruist communities I have yet seen, and also a good introduction to modern applied utilitarianism and the concept of existential risk.
36 reviews
February 28, 2021
First, the renaming of this book, from "The AI Does Not Hate You" to "The Rationalist's Guide to the Galaxy", is welcome. It puts the emphasis on what the book is about: the Rationalists' view of existential AI risk.

After all the pretty bad-faith NYT saga on Scott Alexander, it felt refreshing to read this. The author, not hiding his profound interest in the community, offers a very kind depiction of the Rationalist community. He also sides with the community on very public conflicts, such as its support of the Damore memo, claiming misunderstanding. I most often found myself agreeing with the book's accounts, and that was nice.

Then, I reflected. I liked this book because it showed the interesting tidbits of the rationalists while pointing out that the bad criticisms of the Rationalists were bad. But some criticisms of the Rationalists are very much valid, even about existential AI risk. What does it actually mean to align AI? What is the output of MIRI? How are those efforts assessed by members of the scientific community? Those questions were left unanswered, or answered only by exposing the worst arguments against the Rationalists.

The book also overemphasizes Yudkowsky, talks too much about existential AI risk, and probably mixes Effective Altruism and the Rationalists a bit too much.

Overall, it was a pleasant read. As a Rationalist sympathizer, I would happily give it to people unaware of the community. But this does not feel quite right. Am I enjoying this book just because it is nice to people I think we should be nicer to? I believe the author wrote the book with this same motivation.
Profile Image for Ajay Ramaseshan.
27 reviews1 follower
June 4, 2024
A generalist, multidisciplinary overview of the development of artificial intelligence. This is not a technical book, so the reader won't find explanations of the maths behind the algorithms that power today's machine learning and data analysis systems. The author looks at the thinkers behind the field: how they came to define what exactly human intelligence is and how to distinguish it from a robot's; how, despite the best programming, unexpected results can happen, since a robot has no embedded intelligence to think through boundary cases or apply common sense while solving a task; and how online communities and a series of blog posts grew up around this field in the 90s and early 2000s. For a purely technical person like me, a few chapters were a little dreary, since the author delves into subjects like philosophy and cognitive science, which I don't have much working knowledge of. However, the later parts of the book are an interesting read, where he tries to get a behind-the-scenes view of the fundraisers behind new-age startups like OpenAI, the effective altruism movement, etc. With the exponential growth of AI, a physical copy of this book would be a great addition to the shelves, to revisit in the future.
212 reviews2 followers
October 12, 2020
The potential of Artificial Intelligence (AI) is huge, but so is its potential for catastrophe for humankind. This book looks at that potential and at the people engaged in assessing it and finding ways to mitigate the disaster.

It is not a book about AI but about its impact on humans. It moves through the people at the forefront of challenging the outcome, like Eliezer Yudkowsky and Nick Bostrom, and others, such as Caroline Fiennes, who consider philanthropy as a whole. It ranges over many topics, and perhaps that is why I found it instructive and well-written but also missing a focus (or at least the focus that I expected).

AI is an extraordinary subject. Algorithms are changing aspects of our lives already. The AI that Tom Chivers explores is where the singularity occurs - that massive change where the AI can take over completely and, potentially, view humans much as we view gorillas: mere subjects of our interest but little else.

Averting that massive change is a massive challenge, and many organisations are involved in finding solutions to that risk. This book considers the questions and the people. It is a valuable addition to the AI library, and specifically to the question of how to mitigate the worst outcomes.
9 reviews
November 14, 2022
I stumbled upon LessWrong a little while ago, and I was intrigued but confused. The topics that the people on this website discuss are often quite obscure, and there's a lot of jargon being thrown around. It was hard to know what it was really about.

But this book really painted a clear picture of LessWrong and the greater Rationalist Community.

The chapters on Bayes' Theorem and Utilitarianism were a bit hard to follow while listening to the audiobook (there are a lot of different numbers listed in those chapters), but 3Blue1Brown's video on YouTube about Bayes' Theorem cleared up a lot of the confusion for me.

I do feel like the section on different cognitive biases went by pretty quickly compared to the other chapters. But since this book is mainly focused on AI risk, I guess that's acceptable. I would have liked it to be longer, though.

Nowadays the Rationalist community has split off more into its own little blogs and corners of the internet, and I think this is a nice introduction to some of the major players in the movement and their theories, agreements and disagreements.

The thought experiments were nicely selected from various sources. And they really helped to get the points across.

Also the author had quite a nice voice to listen to.
Profile Image for Conor McCammon.
88 reviews3 followers
April 11, 2022
I devoured this book in approximately a day. The whole time, all I could think was "finally, a journalist who cares about truth more than 'story'."

This book is an attempt to profile the rationalist community, and its arguments for AI posing an existential risk. This is a community that is often misunderstood and made fun of by pundits and editorial journalists. But Chivers does not go this route. With great interest and earnestness, he learns what the rationalist community is all about, and portrays them honestly, weighing up problems and critiques fairly along the way.

So yes, I think this book is a very good example of virtuous journalism. But it's also really interesting. The people in the book are idiosyncratic of course, but the ideas have a kind of inevitable gravity to them which is a thrill (and a terror) to read.

I think that AI poses the greatest existential risk to humanity this century, followed by an engineered pandemic (eg. bioterrorism) at a substantially less likely margin. And this book is perhaps the best way I've seen to get regular people who are understandably sceptical up-to-speed on the argument for caring about this.

9/10
Profile Image for Rachel.
164 reviews38 followers
January 2, 2022
This book describes rationalist culture + concerns around A(G)I risk. It's balanced and fair and would be a great primer for anyone who isn't already familiar with that community or the problem space.

I was already familiar with most of the big ideas he describes, but some frames with which he explained things stuck with me:
- "there may not be an alarm" -- Wilbur Wright told Orville that powered flight was 50 years away.... two years before they built the first working aeroplane!
- Tying Bayes Theorem to the problem of induction -- contra Taleb, absence of evidence is evidence of absence, albeit weak evidence
- Violation of sacredness in rationalist communities

My main complaints about the book are that [1] there won't be much new if you're already adjacent to or within these communities (which is fine, and was to be expected); [2] the author is a little... undemanding. He summarizes Rationalist views + common detractor POVs ("they're a cult") but doesn't go much beyond that in terms of generating novel critiques of either.
43 reviews11 followers
October 31, 2022
If what you're looking for is the LessWrong Sequences at 5% of the length, written by someone who isn't remotely as pompous as the original author, you've come to the right place. It's certainly the best book I know of about the rationalists, but nonetheless I wish Gideon Lewis-Kraus had written it instead.

This speedrun of the Sequences is pretty uncritical. There were a few points when I'd come away from Chivers's explanation of Eliezer's explanation of some "irrational" behavior and think "actually it seems a lot more rational than they're giving it credit for?" There were lots of parts where Chivers could have pointed out "cruxes" where one might disagree with the argument and didn't. In other parts, he gave a treatment of some objection, threw up his hands, and said "well, there are good arguments on both sides and we should probably still throw some resources at it in case the rationalists are right," which feels quaint in 2022.

The very beginning and some later chapters had bits of Rationalist history that I didn't know, which I appreciated.
67 reviews2 followers
December 11, 2019
This was an OK book. I was already familiar with the rationality crowd, LessWrong, Roko's Basilisk, etc.; the book is probably more interesting to people who aren't. But overall it's a good exposition of the subject, which is fascinating, even if you don't like some aspects of the whole rationality movement.

Interestingly, the author's explanation of Roko's Basilisk was the clearest one I've read so far, which shows that having a 400 IQ does not mean you can explain things well. (I've never got through the Sequences due to how bad Yudkowsky is at explaining things, or writing in general.)

One bad aspect of the book is the author's not very successful attempt at normalizing some bad aspects of the rationality movement, such as sexism. (One particularly bad example is when he writes that the infamous Damore memo isn't so bad, insinuating that those who were appalled by it were responsible for making Damore turn to the alt-right crowd.)
Profile Image for Hamish.
441 reviews38 followers
December 26, 2019
A previous review mentioned that Gwern makes a cameo. TBH, this was what I was most excited about, because he seems like a fascinating character whom I'd like to learn more about. Unfortunately, he was only mentioned once, so I didn't learn anything about him. I already knew most of the content of this book, but here are the few things that were new to me:
* A disproportionate number of Tetlock's superforecasters came from LessWrong
* Story about a rationalist woman who is considering having a baby, so starts carrying a robot baby around to get a sense of what the disruptions to her life would be like so she can make a more informed decision.
* Strategy for efficiently splitting bills: randomly generate a number from 0 to the total, then run a cumulative sum down the bill, and whoever's meal the random number lands on has to pay for everything. Thus only one person pays, and on average you pay for what you ordered.
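That bill-splitting scheme can be sketched in a few lines of Python (the names and prices below are illustrative, not from the book):

```python
import random

def pick_payer(bill):
    """bill: list of (name, price). Returns who pays the whole bill.

    Each diner is picked with probability proportional to what they
    ordered, so in expectation everyone pays exactly for their own meal.
    """
    total = sum(price for _, price in bill)
    r = random.uniform(0, total)   # random point along the bill
    cumulative = 0.0
    for name, price in bill:
        cumulative += price
        if r < cumulative:         # r landed inside this item's slice
            return name
    return bill[-1][0]             # guard against floating-point edge cases

bill = [("Alice", 12.50), ("Bob", 30.00), ("Carol", 7.50)]
print(pick_payer(bill), "pays the full bill of", sum(p for _, p in bill))
```

Bob, who ordered 30.00 of the 50.00 total, gets picked 60% of the time, so his expected payment is 0.6 × 50.00 = 30.00 — exactly what he ordered.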
Profile Image for Jacob Aron.
117 reviews6 followers
April 30, 2021
Tom Chivers has written a fantastic book about some mostly strange and disagreeable people. I've long struggled to understand Rationalists, despite having a lot in common with them.

If you've never heard of the Rationalists, this is a great introduction to their way of thinking. They use statistics, reasoning and an awareness of human biases to make sense of the world, and I agree this is a very good approach to life.

What I cannot then fathom is why they use this powerful toolset to worry about AI destroying humanity, which is something I consider to be incredibly low down on my list of priorities.

Tom's book did not manage to change my mind (another Rationalist tenet I agree with) but it did slightly nudge the needle (or cause me to update my priors, as the Rationalists would have it). World-destroying AI, if you're reading this in the far future, please don't torture a digital version of Tom's mind for eternity.
Profile Image for Monica.
307 reviews16 followers
October 8, 2024
Gave up reading this halfway. It is a bit too philosophical for me, so I found it a bit long-winded - so that's my biased response. I think it would be very interesting to others who are keen on a lot of open, philosophical pondering about artificial intelligence.

But some points I did find useful to ponder include:

-What does artificial intelligence mean? To be rational and predictable? Or to be human (with all our unpredictability and irrationality)?
-The difference between narrow AI which is the current state of affairs involving applications in specific narrow tasks and AGI (artificial general intelligence) which some feel is a long way to go.
-an introduction to the Rationalists in the AI community, who are a little more worried about the dangers and unintended consequences posed by AGI

I think I may come back to this book later when I get a better grasp of some of the basics and issues.
Profile Image for Tessa.
292 reviews
October 21, 2019
Read this for much the same reason I like to read foreign etiquette guides to my own country. It's mostly Tom Chivers' fond sifting through of the rationalists and their ideas, rather than being squarely a work of community anthropology or an analysis of arguments for AI safety.

Two stars not because I didn't enjoy the read, but because it felt like various critiques of both AI risk and the community were given too little intellectual screentime. Other reviewers have pointed to better critiques of AI risk. I think the rationalist community tends to be derisive towards the social sciences (and to established research in general) and wouldn't have minded a bit more investigation of how much Yudkowsky/Bostrom have influenced the current shape of AI safety, versus people thinking about cybersecurity/dual use/fairness in AI/etc.
Displaying 1 - 30 of 66 reviews
