288 pages, Hardcover
First published June 13, 2019
Buck Shlegeris, a young MIRI employee with excitingly coloured hair and an Australian accent, told me that 'A book on this topic could be good', and that 'if I could jump into your body I have high confidence I could write it'. However, his confidence that I could write it from within my own body seemed significantly lower, which is probably fair enough.
"Here is what this ends up looking like: a quest to solve, once and for all, some of the most basic problems of existing and acting among others who are doing the same... problems of this sort have been wrestled with for a long time using terms like “coordination problems” and “Goodhart’s Law”; they constitute much of the subject matter of political philosophy, economics, and game theory, among other fields. It sounds misleadingly provincial to call such a quest “AI Alignment” ...
There is no doubt something beautiful – and much raw intellectual appeal – in the quest for Alignment. It includes, of necessity, some of the most mind-bending facets of both mathematics and philosophy, and what is more, it has an emotional poignancy and human resonance rarely so close to the surface in those rarefied subjects. I certainly have no quarrel with the choice to devote some resources, the life’s work of some people, to this grand Problem of Problems. One imagines an Alignment monastery, carrying on the work for centuries. I am not sure I would expect them to ever succeed, much less to succeed in some specified timeframe, but in some way it would make me glad, even proud, to know they were there."
I can picture a world in 50 or 100 years that my children live in, which has different coastlines and a higher risk of storms and, if I'm brutally honest about it, famines in parts of the world I don't go to. I could imagine my Western children in their Western world living lives not vastly different to mine, in which most of the suffering of the world is hidden away, and the lives of well-off Westerners continue and my kids have jobs... Whereas if the AI stuff really does happen, that's not the future they have... I can understand Bostrom's arguments that an intelligence explosion would completely transform the world; it's pointless speculating what a superintelligence would do, in the same way it would be stupid for a gorilla to wonder how humanity would change the world.
And I realised that this was what the instinctive 'yuck' was when I thought about the arguments for AI risk. 'I feel that parents should be able to advise their children,' I said. 'Anything involving AGI happening in their lifetime - I can't advise them on that future. I can't tell them how best to live their lives because I don't know what their lives will look like, or even if they'll be recognisable as human lives... I'm scared for my children.' And at this point I apologised, because I found that I was crying.
I met a senior Rationalist briefly in California, and he was extremely wary of me; he refused to go on the record. He has a reputation for being one of the nicest guys you'll ever meet, but I found him a bit stand-offish, at least at first. And I think that was because he knew I was writing this book. He said he was worried that if too many people hear about AI risk, then it'll end up like IQ, the subject of endless angry political arguments that have little to do with the science, and that a gaggle of nerdy Californian white guys probably weren't the best advocates for it then.
At Xanadu they had to do everything differently: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager; everything had to be different because they were creative types and full of themselves. And that's the kind of people who started the Rationalists.
Less dramatically, we all know people who are afraid of visiting their city centres because of terrorist attacks, but don't think twice about driving to work.