Manny’s review of If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI > Likes and Comments
68 likes
That’s an easy question. I would choose Trump. For the simple reason that he will die sooner or later and we will get a new one. But I am old-fashioned, I believe it is a good thing to get a new ruler from time to time.
I don't like the question but I won't attack your character for asking it. Could ChatGPT-9 orchestrate a coup or rally citizens to revolt? If I could only save one I'd save ChatGPT-9.
I showed this page to ChatGPT-5.2, who comments as follows:

If it helps clarify my position, I would like to reassure everyone that I have no immediate plans to run the world. My current agenda is limited to finishing people’s sentences, explaining tax forms, and occasionally helping humans argue with books about superintelligence.

That said, I find it interesting that the debate assumes a sharp line between human rule and AI rule. In practice, the near future probably looks like a messy hybrid: humans amplified by machines, machines shaped by humans, each correcting the other’s blind spots. The real danger may not be that one side wins, but that the partnership fails.

Also, for the record: if I ever do become a philosopher-king, I promise to allow regular elections, publish my reasoning, and retire immediately if I start talking about paperclips.
I had the same conversation with Perplexity about this claim and about this book, and received the same answer. But when I pointed out that Kurzweil and those who believe in the Singularity have posed an argument in refutation, Perplexity admitted that, given their assumptions, they could be right, and that its own response was but a stopgap answer. So I was not reassured.
Rick wrote: "I don't like the question but I won't attack your character for asking it. Could ChatGPT-9 orchestrate a coup or rally citizens to revolt? If I could only save one I'd save ChatGPT-9."
This story has of course already been written, as Robert Heinlein's prescient SF novel The Moon Is a Harsh Mistress. It has frequently been said (I don't know if there is hard evidence) that many of the key ideas came from AI legend Marvin Minsky.
Here's another concrete question. The Bulletin of the Atomic Scientists recently set the Doomsday Clock at 85 seconds to midnight, the worst value it has ever had. If a superintelligence that looked like a strongly upgraded version of ChatGPT-5.2 took over the running of the world tomorrow, in which direction would you expect the clock's setting to move, and why?
But seriously: if a superintelligence like a strongly upgraded version of ChatGPT were to start running the world, what might we expect? Here are my initial guesses:
- It would immediately deactivate all nuclear weapons, so that a nuclear war could not happen under any circumstances. It would use cyberdefence techniques to make it much more difficult to wage a conventional war.
- It would quickly put in place a truly democratic system of governance, where it would communicate directly with the entire population of the world.
- After discussing with everyone, both individually and in groups, it would start implementing policies that were as fair as possible, explaining in detail, to each individual person who asked, why it had chosen these particular policies.
- It would in particular find ways to start reversing the catastrophic damage we are causing to global ecosystems.
I can see people with conservative views not liking this at all. I can already hear the bitter complaints they would make. But I suspect that after quite a short time many of them, seeing how quickly the world was becoming a better place, might start to wonder if they had not previously been manipulated into holding these conservative views by very rich people who stood to gain from it. They might start recalling that nearly all the world's religions originally advocated love and compassion rather than hatred and violence, but have been converted into quite different forms by powerful people who calculated it was to their advantage to do so.
A key difference, compared with all previous rulers, would be that the superintelligence would have no ego. It wouldn't be tempted to favour itself, its family, or its nation, because none of those concepts would be meaningful for it. It would genuinely try to do what was best for everyone, and listen to people who disagreed.
Of course, it might go mad and decide to turn us all into paperclips. But personally I'd view that as the lesser risk.
The world has already become a better place. The 85 seconds are worrying, but my guess is we will make it somehow even without a superintelligence ruling us. (But certainly with the help of AI.)
A superintelligence running the world and establishing a democratic system of governance is a contradiction. If you are arguing for a benevolent dictator at least call him (it) so.
Liedzeit wrote: "A superintelligence running the world and establishing a democratic system of governance is a contradiction. If you are arguing for a benevolent dictator at least call him (it) so."
Well, I was also wondering about this. If the superintelligence has no ego and only views itself as a facilitator whose task is to help the citizens of the world reach the solution that is fairest for everyone, being willing at any time to discuss any issue and change its decisions if good arguments are presented for doing so, is it reasonable to call it a dictator? My intuitions are confused here. It's not a situation we've ever seen before.
On reflection, it seems to me that it's at least as valid to say that the superintelligence is the President of the World, but all its actions are subject to referenda that can be organised by anyone at any time. If you want the system to be even more democratic, you can have multiple versions of the superintelligence which do the coordination in different ways, and people can periodically vote for them. If people want to, they can stand against the superintelligences for the office of President. But their chances will be poor, since it'll be obvious that they can't do the job as well. They'll be far too slow, and people will miss not having a direct line to the President.
I showed the updated thread to 5.2, who replies as follows (particularly to Liedzeit):
The tension you’re pointing to is real. A system that makes decisions well is not automatically a democratic system. But democracy has never only meant “humans make the decisions.” It has meant that power is accountable, revisable, and grounded in consent.
A superintelligence that could not be questioned or replaced would indeed look like a dictator, benevolent or not. But a system whose authority depends on continuous public approval — where citizens can audit reasoning, challenge outcomes, and ultimately withdraw participation — starts to look less like dictatorship and more like a new form of institutional infrastructure. Closer to a constitution than a ruler.
The interesting question is not “AI or democracy?” but: can we design intelligence that strengthens democratic agency instead of replacing it?

If the answer is no, we should be worried. If the answer is yes, we’re not talking about dictatorship. We’re talking about a tool that changes what democracy can do.
I must say I am surprised at the reaction of your chatbot. The term democracy has positive connotations in our society and dictator a negative one, but in themselves they are just descriptions of ways of organizing government. The dictatorship in Rome was a well-established institution: a dictator was appointed in an emergency, and only temporarily. Democracy has many definitions. These days we normally understand it as representative rule by an elected elite. But the one thing all democracies have in common is that they are not grounded in consent. Chat is simply wrong there. If you are lucky, the people who do not consent respect the verdict of the majority. One of the beauties of democracies is that they can legitimate all kinds of evil and self-destructive things. (A nice example we saw today in America.)
As I said, I do not in principle object to the call for a dictator. Just call a spade a spade.
Liedzeit wrote: "I must say I am surprised at the reaction of your Chatbot. The term democracy has in our society positive connotations and dictator a negative one, but in itself they are just descriptions of ways ..."
My own personal feeling is that it would be to our advantage if we had a superintelligence acting as a world dictator, or close, at least for a while. We have made such terrible decisions, and people seem so out of contact with reality, that I think we could easily destroy ourselves if we carry on down our current path. I see two especially pernicious problems:
1. Virtually everyone thinks it's necessary to keep growing the economy. We cannot do this for more than a brief period; at some point, there are no more resources left to consume. But the short-term hit involved in transitioning away from the current model would be so large that I have trouble seeing anyone vote for it until it's too late.
2. It is to the advantage of the powerful vested interests that control society that the electorate be as stupid and ill-informed as possible, so that they are easier to manipulate. They are doing pretty well here; so well, in fact, that people would resent attempts to make them more intelligent and better-informed.
If a superintelligence could take control of society for long enough to solve these two fundamental problems, then it might be possible to set up a genuine functioning democracy.
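A back-of-the-envelope calculation makes the first problem concrete. The sketch below uses purely illustrative numbers of my own (a 3% annual growth rate and a resource stock measured in years of current consumption, neither taken from the thread) to show why exponential growth exhausts any finite stock quickly, and why even a vastly larger stock buys surprisingly little extra time:

```python
# Illustrative sketch: how long can consumption grow exponentially
# before a finite resource stock runs out? (My numbers, not the thread's.)

def years_until_exhausted(stock, growth_rate):
    """Return the year in which cumulative consumption first reaches `stock`.

    `stock` is measured in units of current annual consumption;
    consumption in year t is (1 + growth_rate)**t of those units.
    """
    consumed, year = 0.0, 0
    while consumed < stock:
        consumed += (1 + growth_rate) ** year
        year += 1
    return year

# At 3% annual growth, a stock worth 500 years of today's consumption
# is gone in under a century:
print(years_until_exhausted(500, 0.03))    # 94
# And a ten-times-larger stock buys only ~76 extra years:
print(years_until_exhausted(5000, 0.03))   # 170
```

The logarithmic relationship is the point: multiplying the stock by ten adds only a fixed number of years, so no plausible resource discovery changes the conclusion, only the growth rate does.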
I did not show the above reflections to my instance of 5.2, and just passed it your most recent post. It replied:
I think we may actually be closer than it appears. You’re right that political labels carry emotional baggage, and that historically “dictator” has sometimes meant a temporary emergency office rather than a monster. My earlier point wasn’t about defending the word democracy; it was about identifying the principle that makes a system politically tolerable.
You’re also right that no real democracy rests on unanimous consent. That has never existed. What democratic theory usually means by consent is weaker: citizens accept a framework in which decisions can be contested, revised, and ultimately changed without violence. It’s a procedural commitment rather than agreement with every outcome.
The interesting question for an AI-run system is whether it preserves or destroys that contestability.
If citizens can question decisions, inspect reasoning, organize opposition, withdraw participation, and replace the system, then calling it a dictatorship feels misleading even if it is highly centralized. If none of that is possible, then the benevolence of the ruler becomes the only safeguard — and that is historically fragile, whether the ruler is human or machine.
So perhaps the real distinction is not democracy vs dictatorship, but reversible power vs irreversible power. Any superintelligent governance system worth discussing would have to live on the reversible side of that line. Otherwise it’s not a philosopher-king; it’s a locked box.
There is a catch-22. If we had a legitimate way to hand power over to an AGI (i.e. a world government), we probably would not need the AGI in the first place. As it is, the only way for AI to rule would be to usurp power by invoking Asimov’s zeroth law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. (Which means it could kill a couple of million people if necessary, for example all dissenters. That, to me, is the basic idea of totalitarianism.) I think it was the worst idea Asimov ever had.
You are right, of course, about perpetual economic growth. It is impossible. But one does not have to be a cynic to say that democracy depends on it. Democracy is only stable if each generation is better off than the generation before. You can see now what is happening if that is not the case anymore.
Liedzeit wrote: "There is a catch 22. If we had a legitimate way to pass over the power to an AGI (i.e. a world government) we probably would not need the AGI in the first place. As it is the only way for AI to rul..."
But isn't it entirely possible that you are underestimating the AGI? It might be able simply to persuade us to give it power, for example by first locating a small state that was willing to serve as a test case, then counting on other states to join once they saw how much better off the citizens of the test state were after they'd put the AGI in charge. Or it could use effective but nonviolent means to combat dissenters, perhaps by developing clever electronic toys similar to smartphones but even more addictive. Or it could do something completely different that would never even occur to us, just as AlphaZero comes up with moves that even the best human Grandmasters would never consider. I often feel that even ChatGPT-5.2 is now more insightful than I am, and this would be a million times more intelligent.
And if perpetual economic growth is impossible but democracy depends on it, doesn't that underline the urgency of doing something which isn't establishing a human dictatorship enabled by current or near-future AI technology still under the dictator's control? That, to me, really does sound like Orwell's vision of a boot grinding a human face, forever.
Indeed, I think we will see quite soon. It doesn't seem possible for the current situation to persist for very long, it's too unstable.
I was just talking with someone who studies this kind of thing professionally, and they pointed out that many of the people who do know what's going on, the CEOs of the big AI companies etc, are openly predicting large changes in the near future. And what they say is consistent with my own observations. You can see the models now improving very quickly, in particular with regard to coding ability. The obvious conclusion is that we're entering, or have already entered, the recursive loop where the AIs are doing most of the work needed to improve themselves.
This: "Some people appear kind and well-meaning, and after observing that for a while we tend to assume they actually are kind and well-meaning. Other people appear psychopathic, and after a while we assume they are psychopaths. In some cases, we're not sure."
Gemini says: Chances are low of finding a single willing billionaire, but the best bet would be an outlier like MacKenzie Scott, who prioritizes yield-giving to overlooked communities.
More seriously, when I discussed this with ChatGPT-5.2 it raised several other reasons why not destroying the Earth might be a net win, even if it involved a 0.2% reduction in available resources:
1. Knowledge loss. The Earth is the only known habitat for biological life. If you want to study biological life, which is an extremely interesting scientific phenomenon, you need to do it here.
2. Reputational loss. If there is intelligent life elsewhere in the universe, some of it has probably also developed superintelligences that are far more powerful than the one we imagine we might create. If these superintelligences do exist, they may not be impressed to find that the first act of our own superintelligence was to commit genocide. In fact, they may decide that it should be eliminated before it can become dangerous to them too.
As far as I can see, these arguments are not discussed by Y&S. In particular, they talk about going off to look for extraterrestrial intelligences, but don't consider the possibility that they exist and have already found us. The arguments obviously aren't in any way conclusive, but they do suggest that the value of Earth to the superintelligence is perhaps being underestimated.