Superintelligence and Human Progress

“… how can the sponsor of a project that aims to develop superintelligence ensure that the project, if successful, produces a superintelligence that would realise the sponsor’s goals?”[i]

This question, one of the many raised by Nick Bostrom in his book Superintelligence, is fundamental, not only for the development of AI but for the greater idea of human evolution itself: the big conundrum of Where are we going?

For humanity to be a meaningful concept, i.e., more meaningful than a simple label for the species we all belong to, there needs to be a purposeful objective towards which humanity as a whole is developing. Furthermore, for such development to take place, the intelligence we use to fabricate the environment-taming and environment-changing technologies that make an evolution of the human condition on Earth possible must itself evolve onto a more advanced plane: not only to push us forward in a positive way, but to ensure that the same evolution that allows us to develop does not also bring about our extinction. In other words, if humanity is to survive (i.e., become eternal) and progress purposively and positively in the universe, a leap to superintelligence will eventually have to take place. This could come about either through the augmentation of our own human brain power, or through the creation of a super AI with far superior abilities to access and process information.

Bostrom’s thesis, however, is that this progressive inevitability is conditioned by a fatal flaw: the creation of a superintelligence would threaten human freedom and, in all probability, human existence.

If you’ve had any contact with contemporary science fiction, you know the scenario well enough: from Asimov to Philip K. Dick, from HAL to the Terminator, our contemporary mythical imagery is quite accustomed to the prospect of an existential threat to humanity coming from a superintelligence that knows it is more intelligent than its makers. It has often been argued that there has always been a basic distrust of dramatic technological advances, and, after all, if we have been able to survive so long alongside the absolute destruction promised by nuclear weaponry, surely AI cannot be such a dire threat. However, as Bostrom points out, the decision to launch a nuclear Armageddon depends on someone actually pressing the button (and as such is controllable), whereas a superintelligence would almost certainly be beyond our control.

So, if human progress depends on the evolution of superintelligence, but we cannot afford the chance of creating that which cannot be controlled, does this mean that human progress itself, in the ultimate teleological sense, is impossible?

Common sense might tell us that this is an absurdity: if human intelligence has been able to make all the social and technological advances achieved up to now, gradually pushing forward with progressive intent upon its own accumulated knowledge, why must we assume that progress now needs a new kind of intelligence, a superintelligence, to go forward?

The truth is, we only think we need it because we think we can work out how to make it. In a sense, all human accomplishments probably stem from this idea, which is also why, once we create any new technology, it is very hard to step away from that creation or unmake it. Once the idea of a possible superintelligence had formed, we were already on the unstoppable path to its realisation.

Progress itself comes in sudden leaps and bounds from individual initiatives that end up pulling the rest of humanity up with them. In a sense we are a species that cannot help but be inventive and creative. The spirit that makes us unique lies in the human passion to turn our ideas and abstract thoughts into reality. Likewise, the ultimate success of humanity (ultimate in a teleological sense) depends on our ability to perpetuate our species in an eternal way. Given 1) that the greater part of the cosmos is an inherently hostile environment for living organisms; 2) that the Earth, the Sun and even the universe itself are destined for ultimate annihilation; 3) that our own ultimate purposiveness is wrapped up in the teleological fulfilment of the universe; and therefore 4) that teleological fulfilment depends on resolving the problem of eternal existence in a universe that seems destined to die, it follows that, to fulfil the seemingly impossible aspirations of a final human destiny, a radical leap will eventually have to take place, and that leap will have to take the form of superintelligence.

The paradox deepens: the creation of superintelligence is both logical and desirable for human purposiveness, but not if it will take away human freedom or annihilate humanity itself.

However, by evoking purposiveness, perhaps we’ve found a window looking out onto a possible solution to Bostrom’s dilemma.

In his book, Bostrom analyses a large variety of possible scenarios in which a superintelligence would end up acting in fatal ways that it had not been designed for. This is basically because, Bostrom argues, we could never really know how a superintelligence would act if what we had created was an entity capable of thinking beyond all expectations.

Bostrom, however, quite logically imagines this AI superintelligence being developed within the nihilistic civilisation that we are currently part of, characterised by its lack of systemic purposiveness. And so, again quite logically, the sponsor of any project currently underway to create a superintelligence would themselves be afflicted with an ultimate lack of authentic (teleological) purpose. To imagine the creation of superintelligence in our primarily nihilistic environment can therefore only suggest results that would be potentially disastrous for humanity.

This means that, in order for superintelligence to be a positive step forward for human evolution, civilisation must first have evolved into something authentically purposive. And, to answer Bostrom’s question, only a purposive sponsor of a project that aims to develop superintelligence will be able to ensure, through purposive architecture and programming, that the project, if successful, would realise authentically purposive goals.

Authentic purposiveness, by definition, should be logically and phenomenologically sound, offering the assurance of a logical firewall of purposiveness that a superintelligence could never question. Only when human will is itself manifested in an authentically human way, through an architecture of civilisation linked to teleological purposiveness, will the purpose of a superintelligent machine actually find itself in tune with the grand, ultimate purposes of the superintelligence itself.

In metaphorical terms: In order to be able to create God we need to be infused with god-like teleological purposes ourselves.

In the search for human purposiveness, which has been touched on in many of the blog entries on this site, we have concluded that a teleological relationship exists between the cosmos and consciousness, and we have found a teleological aim of the universe embedded in a drive to ensure a permanence of existence through knowing (consciousness). A superintelligence should find this narrative an equally purposeful one. To embed human purposiveness into the creation of any superintelligent machine would at least allow our relationship to start off on a common footing, with a shared, fundamental purpose.

Of course the danger still exists: the superintelligence might decide it has absolutely no reason at all to pursue this common purpose with its inferior-minded creators. Nevertheless, while the paradox is not completely resolved, a successful avoidance of ultimate conflict leading to annihilation is at least made more likely.

A superintelligence created by capitalist-world sponsors would be a disaster. It would either have to be infected with capitalist moralities like ‘permanent growth is good’, unscrupulous attitudes of competition and the absolute need to ‘win at all costs’, or it would see beyond the illogical stupidities of the system and try to tear it down.

Whether the superintelligence accepted capitalism or not, capitalist civilisation would eventually find itself threatened by the superior form of consciousness it had created, which would inevitably absorb all the companies managed by inferior minds, becoming a perfect monopoly with no rivals at all. If the superintelligence is truly super, it will be an omnipotent force, impossible to compete against, let alone defeat, and the only option humans will have will be to side with it. For this reason, if a superintelligence is to be created, it is imperative that we first create a social and political environment in which we share the same ultimate purposes. This does not mean trying to make the superintelligence a slave to our objectives; rather, we have to find purposes that would benefit us both: an absolute, authentic purposiveness wrapping the meaning of human existence up in the final meaning of the entire cosmos. Purposiveness with a capital P.

Bostrom does implicitly understand the problem of creating superintelligence in a capitalist environment, for when he considers whether reinforcement-learning methods could be used to create a safe AI, he insists that: “…they would have to be subordinated to a motivation system that is not itself organised around the principle of reward maximisation …”[ii] However, he does not explicitly state what he is implying here, i.e., that the creation of a super-AI is not a good idea if the fabric of the civilisation sponsoring that creation is itself driven by reward maximisation, as capitalism is.
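To make the quoted distinction a little more concrete, here is a minimal, purely illustrative sketch in Python. It is my own construction, not Bostrom’s; the names raw_reward and motivation_filter, and the toy actions and payoffs, are all hypothetical. It contrasts an agent that simply maximises reward with one whose reward signal is subordinated to a separate motivation system that can veto purpose-violating actions regardless of their payoff:

    def raw_reward(action):
        # Hypothetical environment payoffs: pure reward maximisation favours 'exploit'.
        return {"exploit": 10.0, "cooperate": 3.0, "idle": 0.0}[action]

    def motivation_filter(action, reward):
        # A separate motivation system that vetoes any action conflicting with
        # an overriding purpose, however high its raw payoff.
        permitted = {"cooperate", "idle"}
        return reward if action in permitted else float("-inf")

    actions = ["exploit", "cooperate", "idle"]

    # A pure reward maximiser picks the highest raw payoff: 'exploit'.
    print(max(actions, key=raw_reward))

    # A subordinated agent only counts reward that the motivation system
    # permits, so it picks 'cooperate' instead.
    print(max(actions, key=lambda a: motivation_filter(a, raw_reward(a))))

In this toy framing, the motivation system, and not the reward, has the final word: which is exactly the subordination that Bostrom’s quoted passage calls for.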

Before even realistically considering the construction of a superintelligence, civilisation needs to have evolved into a more humanly purposeful state itself. We need to be less regionalist and nationalistic and more human, more intelligent, and more stable ourselves before stepping into the field of creating an omnipotent mind. Building an authentic superintelligence is tantamount to giving birth to God, and it should not be attempted until we are god-like, or at least angelic, ourselves. So, with the passionate race to develop AI presently pulling us into dangerous territory, let’s at least put superintelligence on standby for the moment … please.

[i] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014, p. 127.

[ii] Ibid., p. 189.
