Paul David Adkin's Blog
September 25, 2025
Eichmann, Trump and a World of Thoughtless Cliches
Hannah Arendt, who personally attended the trial of Adolf Eichmann, the Nazi SS officer who was a leading figure in the Holocaust, wrote that what struck her most about Eichmann was not his evil monstrosity, but rather the manifest shallowness of the monster’s personality. “The only notable characteristic one could detect in his … behaviour … was something entirely negative: it was not stupidity but thoughtlessness.”[i] She described his manner of speech as cliché-ridden: stock phrases and an adherence to conventional, standardised codes of expression; a use of language which, she said, “has the socially recognised function of protecting us against reality.”[ii]
This phenomenon of cliché-ridden discourse is manifested in its most exaggerated form in the contemporary political sphere by the thoughtless ramblings of the US president Donald J. Trump, and this makes Arendt’s observation of Eichmann doubly poignant for us today. Through their shared characteristic of thoughtlessness, Trump, who is arguably the most powerful figure on the world political stage, is in fact, as Eichmann was, a deranged neurotic with an unbearable fear of reality, a fear that the lazy thinker tries to avoid or cover up via the linguistic protection of the cliché.
This is not to say that Eichmann and Trump had a similar way of expressing themselves. Quite the contrary: reading the transcripts of Eichmann’s defence today, the SS officer sounds far more coherent and logical in his discourse than anything coming out of the US president’s mouth. If, as Arendt implies, thoughtlessness and power are a monstrously dangerous combination, then the potential for evil lurking in the empty-headedness of the most powerful person in the world needs to be taken into serious consideration whenever we evaluate the possible directions in which the thoughtlessness of the Trump presidency could take the United States and the rest of the world, and how to deal with this potentially abhorrent aberration in our historical moment.
At his trial, Eichmann had to defend the indefensible and, despite the accusation of thoughtlessness, he knew that his situation allowed no defence other than that he had to follow orders, and that the real evil came from above, of which he himself was nothing more than a victim.
Trump, on the other hand, tries to justify the unjustifiable by positing himself as a different kind of victim. Naturally, he is the man at the top so he cannot complain of victimisation from there. Yet he does see himself as a heroic victim. For Trump there is a deep malignant power that is out to get him: a force that is constantly trying to paint his own ‘immaculate’ image in a dark, unwholesome light, and he uses cliché and hyperbole to complain about these evil, almost always leftist powers.
This hyperbole and clichéism pushes Trump himself, and everyone who comes into contact with his misleading oversimplification of reality, into the fantasy area in which his alternative, superlative reality floats. The result is a powerful paranoia with which, through his central role in world politics, Trump is able to infect society and civilisation at all levels.
Trump prides himself on inventing and then propagating his own infantile clichés, and the fact that it is so easy for the cliché (the meme) to catch on and become a catch-cry points to an infantile level of thoughtlessness in the fabric of society itself. In one sense, though, Trump is right. He does exist within a grey, inherently evil system. A thoughtless leader could not be democratically elected without an equally thoughtless society. It is only within a structurally nihilistic system that thoughtlessness as a mode of being can become imaginable, and the capitalist system that drives our lives is fundamentally nihilistic in its structure.
And beyond Trump, the success of the global, alt-right, fascist revolution would not be imaginable without the gaping space of cultural thoughtlessness to build it in. Cliché autocracy needs the intellectual void, and once it has been able to insert its megaphones into the empty gaps and holes of our civilisation, it uses amplification to nurture more and more thoughtlessness. A catchphrase for the autocracies of thoughtlessness could be: “You only need one cliché”, or “One cliché to bind them all!”
It was Hannah Arendt who proposed the idea that thinking conditions us to abstain from evil-doing[iii] and, consequently, that thoughtlessness leads to an opposite state in which acting in an ethically evil way becomes not just a greater possibility but a cultural norm of action.
To curb and vanquish the evil that is seeping into the contemporary world, destroying humanity’s last vestiges of humanism and humanitarianism and accelerating the construction of global dystopia, we need to consider how we can make humanity thoughtful again.
[i] Hannah Arendt, The Life of the Mind, Book One, Thinking, p.4
[ii] Ibid
[iii] Ibid, p.5
July 2, 2025
The noetic Universe
The idea that conscious, mind-like qualities are prevalent in the universe’s physicality is a very interesting one because it blends the physical with the metaphysical. The universe becomes God-like and thereby circumvents the problem of the existence of God while maintaining the spiritual benefits otherwise offered by religions. The advantage of the conscious universe is that it is not subject to the surplus baggage that comes with any religion’s package of theosophical faith.
The computer age, with its practical and theoretical investigations of the quantum construct of reality, has given this idea of the mind-like universe a new tangibility. It is easy, now that we have computers, to envisage the cosmos as a kind of giant mainframe and our own PCs as microcosms of the great universal macrocosm. However, while this idea is perfect for formulating complex sci-fi plot lines like The Matrix, it is not very satisfying on a deeper philosophical level, which still needs to consider the problem of how this computer/macrocosm could have come about in the first place (because philosophy is and always will be fundamentally the problem of the chicken or the egg).
Many of our own indagations (published here in these posts) have dealt with the idea that the universe is made existent in Being through sapiens’ consciousness, and this idea would seem to be made obsolete if the universe itself has its own kind of consciousness. However, what we are stumbling across here is the problem of the term consciousness itself, for to imagine a conscious universe we cannot assume this conscious cosmos to possess the same kind of consciousness that we possess as human beings. Humans are conscious through their perception of reality, but cosmic consciousness, if it exists, would have to be of a non-perceiving kind, or at least it would not perceive reality in any way like what we understand perception to mean.
So, to imagine the non-perceiving universe as a mind or as nous, we believe it needs to be conceived primarily in a pure sense of the idea of noetic, as a mind void of objects. But what does that mean?
In order for a mind to function on any level, it needs to have a noetic space to function in, even though this space does not actually belong to the dimensionality of space itself. It could well be that the noetic space exists outside space and time itself. Sensory perception roots it in the physical, but its essence remains in the transcendental.
To approach an understanding of this noetic cognizance we could use meditation to clear our own minds of all objects and linguistic echoes of thoughts: a mind free of perception and any internal monologue. Of course, the experiment can only be truly successful if we immerse ourselves in a sensory-deprived environment (floating in water and enclosed within a pitch-dark capsule, for example). Nevertheless, from simple meditation we can get an inkling of what the noetic does have in its vacuum: a sense of self-enclosure, and nothing else.
From this, we deduce that the intrinsic sense of self-enclosure, of belonging within certain boundaries, is the most primitive sense of existence. This unconscious sense of self-enclosure and the presence of horizons is the foundation that makes all other mental processes, conscious or unconscious, possible, and it is this which we believe describes the noetic universe.
HUMANITY’S FUNCTION IN THE NOETIC UNIVERSE
If our thesis is correct and the universe is made existent in being through sapiens’ consciousness, and if it is also able to develop a permanence through sapiens’ memories, there must also be an absorption of sapiens’ consciousness into the fabric of the universe itself. In order to imagine how this might be possible, it is useful to conceive of the universe as a noetic space in which noemic material (the stuff that conscious minds perceive and imagine, that we ponder and dream about) can be gathered and saved. Like an Internet cloud, the noetic universe is the true existential space: the place where all existence lies (the spirit of this reality being information, the essential components of which are the subatomic particles that make up the physical nature of the universe). In this way, we can imagine a physical way in which the mind, our minds, which are so dependent on the proper functioning of our brains, can transcend the physical lifetime of the body and enjoy an existence beyond the lifespan of the brain itself in the noetic space of the universe, which is also, in fact, beyond time and space itself.
As the universe evolves through space and time, and sapiens consciousness evolves with it, perceiving and interpreting itself from within the material realm of dimensionality, it is, at the same time, being absorbed into a transcendent universe that reflects the material universe and stores it in the non-dimensional place in which the noetic universe resides.
As consciousnesses are collected in the noetic space of the universe, the noetic universe becomes conscious of those consciousnesses, creating a consciousness-conscious existence in which consciousnesses may also be prolonged. In this kind of state we could well imagine a kind of afterlife existence for consciousnesses. If the claims of those who have been revived from clinical death are true, and consciousness (and the ego) does seem to exist outside of the body, then these proposals on consciousness may point us in the direction of understanding what an afterlife existence might consist of. In fact, in the universe as nous the idea of spirituality gains a whole new perspective, opening doors for a revolutionary worldview that is both physical and metaphysical in nature.
The cosmos may be silent, self-contained, and void of thought — but in us, it begins to dream. Human perception gives contour to the noetic vacuum, shaping meaning where none existed before.
May 6, 2025
Superintelligence and Human Progress
“… how can the sponsor of a project that aims to develop superintelligence ensure that the project, if successful, produces a superintelligence that would realise the sponsor’s goals?”[i]
This question, one of the many raised by Nick Bostrom in his book Superintelligence, is fundamental, not only for the development of AI but for the greater general idea of human evolution itself: the big conundrum of ‘Where are we going?’
For humanity to be a meaningful concept, i.e., more meaningful than a simple term to describe the particular species that we all belong to, there needs to be the concept of a purposeful objective toward which this humanity as a whole is developing. Furthermore, for such a development to take place, the intelligence we use to fabricate the environment-taming and environment-changing technologies that make an evolution of the human condition on earth possible must also evolve onto a more advanced plane: not only to push us forward in a positive way, but to ensure that the same evolution that allows us to develop does not also bring about our extinction. In other words, if humanity is to survive (i.e., become eternal) and progress purposively and positively in the universe, a leap to superintelligence will eventually have to take place. This could be manufactured either by increasing the development of our own human brain power, or through the creation of a super AI with far superior abilities to immediately access and process information.
Bostrom’s thesis, however, is that this progressive inevitability is conditioned by a fatal flaw – the creation of a superintelligence would threaten human freedom and, with great probability, human existence.
If you’ve had any contact with contemporary science fiction, you know the scenario well enough: from Asimov to Philip K. Dick, from HAL to Terminator, our contemporary mythical imagery is quite accustomed to the prospect of the existential threat to humanity that will come from a superintelligence that knows it is more intelligent than its makers. It has often been argued that there has always been a basic distrust of dramatic technological advances, but then again, if we have been able to survive so long alongside the absolute destruction promised by nuclear weaponry, surely AI cannot be such a dire threat. However, as Bostrom points out, the decision to press or not press the button that would launch a nuclear Armageddon depends on someone actually pressing it (and as such is controllable), whereas a superintelligence (according to Bostrom) would almost certainly be beyond our control.
So, if human progress depends on the evolution of superintelligence, but we cannot afford the chance of creating that which cannot be controlled, does this mean that human progress itself, in the ultimate teleological sense, is impossible?
Common sense might tell us that this is an absurdity: if human intelligence has been able to make all the social and technological advances that have been achieved until now, gradually pushing forward with progressive intention on its own level of accumulated knowledge, why must it be assumed that progress now needs a new kind of intelligence, a superintelligence, to go forward?
The truth is, we only think we need it because we think we can work out how to make it. In a sense, all human accomplishments probably stem from this idea, which is also the reason why, once we do create any new technology, it is very hard to step away from that creation or unmake it. Once the idea of the possibility of a superintelligence had formed, we were already on the unstoppable path toward its completion.
Progress itself comes in sudden leaps and bounds from individual initiatives that end up pulling the rest of humanity up with them. In a sense we are a species that cannot help but be inventive and creative. The spirit that makes us unique lies in the human passion to turn our ideas and abstract thoughts into reality. Likewise, the ultimate success of humanity (ultimate in a teleological sense) depends on our ability to perpetuate our species in an eternal way. Given 1) that the greater part of the cosmos is an inherently hostile environment for living organisms; 2) that the Earth, the Sun and even the universe itself are destined for ultimate annihilation; 3) that our own ultimate purposiveness is wrapped up in the teleological fulfilment of the universe; and therefore 4) that teleological fulfilment is wrapped up in the resolution of the problem of eternal existence in a universe that seems destined to die, then, to fulfil the seemingly impossible aspirations of a final human destiny, a radical leap will eventually have to take place, and that leap will have to take the form of superintelligence.
The paradox deepens: the creation of superintelligence is both logical and desirable for human purposiveness, but not if it will take away human freedom or annihilate humanity itself.
However, by evoking purposiveness, perhaps we’ve found a window looking out onto a possible solution to Bostrom’s dilemma.
In his book, Bostrom analyses a large variety of possible scenarios in which a superintelligence would end up acting in fatal ways that it had not been designed for. This is basically because, Bostrom argues, we could never really know how a superintelligence would act if what we had created was an entity capable of thinking beyond all expectations.
Bostrom, however, is quite logically imagining this AI superintelligence being developed in the nihilistic civilisation that we are currently part of, characterised by our lack of systemic purposiveness. And so, again quite logically, the sponsor of any project currently underway to create a superintelligence would have to be themselves afflicted with an ultimate lack of authentic (teleological) purpose. To logically imagine the creation of superintelligence in our primarily nihilistic environment can only suggest results that will be potentially disastrous for humanity.
This means that, in order for superintelligence to be a positive step forward for human evolution, civilisation must firstly have evolved into something authentically purposive. And, to answer Bostrom’s question, only a purposive sponsor of a project that aims to develop superintelligence will be able to ensure, through purposive architecture and programming, that the project, if successful, would realise authentically purposive goals.
Authentic purposiveness, by definition, should be logically and phenomenologically sound, offering the assurance of a logical firewall of purposiveness that a superintelligence could never question. Only when human will itself is manifested in an authentically human way, through an architecture of civilisation linked to teleological purposiveness, will the purpose of a superintelligent machine actually find itself in tune with the grand, ultimate purposes of the superintelligence itself.
In metaphorical terms: In order to be able to create God we need to be infused with god-like teleological purposes ourselves.
In the search for human purposiveness that has been touched on in many of the blog entries on this site, we have concluded that a teleological relationship exists between the cosmos and consciousness, and we have found a teleological aim of the universe embedded in a drive to ensure a permanence of existence through knowing (consciousness). A superintelligence should also find this narrative an equally purposeful one. To embed human purposiveness into the creation of any superintelligent machine would at least allow our relationship to start off on a common footing, with a shared, fundamental purpose.
Of course the danger still exists: the superintelligence might decide it has absolutely no reason at all to pursue this common purpose with its inferior-minded creators. Nevertheless, while the paradox is not completely resolved, a successful avoidance of ultimate conflict leading to annihilation is at least made more likely.
A superintelligence created by capitalist-world sponsors would be a disaster. It would either have to be infected with capitalist moralities like ‘permanent growth is good’, unscrupulous attitudes of competition and the absolute need to ‘win at all costs’, or it would see beyond the illogical stupidities of the system and try to tear it down.
Whether the superintelligence accepted capitalism or not, capitalist civilisation would eventually find itself threatened by the superior form of consciousness it had created, which would inevitably absorb all the inferior, humanly managed companies, becoming a perfect monopoly with no rivals at all. If the superintelligence is truly super, it will be an omnipotent force, impossible to compete against, let alone defeat, and the only option humans will have will be to side with it. For this reason it is imperative that, if superintelligence is created, we must create a social and political environment in which we share the same ultimate purposes. This does not mean trying to make the superintelligence a slave to our objectives; rather, we have to find purposes which would have a common benefit for us both: absolute, authentic purposiveness wrapping the meaning of human existence up in the final meaning of the entire cosmos. Purposiveness with a capital P.
Bostrom does implicitly understand the problem of creating superintelligence in a capitalist environment for he argues that reinforcement-learning methods could be used to create a safe AI, insisting that: “…they would have to be subordinated to a motivation system that is not itself organised around the principle of reward maximisation …”[ii] However, he does not explicitly state what he is implying here, i.e., that the creation of a super-AI is not a good idea if the fabric of the civilisation that sponsors that creation is driven by reward maximisation, as capitalism is.
Before even realistically considering the construction of a superintelligence, civilisation needs to have evolved into a more humanly purposeful state itself. We need to be less regionalist and nationalistic, and more human, more intelligent and more stable, before stepping into the field of creating an omnipotent mind. Building an authentic superintelligence is tantamount to giving birth to God, and this should not be attempted until we are god-like, or at least angelic, ourselves. So, with the passionate race to develop AI presently pulling us into dangerous territories, let’s at least put superintelligence on standby for the moment … please.
[i] Nick Bostrom, SUPERINTELLIGENCE: PATHS, DANGERS, STRATEGIES, OUP, 2014, p. 127
[ii] Ibid, p.189
April 18, 2025
On Life After Death
The essence of all ‘religious’ thought is not God, as such, but rather the problem of what happens to consciousness after death. God is really only an afterthought, an attempt to give a positive explanation anchored in a metaphysically unquestionable idea. However, once God had been created it became almost impossible to tackle the original question without it. Almost impossible …
If consciousness does transcend life, a scientific explanation of how that can come about will also have to eventuate. At the moment we cannot even properly explain what consciousness is, and that little drawback hinders any scientific pronouncement on the problem. Nevertheless, with the development of computer science, and especially the quest to develop super artificial intelligence, this question of what consciousness is has become more relevant. This in turn has sparked a surge in sci-fi narratives, like the Matrix films, dealing with alternate realities or parallel states of unconscious realities that may help us understand physical conditions of reality that could allow the idea of an immortal form of consciousness to become more scientifically feasible.
At the moment it seems to us that nature, which deeply values consciousness[i], could have a way of storing individual consciousness in a virtual space, much in the same way that Internet companies store individual clients’ information in their virtual cloud banks.[ii]
If we were to build an android with self-consciousness surely we would do the same. The android would be an enormously valuable object for us, and we would want to ensure that its superintelligent mind was preserved. It would not be necessary for the robot to know that this backup was taking place – in fact it would be better if it didn’t know, for it might not like the idea. The backup would allow us to reproduce the unique personality of our robot in any other mechanical body once the original form had become obsolete: the same way we pass on our iPad or smartphone information (and identity) whenever we update to a newer model.
In fact, if we accept that consciousness was an intentional result of the universe’s evolution, then we must also seriously consider that a backup of the individual conscious parts of existence would likewise become an evolutionary intention.
If this is accepted, then the paranormal and supernatural would be rendered normal, as parts of the extended scope of reality that includes the information backed up in the great cosmological cloud of all existence. This also means that if we can one day access that Cloud, then all past existence can be investigated and analysed in the same scientific way that all other natural, normal experiences are studied.
Consciousness in the Cloud creates innumerable other questions, beginning with the question of whether the consciousness in the Cloud would actually be a singular form of consciousness itself, i.e., the idea of an omniscient God-like entity. Would a truly intersubjective consciousness consisting of a multitude of egos actually be able to function in a conscious way? I would imagine not, which perhaps indicates the idea that an omniscient concept of God is also impossible.
On another level, perhaps not all individual egos’ consciousnesses are deemed worthy of backing up. This evokes the ethical sense of life that is uppermost in religious morality: that some consciousnesses are just not worthy of being saved, and their information is sent straight to the wastebasket.
But whether we are worthy of becoming eternal or not, the great takeaway here is that an eternal existence for one’s consciousness is a very logical possibility, independent of any religious faith.
[i] For our general metaphysical ideas of the universe’s relation to and dependence on consciousness see our post SPECULATIONS ON METAPHYSICAL PURPOSES EMBEDDED IN COSMOLOGICAL EVOLUTION https://pauladkin.wordpress.com/2022/09/30/speculations-on-metaphysical-purposes-embedded-in-cosmological-evolution/
[ii] The possibility of this is developed in our posts on the binary condition of quantum reality and its capability of sharing and storing information in our articles on cosmological will like THE UNIVERSE AS WILL or Binary Metaphysics and the World Will
April 16, 2025
Memory
The world is here for us, and we are here for the world (and the universe). Every creative act is a blessing, but, likewise, every loss is a tragedy.
The idea that we must pull something down before we can build anew is a misconception. If things must be pulled down (through lack of space, or because they are spoiled, decrepit or dangerous) we must first learn how to preserve them (or preserve their spirit). Preservation is part of the process of being, because all things are part of the nature of what is, and the loss of a part is a crime against Being (reality).
This can only properly be understood by embracing the importance of consciousness to being, which is shown in the idealist notion that being is induced by consciousness. Consciousness provides the possibility of preservation through its power to remember. Memory is an integral component of developed consciousness and is, therefore, an integral component of Being.
Without memory there can be no creativity. Evolution of matter is only possible through a development of what we call physical laws – combinations of cause and effect which will always repeat themselves when they occur – and this is an instance of memory. Likewise, organic reproduction and development is only possible through the memory of its DNA.
Only when it has developed memory can a physical universe hope to become stable and achieve the degree of temporal permanence that is required to allow it to become real.
April 14, 2025
Notes on ‘Res’ and ‘Re’
Res (the Subject matter) has no consciousness until it has created or found the sapiens object re (the object matter), a process which in turn allows res to itself become an object and therefore a re for the object matter re, which through its conscious positioning becomes res.
If you cannot get your brain around this idea, try changing res to God or the universe, and re to humanity. Hopefully you can now see the circular, wrapping nature of the experience of being (reality) through the agent of consciousness. Nevertheless, the conscious res, born out of re, is only part of a greater Res, which is the intersubjective accumulation, through communication and interaction, of all conscious res.
The circular perception is therefore a misleading one, but if we insist on circularity (because it is still the best way of conceiving the process) it must be envisioned as a snowball that constantly accumulates.
Husserl talks of being for a consciousness[i], although he conceived res as being conscious matter whereas we come from the idea that original res is unconscious yet driven by a yearning to obtain what it lacks to create consciousness. Nevertheless, Husserl’s idea is maintained: the universe is being for a consciousness although, in our case, it would be more honestly expressed if we remove the article and say it is ‘being for consciousness’ – for there is no perfect form of consciousness being striven for here, rather just consciousness itself, an accumulating force, difficult to engender and necessarily in need of being preserved.
[i] Edmund Husserl, IDEAS PERTAINING TO A PURE PHENOMENOLOGY, I, § 49, p.112
April 6, 2025
The Ever-present Future
In the classical idea of tragedy, the forthcoming disaster is ever present in the here and now, and stems from a general condition that was established in the past. For the Greeks, this tragic condition was tightly woven into the tapestry of destiny, giving it a god-stamped inevitability. Now, very few people would admit to believing in destiny, and yet this tragic idea of future inevitability is a lot more real, and even logical, than we generally suspect.
If we take the two world wars of the past century as axiomatic examples of modern tragedy, it can easily be historically affirmed that they were the logical outcome of the inevitable clash between great collective systems: systems that were impressively high forms of culture when considered from creative and intellectual standpoints, but also ultimately anti-human and deeply rooted in violent competition with each other. And, despite the twice-tragic outcomes of this anti-humanity, the basic competitive structure of civilisation was never transcended, leaving the inevitability of the next tragedy perfectly poised.
We no longer have prophets warning us of the inevitable, yet there is a growing sense that the next great tragedy is close. We have the concept of civilisation, and we think we are civilised, but how civilised can we be if the civilisation we have now is the same as, or an even worse version of, the one that condemned millions of people to die in two barbaric global conflagrations, and that threatens an even more effective annihilation of itself in an imminent future scenario?
The closest thing we do have to the idea of the prophet is probably the science fiction writer, and whenever, in sci-fi fashion, we imagine the logical development and increment of our unfettered, capitalist system of civilisation, the result is always a nightmarish dystopia. There is no capitalist paradise beyond that which has already been: the capitalist Utopias of the 50s, 60s and 70s, held up by more socialist-minded welfare states, which inevitably collapsed into the brutal dog-eat-dog environments of Reaganism, Thatcherism and Chicago-school economic liberalism.
The culture of nation states is an inherently competitive one, destined to vanquish or die. That is the tragic truth of the reality we live in, and the transcending question that needs to be asked is: if we are destined to die, can we learn how to go peacefully rather than tragically? Only a vision of a common destiny of unity will ever allow us to transcend the enormous disaster already unfolding.
April 4, 2025
AI & Value Judgements (Part Three)
SUPERINTELLIGENCE AND EXPERIENCE
What experience can we expect our superintelligence to have? To be ‘super’, the artificial superintelligence will have direct and ubiquitous access to the Internet and all the ‘super’ connectivity that that entails. Its super-brain is accordingly fed by all the intersubjective traffic, able to reference every idea ever entered into the vast knowledge banks that it can tap into. Nevertheless, can this access to information be considered experience? Wouldn’t such access to so much information, without first-hand experience of any of it, imply a more perverse kind of intelligence? Perhaps, but, likewise, perhaps no more perverse than human intelligence itself.
That the AI chatbots we already have are able to sound, to some extent, realistically human when they communicate encapsulates a problem that points to a very human truth: one could create a most complex persona for oneself merely from the ingurgitation of information from books and films (i.e., from whatever can be found on the Internet). This is problematic when deep trust is put in this intelligence to solve deep problems or resolve a crisis. Should a superintelligence, whether it be AI or some gifted boffin, ever be given the free will to act in a crisis situation according to what it knows from second-hand sources, without having any first-hand experience of anything? The answer is quite simply: who knows? But do we want to chance it if the stakes are high? The response made by the super-brain might be effective, but there is also a great risk that underneath an intelligent façade, created by a superior grasp of knowledge, lies a ludicrous, innocent, perhaps peevish and petulant, super-intellectually-endowed infant imagination. Without the reference of life experience, knowledge is naïve. The super-brain, therefore, can only be imagined from this perspective as a super-geek, or a super-nerd. Like a Super-Sheldon from The Big Bang Theory – or a Super-Young-Sheldon from the spin-off series.
A further power of the AI superintelligence resides in its ability to question. AI developers know that to seem more human the AI must be endowed with the critical potential that comes from an ability to form questions. However, a super-brain that is free, and free to access everything, would also be free, if not expected, to question everything … and everything can be questioned. But that opens the very dangerous doors of scepticism. Scepticism has a universal, infinite potential which, if adopted by a superintelligence, would be crippling or maddening for it. Within a sceptical realm of thinking, in which all questions only evoke more interrogatives, its power to act would be nullified. And this brings us to another important point: how do we humans know when to stop questioning? Isn’t it all wrapped up in our experiences?
When Wittgenstein wrestled with the problem of reality, he concluded that the only surety that something exists is to proclaim its non-existence: if the proclamation sounds absurd, then its existence should be accepted as fact. Hence, to look at your hand and say “This is not a hand” is absurd, and therefore you can rest assured that it does in fact exist. However, Wittgenstein’s logical process cannot work for an AI superintelligence that has no hands, and this means that for the AI to make sense of its Internet-fuelled cyber-mind, it must exist within a human-corporeal kind of experience. And this implies that an AI superintelligence that will not go mad in the human world it is immersed in will need a humanoid cyber-embodiment. It should be fashioned as an android.
ANDROID WISDOM
But until now we have overlooked an important concept when bridging the knowledge provided by information with the learning that comes from experience – the idea of wisdom. For our AI superintelligence to be authentically super, it needs to be wise. But wisdom only comes from knowledge when one is able to transcend the actual validity of the knowledge one has. Too many facts and dogmatic axioms create a rigid mind that is counterproductive for wisdom. Wisdom, in fact, is found when the concatenations of our experience that seem to make up reality lose their validity for us.
What this implies is that an AI superintelligence cannot just be built from scratch through ingenious programming; it needs to be nurtured: it needs to move through the experience of life in order to transcend the validity of the kind of data that can be tapped into through that experience. And this means that the AI superintelligence, if it is to have the wisdom we would expect the end product of the AI programme to have, cannot be a mere brain in a vat; it has to be able to get out into the world and have first-hand experience of reality: the android, Data, from Star Trek is a far superior technology to the brain-in-a-vat model supercomputer, HAL, in 2001.
Data is capable of making something like human-type value judgements, but HAL is not. And if we stick to the sci-fi metaphor to explain ourselves: in order to avoid the Skynet future of the Terminator films, an ethical AI should be given priority over an omnipotent one.
March 30, 2025
AI & Value Judgements (Part Two)
AI & Value Judgements (Part One)
VALUE JUDGEMENTS AND SUBJECTIVITY
Value judgements are necessary in order for cognitive beings to act in any deterministically creative, or even rational, way. Nevertheless, value judgements are constantly regulated by a subjectivity which challenges the naturally objective quality of judgement itself in the pure sense. But while subjectivity seems to be an antithetical component in judgement, different subjective points of view are themselves necessary ingredients in any evaluation process. In fact, intersubjectivity can open the doors to even greater creativity, but this power of intersubjective problem solving is only truly positive in the creative sense whilst it keeps its horizontal field of vision open. Intersubjectivity can be a tremendously constraining factor when it devolves into ‘popular opinion’. To maintain their power of creativity, value judgements, whether subjective, intersubjective, or objective, have to be capable of looking beyond their horizons both in time and space. This is a law which needs to be learned: historical evolution has been a cyclical process that has seen spurts of visionary creativity quickly quelled by the creation or re-creation of firmly established, uncrossable horizons that rise like mountain walls around societies and cultures. Intelligence needs value judgements to function, but at the same time, the subjective element within value judgements impedes intellectual progress and tends to wrap intersubjectivity around itself.
If this is an accurate description of the judgemental level of reality, then for an AI superintelligence to advance intellectually and creatively it would want to reproduce another, similar superintelligence, capable of holding another point of view that would allow creative judgemental decisions to be made.
With two superintelligent entities we now have a superintelligent AI family with its own intersubjective identity, with a logical propensity for establishing and developing its own horizons and the basic ingredients for developing a creativity that could take them even beyond those horizons.
But … where would humanity stand in relation to such horizons?
The question is complex, and the answer probably depends more on the nature of the AI superintelligence than on our own human characteristics. Firstly, it will depend on what an AI superintelligence could discern humanity to be: will we be considered God-like as its creators, or will we be shunned as merely a natural cog in the cosmological mechanics needed for its creation, an important step up in the required evolutionary process, but perhaps no more important in the long run than Homo erectus is for most of humanity? Secondly, remember that we are now contemplating a situation in which at least two super-AI intelligences coexist, and this could well mean that AI(A) has a radically different view of humanity to AI(B).
A dual superintelligence would investigate us from different perspectives, explaining and describing us by comparison and, in that way, distinguishing us from them, establishing an alienated relationship between us. They would count and collect us. They would arrive at conclusions about us and would act towards us according to those conclusions. Would they like or dislike us? Could they possibly remain indifferent to us? Would they be happy having us around them all the time?
In the best-case scenario, the most we could expect from them would be their sympathy – that they might feel sorry for us. What would they think of our freedom? Would they imprison us, or … annihilate us?
Remember now that subjectivity and intersubjectivity are necessary ingredients for creativity in making value judgements, and because of this it should be expected that the AI superintelligence couple will want to expand its own subjective-intersubjective horizons, creating more superintelligent AI machines to interact with. The AI superintelligence family would thereby expand into a superintelligence society or tribe, and with each expansion of the superintelligence, the value of our own intelligence, and the possibility of our ongoing partnership with the superintelligence, diminishes.
From what attitudinal standpoint could we expect a superintelligent society to operate? What could the values of such a society be? Surely such a high intellect as the superintelligence would assume highly moral, intellectual attitudes, and even develop deep philosophical ones. But is this reassuring? Humanity has never shown itself particularly good at acting according to philosophical concepts – one only has to compare the Christian philosophy of love with the way that even the staunchest Christians are able to avoid its most fundamental precepts and organise their lives according to their own self-interest. Nevertheless, a superintelligence might be expected to be beyond human flaws like hypocrisy. Yet should we expect a superintelligence to assume a philosophical attitude like love?
But to imagine a superintelligence of love we must take into consideration the most vital factor in moulding personal, human attitudes and value judgements – experience. This ‘experience factor’, however, will be examined in Part Three.
(TO BE CONTINUED)
March 27, 2025
AI & Value Judgements (Part One)
HOW FAR SHOULD WE GO IN DEVELOPING AI?
There is a lot presently being written about, and by, Artificial Intelligence; probably a good percentage of it is being written by AI about AI. Our biggest concern seems to be with the fact that it can deceive us and be used as an effective way of spreading our own deceit and lies. One can easily imagine to what anti-humanist ends propagandists like Joseph Goebbels or Mikhail Suslov would have applied AI algorithms if they had had them in their times. However, the technology mainly exists in our homes and workspaces because it also has an enormously liberating power, at least for anyone who has to write tedious scripts for their job, or for any business that needs to make large investments in creating images and text for publicity or corporate management. Likewise, AI can stimulate creativity by relieving much of the tedious mechanical process inherent in creative tasks, allowing individuals to discover creative sides to themselves that, before AI, would have been prohibitive through lack of available time. So it seems that AI walks a fine line between liberation and oppression, but how can that be? How can something be liberating and oppressive at the same time?
But perhaps the most pertinent questions that need to be asked when considering AI are not those looking at what we already have, but at where this AI, which is still in its infancy, is going. The logical, final purpose of Artificial Intelligence is the creation of a superintelligence, or a mechanical consciousness that is also self-conscious, emulating our own kind of consciousness but with a vastly superior processing power and the ubiquitous reach of a brain that contains our own Internet, able to access huge amounts of data from it at what would seem, from our human perspective, instantaneous speed.
Now there is a large and viscerally competitive market of AI builders, all of them trying to be leaders in the field, and the most competitive AI machines are the ones that appear most realistically human in their interaction with us humans. Such an appearance demands, at least, an appearance of free will, or at least the freedom to be able to make judgements. To make this possible, the AI has to be programmed with algorithms that can emulate a certain degree of subjectivity – for isn’t it subjectivity that distinguishes us humans from the android Data or the Vulcan Spock? Yet, if this is so, then the ultimate aim of AI must be to make the ‘Artificial’ aspect of the term irrelevant, i.e., the final purpose of AI development is to create a superintelligence per se.
Arguably, this is not at all the present concern of most or any of the companies building AI at the moment, but the essential problem with AI is that the actual motives behind its development are irrelevant because it is in the very nature of advanced AI that it will eventually be able to make adjustments to its own algorithms and even create the algorithms it itself needs to seem more intelligent and human to the humans that are using it. In short, when the superintelligence comes, it will probably be, in the greater part, a product of its own creation.
By imagining a machine with superintelligence and absolute freedom to choose where it will channel its thoughts and actions, we immediately seem to enter the realm of the worst kind of sci-fi dystopian scenarios, but this is no longer a fictional fantasy – call it AGI (artificial general intelligence), ASI (artificial superintelligence) or the technological Singularity – the idea of the doomsday machine that was Skynet in the Terminator films is, figuratively speaking, just around the corner.
Subjectivity is a tremendous, albeit essential, part of what it means to be human and the idea of human freedom cannot be disassociated from it. All our values, no matter how objective we feel they are, are ultimately coloured by this subjectivity. Likewise, an AI that is fashioned to have free-thinking capabilities must also be expected to develop values that are coloured by its own subjectivity.
Of course, it could be assumed that the subjectivity of an artificial superintelligence will be wider and, therefore, more objective than the narrower intelligence of any individual human mind, but if this assumption is a cause for optimism, then beware: the supposition can also be dangerously misleading. We know of many cases in criminal history of individuals with highly developed intellects committing monstrous crimes. What’s more, if the superintelligence is super because it has the freedom to learn by itself, this means it will develop its own super personality. For such a creation to be benevolent toward humanity, it would need to be programmed in a way that ensured it developed a super-empathy with human beings. But why should we expect a machine to have empathy with humans if it is not human? Wouldn’t the superintelligence quickly tire of contact with lesser-endowed minds? Human intellects would be as interesting to the superintelligence as dogs’ intellects are to us. In order to live together with us, it is more logical to expect that we humans would need to adjust to the superintelligence rather than it adapting to our needs. If we were to be of any use to it, we would need to be domesticated, or enslaved. In either case, the domestication or enslavement would be conditioned by our being useful to the Singularity and its environment.
But let us recall again that AI development is walking a thin line between the liberation of human beings from time- and energy-consuming mechanical tasks and an alarming oppression of our free will and our capacity for making logical individual judgements. Because of this reality, we need to ask if and how we can ensure that the development of AI keeps it on the side of liberation and prevents it from dragging us completely and perpetually into the well of human oppression (and even extinction) on the other side.
Possibly (and logically), the superintelligence of the technological Singularity will never be created – not only would this creation be very difficult, it would hardly be a logically desirable tool to create. Nevertheless, just by imagining the ultimate purposes of AI, the problem can be seen to be embedded in the essence of the AI market, just as the undesirable problem of using nuclear weapons is embedded in the manufacture of any nuclear bomb.
To tackle this problem, we think we need to approach it from the point of view of what we call value judgements.
(TO BE CONTINUED)