AI & Value Judgements (Part Three)

SUPERINTELLIGENCE AND EXPERIENCE

What experience can we expect our superintelligence to have? To be ‘super’, the artificial superintelligence will have direct and ubiquitous access to the Internet and all the ‘super’ connectivity that entails. Its super-brain is accordingly fed by all the intersubjective traffic, able to reference every idea ever entered into the vast knowledge banks it can tap. Nevertheless, can this access to information be considered experience? Wouldn’t such access to so much information, without first-hand experience of any of it, imply a more perverse kind of intelligence? Perhaps, but likewise perhaps no more perverse than human intelligence itself.

That the AI chatbots we already have can sound, to some extent, realistically human when they communicate encapsulates a problem that points to a very human truth: one could create a most complex persona for oneself merely by ingurgitating information from books and films (i.e., from whatever can be found on the Internet). This becomes problematic when deep trust is placed in such an intelligence to solve hard problems or resolve a crisis. Should a superintelligence, whether it be an AI or some gifted boffin, ever be given the freedom to act in a crisis according to what it knows from second-hand sources, without having any first-hand experience of anything? The answer is, quite simply: who knows? But do we want to chance it if the stakes are high? The response made by the super-brain might be effective, but there is also a great risk that underneath an intelligent façade, created by a superior grasp of knowledge, lies a ludicrous, innocent, perhaps peevish and petulant, super-intellectually-endowed infant imagination. Without the reference of life experiences, knowledge is naïve. The super-brain, from this perspective, can only be imagined as a super-geek or a super-nerd: a Super-Sheldon from The Big Bang Theory, or a Super-Young-Sheldon from the spin-off series.

A further power of the AI superintelligence resides in its capacity to question. AI developers know that to seem more human the AI must be endowed with the critical potential that comes from an ability to form questions. However, a super-brain that is free, and free to access everything, would also be free, if not expected, to question everything … and everything can be questioned. But that opens the very dangerous doors of scepticism. Scepticism has a universal, infinite potential which, if adopted by a superintelligence, would be crippling or maddening for it. Within a sceptical realm of thinking, in which all questions only evoke more interrogatives, its power to act would be nullified. And this brings us to another important point: how do we humans know when to stop questioning? Isn’t it all wrapped up in our experiences?

When Wittgenstein wrestled with the problem of reality, he concluded that the only surety that something exists is to proclaim its non-existence: if the proclamation sounds absurd, then its existence should be accepted as fact. Hence, to look at your hand and say “This is not a hand” is absurd, and you can therefore rest assured that the hand does in fact exist. However, Wittgenstein’s logical process cannot work for an AI superintelligence that has no hands, and this means that for the AI to make sense of its Internet-fuelled cyber-mind, it must exist within a human-corporeal kind of experience. This implies that an AI superintelligence that will not go mad in the human world it is immersed in will need a humanoid cyber-body: it should be fashioned as an android.

ANDROID WISDOM

But until now we have overlooked an important concept when bridging the knowledge provided by information with the learning that comes from experience: the idea of wisdom. For our AI superintelligence to be authentically super, it needs to be wise. But wisdom only comes from knowledge when we are able to transcend the actual validity of the knowledge we have. Too many facts and dogmatic axioms create a rigid mind that is counterproductive for wisdom. Wisdom, in fact, is found when the concatenations of our experience that seem to make up reality lose their validity for us.

What this implies is that an AI superintelligence cannot just be built from scratch through ingenious programming; it needs to be nurtured. It needs to move through the experience of life in order to transcend the validity of the kind of data that can be tapped into through that experience. And this means that the AI superintelligence, if it is to have the wisdom we would expect the end product of the AI programme to have, cannot be a mere brain in a vat: it has to be able to get out into the world and have a first-hand experience of reality. The android Data from Star Trek is a far superior technology to the brain-in-a-vat model supercomputer HAL in 2001: A Space Odyssey.

Data is capable of making something like human-type value judgements, but HAL is not. And if we stick to the sci-fi metaphor: in order to avoid the Skynet future of the Terminator films, an ethical AI should be given priority over an omnipotent one.

Published on April 04, 2025 01:17