AI, Dire Warnings and The Influence of Stories

Any sufficiently advanced technology is indistinguishable from magic.


Arthur C. Clarke

 

Even when working correctly, AI 2.0 could be Cambridge Analytica on steroids: a disinformation machine, personalizing and adjusting its message to persuade and influence swatches of the population.


Kai-Fu Lee

Co-Chair of the Artificial Intelligence Council at the World Economic Forum

 

People respond to anecdotes and stories more than to facts. Facts are dry and boring, devoid of emotion. So, in response, emotion is being poured into the story of AI and the wonders it will unveil. In the tech and medical communities, the message is uplifting, positive, exciting. New breakthroughs mean better health care, the ability to travel to distant worlds, someone to write your dry technical description so you don’t have to work as hard. Whatever it is, the boring stuff is erased. Even traditional work will be erased.


How exciting and fascinating I used to find technology and space stories: real-world breakthroughs in science that led to definite progress, which is reflected in my works . . . despite some apocalyptic themes. That is, until social media revealed itself as a scourge on society and contributed to the resurgence of fascism. In Sinkhole, I discussed the necessity of moving forward with exploration and scientific invention while acknowledging that, in our classist society, people are suffering and should be cared for and lifted out of poverty. In Ice Tomb, an ancient but advanced civilization used new technology to provide an escape from disaster. In Time Meddlers, I discussed our failures and warned of the allure of technology, but still forged ahead to alter the past.


The forging ahead, despite everything, is still happening.


If we look at the bulk of science fiction themes, the AI warnings have been a little off the mark, so far: destructive robots bent on killing all of humanity, war games where the AI could not differentiate between a game and real life and could not reason that global nuclear war would mean complete annihilation, sentient robots subjected to human rights abuses. The one that stands out is the Terminator series. Killer robots? No, but maybe drones. Nuclear destruction by Skynet? Perhaps more likely. For example: “OpenAI Signs Deal with US Government to Use Its AI for Nuclear Weapon Security.”


The difference in most of these stories is that eventually humans prevail over AI. We will not. Even AI scientists and ethicists, like Max Tegmark in his book Life 3.0, admit that any superintelligence that surpasses us will leave us behind in one way or another. “Since we humans have managed to dominate Earth’s other life forms by outsmarting them, it’s plausible that we could be similarly outsmarted and dominated by superintelligence.”


 Nothing can really prepare us for an intelligent entity beyond our scope and no one can predict what it might do. Let me repeat: no one can predict what it might do, and we can’t reasonably prepare for it. Right now, some tech ethicists and governments are trying to do both, because the genie is out of the bottle. And others—we will call them the tech capitalists—are not. They’re just unleashing it, for the sake of greed.


The reasons I’m focusing on AI are many. First, it is being developed with little consideration for the consequences. Lately I have been reading and researching books about AI. I will begin with two that struck me as particularly alarming:


 AI 2041 by Chen Qiufan


 Life 3.0 by Max Tegmark


 Qiufan collaborated with Kai-Fu Lee, both pioneering technologists, to write short stories of a potential AI future less than 20 years away. Max Tegmark is an MIT professor and president of the Future of Life Institute, which is dedicated to exploring and setting down ethical guidelines for the future of AI. Both understand the dangers of AI, and both have an optimistic viewpoint. How could they not? They work in and live for the AI industry, and they want this technology to proceed despite the dangers.


I don’t work in the industry, but I do live in the world of real-world future speculation. I shared in the excitement of the expanding technological world, including social media, and I have witnessed the degradation of society because of it, and perhaps the death of democracy. I have come full circle.


We should not underestimate the potential of certain technologies to lead to an apocalypse, or to the end of a free society, which is the equivalent of an apocalypse. I began my exploration of this topic in The Silent Gene series, in which society had decided AI was too dangerous to proceed with after it went rogue on Mars, reproducing robotic AI while ignoring instructions from Earth. A lone robot survived an EM pulse designed to eliminate the AI threat. This element of the story was an aside, a commentary, a tiny portion of the greater plot. But as I delved deeper into the topic, I found it extremely difficult to imagine what this creature might think and what it might do, because even the experts now discussing the future of AI haven’t a clue.


We will begin by discussing what they know, and what they don’t.


1.     They don’t know how to build human-friendly AI. Right now, AI is rife with errors and human biases, so a fair, measured, intelligent AI is far from reality. Facebook has released its own version of AI, which has proven to possess some of the worst flaws of humanity. Dr. Fei-Fei Li observed the same problem in her own work with AI (The Worlds I See): fed unlimited data, including from social media, the AI was incorporating biases, hate speech, the worst aspects of our nature. Civilized human beings at least restrain these beasts and aspire to incorporate equality, peace and love into our lives; but with fascist political parties and enemies of democracy openly encouraging racist, misogynistic, homophobic behaviour, it is no wonder AI is not a decent entity.

 

I would like to add some ethical principles that have been suggested at the Asilomar AI conference to see where the experts are, and whether you think they can control AI in our current anti-reality, anti-democratic climate.

 

Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms and cultural diversity.

 

The principles are great. There are several more that I will add to each point below. But are they realistic and are they achievable?

 

2.     They do know that AI is already becoming a strategic genius, playing the most difficult game on the planet and winning against humans every time. This sends chills down my spine. Think of a strategic genius without morals: no conscience, no lifetime of history and context to guide its actions. Autonomous drones (Life 3.0) are predicted to be able to assassinate a target with explosive charges shot directly into a person’s brain. Anyone who presents a threat to the current government (assuming it has nefarious purposes) could be disposed of instantly. That’s not to mention the surveillance capabilities AI has now and will have in the future.

 

From Asilomar AI conference:

 

Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

 

AI Arms Race: An arms race in lethal autonomous weapons should be avoided. (That worked well with nuclear weapons.)

 

3.     They do know that AI is built on centuries of raided and stolen written history, and on daily human life online, which they coin as “data” and which has been fed to the AI without perspective, or the rules and laws garnered from our vast experience. Even as the world takes a step back from empathy and decency because of misleading stories, if not outright lies, the tech companies are charging forward with an undoubtedly corrupted AI.

 

As a storyteller, this point is especially problematic for me. We, as humans, are guided by our emotions more than our intellect. Stories are generated to touch these emotions, move us. As a result of propaganda and political agendas, the lines are blurring between fiction and reality. So many conspiracy theories are simply stories designed to manipulate us.

 

Will AI base decisions on perceived reality – the stories we’ve been telling for centuries (mythology, religion, just good old fiction, and the myriad conspiracy theories now online), rather than facts and truth?

 

The Worlds I See by Dr. Fei-Fei Li, a book I mentioned above, delves into this topic. Li was one of the original researchers in AI, and the book describes her journey in advancing this technology. She thought it was perfectly okay to mine our data to develop AI; grand larceny wasn’t in her vocabulary. Dr. Li approached AI with an abundance of optimism, but as the book progressed, and as she dumped more and more data into the machine, the results began to shake her to her core. Ethical concerns cropped up more and more, until by the tail end of the book there was nothing else. Here are some quotes:

 

“My mind kept returning to those eight hundred GPUs... So many transistors. So much heat. So much money… AI was becoming a privilege. An exceptionally exclusive one.”

 

“…lack of transparency intrinsic to its design… staggeringly powerful when organized at the largest scales, and thus virtually immune to human understanding.”

 

“… an emerging threat known as ‘adversarial attacks’… (such as) to fool a self-driving car into misclassifying a stop sign.”

 

“…true progress in addressing such complex, unglamorous challenges demanded a kind of reverence that Silicon Valley just didn’t seem to have.”

 

“…they pointed toward a future that would be characterized by less oversight, more inequality, and in the wrong hands, possibly even a kind of looming, digital authoritarianism.”

 

AI is a current-day Frankenstein’s monster, whether they like it or not.

 

4.     They do know AI will very soon take over most employment, people’s bread and butter. Both AI 2041 and Life 3.0 state this outright; they don’t merely suggest it. They envision a utopian world where people don’t have to work and will work only for fulfillment. But in our current system of capitalism, with its humongous gap between the rich and the poor, how can we get from Point A to Point B? The wealthy oligarchs of our current age have no interest in redistributing their wealth, which is the only way to support the future jobless.

 

Tegmark mentions universal basic income as a solution, but can he convince our current society to implement it, particularly when “pull yourself up by your bootstraps” is the mantra, and when the false notion of the American Dream has infiltrated more than just American culture? He holds the optimistic view that human rights will always be considered whenever society changes, but are they even being considered now? How can you possibly think that in less than 20 years, the timeline given by Chen Qiufan, people rather than money will be the first priority?

 

From Asilomar AI Principles:

 

Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

 

Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

Are they realistic and are they achievable?

 

5.     They do know funding for AI research is coming from spurious quarters.

 

Tegmark states in his book that the funding for some of these ethical guidelines in AI research came from Elon Musk. But Musk is now espousing eugenics, raising his arm in a Nazi salute, encouraging ideas and potential programs that are abhorrent to say the least. Does this not in itself warn us that breaching these ethical guidelines is inevitable? Do you think AI researchers will abide by the rules when the current technology companies are no longer guided by even minimal ethical standards?

 

Common good, sure.

 

6.     They do know that AI is already capable of human surveillance.

 

Doesn’t that already raise alarm bells for you? It does for me. I hate that Alexa or Facebook or Google is always listening to what I say and watching what I consume, feeding it into algorithms that place ads in front of my face.

 

Fei-Fei Li thought she was providing a great service by recording patients to detect gaps in care, like handwashing lapses, but nurses soon began to call it Bossware, a form of employee monitoring. While it was not used for that purpose, it was perceived that way, and it certainly could be used that way in the future. It could be used by governments; it may already be, who’s to say? And now money is being dumped into research for exactly that undisguised purpose.

 

Bettering society? I think not.

 

Even now Google has lifted a ban on using its AI for weapons and surveillance: https://www.wired.com/story/google-responsible-ai-principles/

 

From Asilomar AI Principles:

 

Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse and actions, with a responsibility and opportunity to shape those implications. (Already being misused. Do they feel guilty?)

 

Liberty and Privacy: The application of AI to personal data must not reasonably curtail people’s real or perceived liberty.

 

Authoritarian governments do not care about your liberty.

 

Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

 

We are living in a technocracy.


I’ll end with that one. I know there are several more points and we could go on endlessly, but for all the good intentions of AI researchers who came up with the principles and truly mean to honour them, we are still faced with the dilemma of dishonourable people in positions of power. Sci-fi writers were not wrong, just not acquainted with all the facts when they wrote their warnings. Our world was changing for the better after World War II and we said “never again.” Now it has been subverted by propaganda, conspiracy theories and garbage misinformation directly associated with the same evil. It has been taken over by greedy robber barons and vainglorious bastards. Will an AI funded by these very same people be any different?



 

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”


Isaac Asimov

Published on February 20, 2025 08:44