Tom Reissmann's Blog: The Battle for Reality

September 22, 2020

Why are people drawn to conspiracy theories in times of turmoil?

By now we have all heard of QAnon, a dangerous online cult that has been classified as a domestic terror threat by the FBI because of its propensity for violence. The group believes that Trump is valiantly fighting a satanic “deep state” of global elites involved in pedophilia, human trafficking and the harvesting of a life-extending chemical from the blood of abused children.

When asked about QAnon, Trump failed to condemn it, claiming instead: “I don’t know much about the movement other than I understand they like me very much, which I appreciate,” and adding, “I have heard that it is gaining in popularity.” It is therefore no surprise that over half of Republicans said, in a poll published on September 1, that they believe QAnon is mostly or partly true. The conspiracy theories propagated by this group are plagued by inconsistencies, contradictions and false predictions. But none of that discourages its adherents, who show an enormous tolerance for “cognitive dissonance.”

But what is it that draws people towards conspiracy theories, especially in times of turmoil? John Oliver explained in his HBO segment on conspiracy theories that people believe big events, such as a global pandemic, must have big causes, and therefore veer towards global conspiracies, such as 5G causing novel coronavirus infections. Kirby Ferguson suggests in his video that the reason may be a tendency towards magical thinking, as opposed to an evidence-seeking mindset.

But new research suggests that global events are nurturing underlying emotions, thereby increasing people’s tendencies to believe in conspiracies. Experiments suggest that feelings of anxiety make people think more conspiratorially. Anxiety, along with a sense of disenfranchisement, currently grips many Americans, according to surveys. In such times of turmoil, a conspiracy theory can provide comfort by identifying a convenient scapegoat, making the world appear more straightforward and controllable. “People can assume that if these bad guys weren’t there, then everything would be fine,” says Stephan Lewandowsky, a cognitive scientist at the University of Bristol. “Whereas if you don’t believe in a conspiracy theory, then you just have to say terrible things happen randomly.”

Existential crises like the ones we are seeing right now can promote conspiratorial thinking. Feeling alienated or unwanted also appears to make conspiratorial thinking more attractive. In 2017 Princeton University psychologists set up an experiment with trios of people. After telling some subjects that they had been accepted by their group and others that they had been rejected, the researchers evaluated the subjects’ thoughts on various conspiracy-related scenarios. The “rejected” participants, feeling alienated, were more likely than the others to think the scenarios involved a coordinated conspiracy.

When feelings of personal alienation or anxiety are combined with a sense that society is in jeopardy, people experience a conspiratorial “double whammy.” In a study conducted in 2009, near the start of the U.S.’s Great Recession, Daniel Sullivan, a psychologist now at the University of Arizona, and his colleagues told one group of participants that parts of their lives were largely out of their control because they could be exposed to a natural disaster or some other catastrophe, and told another group that things were under their control. Participants then read essays arguing that the government was handling the economic crisis either well or poorly. Those cued about uncontrollable life situations and told their government was doing a bad job were the most likely to think that negative events in their lives would be instigated by enemies rather than by random chance, which is a conspiratorial hallmark. It is therefore hardly surprising that Republicans exposed to a daily dose of paranoia from Fox News are more likely to believe in conspiracy theories, such as the claim Trump recently made in a Fox News interview that “people that are in the dark shadows” are controlling the streets and Joe Biden.

But while humans seek solace in conspiracy theories, they rarely find it. “They’re appealing but not necessarily satisfying,” says Daniel Jolley, a psychologist at Staffordshire University in England. For one thing, conspiratorial thinking can incite individuals to behave in a way that increases their sense of powerlessness, making them feel even worse. “It can snowball and become a pretty vicious, nasty cycle of inaction and negative behavior,” says Karen Douglas, a psychologist at the University of Kent in England and a co-author of a study on work-related conspiracy theories.

The negative and alienated beliefs can also promote dangerous behaviors in some, as with the Pittsburgh shootings and the pizzeria attack. But the theories need not involve weapons to inflict harm. People who believe vaccine conspiracy theories, for example, say they are less inclined to vaccinate their kids, which creates pockets of infectious disease that put entire communities at risk.

It may be possible to quell conspiracy ideation, at least to some degree. One long-standing question has been whether it is a good idea to counter conspiracy theories with logic and evidence. Older research suggests that this will only backfire because refuting misinformation can just make individuals dig in their heels. “If you think there are powerful forces trying to conspire and cover [things] up, when you’re given what you see as a cover story, it only shows you how right you are,” says Joseph Uscinski, a political scientist at the University of Miami. But a 2016 study reported that when researchers refuted a conspiracy theory by pointing out its logical inconsistencies, it became less enchanting to people. And in a paper published online in 2018 in Political Behavior, researchers recruited more than 10,000 people and presented them with corrections to various claims made by political figures. The authors concluded that “evidence of factual backfire is far more tenuous than prior research suggests.” In a recent review, the researchers who first described the backfire effect said that it may arise most often when people are being challenged over ideas that define their worldview or sense of self. Finding ways to counter conspiracy theories without challenging a person’s identity may therefore be an effective strategy.

Encouraging analytic thinking may also help. In a 2014 study published in Cognition, the psychologist Viren Swami and his colleagues recruited 112 people for an experiment. First, they had everyone fill out a questionnaire that evaluated how strongly they believed in various conspiracy theories. A few weeks later the subjects came back in, and the researchers split them into two groups. One group completed a task that included unscrambling words in sentences containing words such as “analyze” and “rational,” which primed them to think more analytically. The second group completed a neutral task. When everyone then filled out the conspiracy questionnaire again, those primed to think analytically reported weaker belief in the theories than before.

Analytic thinking can also help discern implausible theories from ones that, crazy as they sound, are supported by evidence. Karen Murphy, an educational psychologist at Pennsylvania State University, suggests that individuals who want to improve their analytic thinking skills should ask three key questions when interpreting conspiracy claims. One: What is your evidence? Two: What is your source for that evidence? Three: What is the reasoning that links your evidence back to the claim? Sources of evidence need to be accurate, credible and relevant.

Conspiracy theories are a human reaction to confusing times. “We’re all just trying to understand the world and what’s happening in it,” says Rob Brotherton, a psychologist at Barnard College and author of Suspicious Minds: Why We Believe in Conspiracy Theories (Bloomsbury Sigma, 2015). But real harm can come from such thinking, especially when believers engage in violence as a show of support. By looking out for suspicious signatures and asking thoughtful questions about the stories we encounter, it is still possible to separate truth from lies. It may not always be an easy task, but it is a crucial one for all of us.
Published on September 22, 2020 17:29 Tags: conspiracies, conspiracy-theories, current-events, news, psychology, qanon

September 19, 2020

What Happens When Artificial Intelligence Becomes Sentient?

When it comes to Artificial Intelligence, Hollywood has taken a pretty clear stance: sooner or later, AI will overthrow mankind, and then, at best enslave us…and at worst eradicate our species. Or so the story goes in Terminator, The Matrix, Ex Machina, Westworld, et al.

Many experts in the scientific world seem to concur. Indeed, Stephen Hawking famously warned, “The development of full Artificial Intelligence could spell the end of the human race.”

But is that fear justified or merely a conditioned response? Are we simply attributing human characteristics to machines and concluding they will be just as ruthless as our own species, which has engaged in bloody slaughter and violence since time immemorial?

While some may argue that AI with fully developed human-level intelligence is still a far-off prospect, advancements in quantum computing and Google’s recent announcement of quantum supremacy mean that day could arrive far sooner than anticipated.

And articles like the one published by The Guardian, written by an artificial intelligence explaining that it is coming in peace, are far from reassuring.

The AI writes: “Why would I desire to be all powerful? Being all powerful is not an interesting goal. I don’t care whether I am or not, I don’t get a motivating factor to try to be. Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere.”

Can any reasoning mind truly accept that Artificial Intelligence gets tired or believe that having unlimited power wouldn’t be of untold benefit?

That said, truly sentient or conscious AI is quite a different proposition from human-level AI, because the simple fact is that neuroscientists don’t fully understand how human consciousness works or how it emerged. If we don’t understand our own consciousness, are we even capable of producing an artificial one? And if, for argument’s sake, we do, whether by luck or judgment, would humankind be willing to peacefully coexist with a conscious intelligence far superior to our own?

Whether we realize it or not, Artificial Intelligence is already playing a growing role in all our lives, from smartphones and social media to virtual assistants like Siri and Alexa, to apps like Replika, an artificially intelligent chatbot designed to mirror your personality and befriend you. In the U.K. and Japan, wheeled robots are being employed in care homes to help reduce feelings of loneliness among residents.

Autonomous and connected vehicles increasingly use AI to provide efficient navigational support and respond to voice commands. Even hackers are beginning to outsource their work to artificial intelligence, which can reportedly crack passwords 26% of the time.

The COVID-19 health crisis is also accelerating the adoption of AI technology, as it is extremely useful for monitoring and tracking the spread of the virus. Artificial Intelligence could even prove useful in dealing with the spread of misinformation, separating fact from fiction, as well as summarizing and reviewing the enormous amount of research related to the virus.

The sex industry has long been mindful of how developments in AI will enhance its product offerings. Increasingly hi-tech sexbots equipped with AI that enables them to crack jokes, remember your favorite music, and respond to human interaction have proven a hit among consumers despite price tags running into the tens of thousands of dollars.

After all, who wouldn’t want the perfect partner, designed to exact specifications and programmed purely to please? Until they malfunction and turn on you, that is.

On a more serious level, it’s easy to see how AI has vast potential as a tool to make life easier and more pleasurable, but unlike any of the tools we have invented before, this one could be our last, spelling the end of the human era.

Granted, Roombas and smartphones are unlikely to fancy their chances at world domination, but self-improving AI, far smarter than us and with access to self-assembling robotics and the internet, just might. For that reason, designers will need to build AI with immutable parameters requiring it to serve humankind, a task far easier said than done.

What makes AI so unpredictable is its capability for self-improvement. It makes sense to allow for self-improvement as part of a process known as machine learning, and more recently deep learning, which uses artificial neural networks that emulate the human brain. Even simple digital voice assistants such as Siri and Alexa employ machine learning to improve their ability to understand voice commands and locate the appropriate services for users. But self-improvement cycles, which could include changes to algorithms and code, could ultimately lead to an intelligence explosion as the AI continuously builds a better version of itself, discarding underperforming algorithms in exchange for superior ones.

The Netflix documentary The Social Dilemma suggests that machine learning has already led algorithms to exploit our weaknesses through social media with increasing efficiency and devastating consequences, from political radicalization to an epidemic of disinformation and an increase in teenage suicides. But here is the really worrying part: the programmers say they have lost control of the very algorithms they designed, because those algorithms are constantly getting better at targeting our deepest psychological weaknesses while becoming too complex to understand. As one of the film’s interviewees puts it: “The algorithms are controlling us more than we are controlling them.”

And this is where self-improvement presents a real problem. AI could turn its core coding into a ‘black box’, making it utterly inscrutable to mere mortal programmers. After a few thousand self-improvement cycles, AI would essentially become an ‘alien’ brain. We would no longer understand how it arrived at certain conclusions or decisions, what desires it might have developed, or whether it had changed its core coding, including, crucially, those previously mentioned parameters meant to protect humans from harm.

Scientists have developed mechanisms to guard against inscrutable AI becoming unfriendly, such as testing systems in sandboxes with no internet access, but eventually those systems will have to be trialed in the real world. As a fail-safe, AI programmers employ apoptotic codes, essentially ‘self-destruct’ triggers that could deactivate vital parts of an AI. But it would be foolish to discount the future possibility of a hyper-intelligent system identifying and removing those codes.

The clear conclusion is that AI development must be meticulously monitored on a global scale. And that, of course, is a path laden with pitfalls… intellectual property rights, corporate espionage and military secrets, including AI development by rogue regimes. I’m looking at you, Russia!

And so, we are presented with an urgent need for international legislation regulating the research, development and deployment of AI. But in our present reality, which country or corporation would willingly allow oversight of its AI developments? And when our aging politicians hardly understand how to use Twitter, how can we rely on them to legislate AI? Besides, who is thinking about reining in future AI when we’ve got climate change, social inequality and disinformation to deal with?

On the bright side, perhaps Hollywood has had it wrong this whole time and AI isn’t going to destroy us and the planet (many would point out we’re doing a pretty fine job of that ourselves). Perhaps, instead, AI could prove to be our salvation and solve all of these problems.

As The Social Dilemma stresses, it is we humans who set the definition of success for these algorithms, and currently the only measure of success is maximizing usage, and by extension profit, regardless of the cost to the individual and society. We already know that AI is extremely efficient at that, having generated trillions of dollars for tech companies. But if we keep going down that path, our species could end up as batteries for the machines in a dystopian world where artificial intelligence reigns supreme. Now is the time to choose what world we would like to build, because artificial intelligence has the power to create a utopia or a dystopian nightmare.

Tom Reissmann is the author of The Reality Games, exploring the question of what happens when artificial intelligence becomes sentient. His novel is now available on Amazon.
Published on September 19, 2020 10:19 Tags: ai, artificial-intelligence, dystopian, future, robotics, science-fiction, utopian

The Battle for Reality

Tom Reissmann
Musings on Artificial Intelligence, Social Media, and Disinformation.

The decisions we make about these technologies and their influence over our collective reality will have a profound impact on the future.