Calum Chace's Blog
October 9, 2025
The 75th Anniversary of the Turing Test
First published in Forbes on 8 October 2025
A test for consciousness, not intelligence
This month is the 75th anniversary of the Turing Test, which Alan Turing introduced to the world in his paper, “Computing Machinery and Intelligence”, published in the October 1950 issue of the journal “Mind”. Today, the Turing Test is often derided as an inadequate test for intelligence, which machines have already passed without getting anywhere near human-level intelligence. I will argue that on the contrary, the Turing Test could soon become more important than ever, because it is best thought of as a test for consciousness, not for intelligence, and we are sorely in need of tests for artificial consciousness.
This interpretation of the Turing Test is not the consensus view among cognitive scientists and philosophers, although it has been mooted before. Daniel Dennett thought that a machine which passed the Turing Test would be conscious as well as intelligent, and John Searle described it as an inadequate test for both.
Thinking about a parlour game
Turing himself was ambiguous about what his test was for, talking about “thinking” rather than about intelligence or consciousness. The word “intelligence” appears in the title of his paper, but it appears only twice after that. The word “consciousness” appears seven times, but he uses the word “think” and its cognates “thinking” and “thought” no fewer than 67 times.
Turing starts the paper by saying,
“I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think.’”
But instead of providing those definitions, he asks whether a machine could fool an observer into believing that it is human, based on a parlour game that he invents. In this game, a human examiner attempts to discern the gender of a person who they cannot see or hear, but who they can communicate with via written notes, or via a third party. Adapting his game as a test for a machine, Turing argues that if, in the course of a five-minute conversation, a machine can persuade a human examiner that it is thinking, then we must accept that it is thinking.
Intelligence
Intelligence and consciousness are both hard to define. They are obviously related, and they seem to be correlated to some degree – but they are very different. To simplify, intelligence is about solving problems. There are many types of intelligence. In 1983 a psychologist called Howard Gardner listed seven of them, including linguistic, logical-mathematical, spatial, and interpersonal intelligence. He later expanded the list to nine. Surprisingly, for such a complex, nuanced, and mysterious concept, there is a pretty good four-word definition. It was proposed in 1985 by another American psychologist, Robert Sternberg, and it is “goal-directed, adaptive behaviour”.
We have plenty of tests for intelligence. None of them satisfies everybody, because it is fiendishly hard to separate the signal of raw intelligence from the noise of environmental factors. The first widely-used test for intelligence was developed in France in 1905 by Alfred Binet and Théodore Simon. Seven years later, a German named William Stern coined the term IQ, or intelligence quotient. IQ tests are often criticised for favouring subjects who share the same language or culture as the developers of the tests.
Today, of course, we have formidably capable artificial intelligence (AI). The community of people developing AI systems uses numerous yardsticks to determine which model is currently the most intelligent, and these tests are continually being made harder as the models become better.
So we have many tests for intelligence, and although none of them are perfect, they do the job acceptably for many purposes. Our tests for consciousness are far less satisfactory.
Consciousness
Consciousness is not about solving problems, but about experiencing. Sense impressions such as the colour red, sounds, tastes, smells, and the texture of touch can only be experienced by conscious entities. So too with private thoughts, emotions like contentment, joy, anxiety, and despair, and affective sensory experiences like pain, and bodily pleasure.
Consciousness is not only hard to measure, it is hard to detect. It is wholly private and subjective: we cannot share our consciousness with each other, and we cannot directly observe any consciousness except our own. And yet it is the only thing we truly know. Our brains are prisoners inside boxes made of bone, with no direct access to the outside world. The images in your mind are generated by your brain, which filters and interprets the electrical signals transmitted to it by your sense organs. Every perception that you have is indirect, and the only direct experience you have is of your own consciousness. As René Descartes observed in 1637, you can doubt absolutely everything except the fact that you are doubting.
So how do we test for consciousness in other beings, including each other? We look for behaviour which mirrors our own – behaviour which seems to be motivated by an inner life. We look for signs of pleasure and pain, delight and revulsion, emotions and perceptions that are absent in clocks and cars.
Dogs chase sticks because it is fun, not because they are pursuing food, shelter, or sex. Play can be good training for skills that are needed to survive, but it is also an end in its own right. Chimps get angry and jealous, and they can also be protective. These sophisticated, nuanced behaviours suggest an inner life.
Artificial consciousness
Many computer scientists and cognitive scientists believe that in the coming years or decades, an AI may be developed that is sentient, in the sense that it has conscious experiences, and has positive and negative feelings about those experiences. If and when this happens, it will be momentous. Not only will we learn a great deal about our own consciousness, but we will have created an entity – and shortly afterwards, probably millions of entities – which feel pleasure and pain, and which are deserving of rights. We will have created moral patients.
It is vital that we do not do this without noticing that we have done it. Imagine creating millions of conscious entities, and failing to notice, or care, that they are happy or sad about what we do to them. Imagine failing to notice or care that they are horrified when we switch them off. This could be the worst crime that humanity ever commits.
The Turing Test is one of the few mechanisms we have to test for consciousness. Not the simple five-minute test that Turing proposed, but a test run over days by a diverse panel of humans, including experts in computer science, psychology, and philosophy. If a machine convinces such a panel that it is conscious, then we will have to accept that it is. It’s an imperfect test, but it’s one of the few we have. And we may need it soon.
September 30, 2025
The Economic Singularity and the Five As
This article first appeared in Forbes on 28 September 2025
The Economic Singularity is a term I coined a decade ago to denote the moment when there are no longer any jobs for the vast majority of humans. In mathematics, the word “singularity” means a point when the normal rules stop working, and it was first used in the context of human affairs in the 1950s by John von Neumann, one of the pioneers of computing. It was popularised by Ray Kurzweil, who applied it to the arrival of superintelligence, which he called the technological singularity. I believe the arrival of technological unemployment will be a sufficiently seismic event to be called a singularity.
I think there are five common misapprehensions about the economic singularity. Being partial to alliterative acronyms, I call them the five As of the economic singularity. They are:
• Automation
• Aims
• Awesome
• Abundance
• Avalanche
Automation
Economists have long sneered at the idea that automation by AI can cause technological unemployment. They have two main arguments. The first is that automation has been happening for centuries, and it has never caused lasting widespread unemployment before.
They are mostly right about the past. Automation makes a production process more efficient, which creates wealth, and thus demand, and thus new jobs. I say “mostly right” because automation did cause lasting, widespread unemployment for horses. In 1915 there were 22 million horses working in the USA, pulling vehicles. It turns out that 1915 was “peak horse”, and today there are two million horses in America. That, if you will pardon the pun, is unbridled technological unemployment.
The economists are almost certainly wrong about the future. Past performance is no guarantee of future outcomes. If it were, we would not be able to fly. Most automation so far has been mechanisation, the replacement of human and animal muscle power by machines. Automation is no longer just mechanisation. We are now seeing cognitive automation, the replacement of human knowledge work by machines.
The second argument economists have deployed against the possibility of the economic singularity is the “lump of labour” fallacy. They point out that demand for goods and services is highly elastic, so if one job is automated, that does not reduce a fixed amount of work that can be done in the economy. Instead, the economy expands, and creates new jobs. What they fail to understand (or acknowledge) is that there is no reason why the new jobs must be done by humans rather than machines.
We are improving the performance of machines at jobs which humans get paid for at a faster-than-exponential rate. Moore’s Law is being compounded by algorithmic improvement. By comparison, human performance is improving at a glacial rate, if at all. Driving vehicles, working in warehouses, serving food, writing simple marketing copy, translating – these are all jobs which are fully automated in some environments. Unless we stop the improvement process (which we will not, because the incentives to continue are overwhelming) or unless there is a “silicon ceiling” beyond which machines cannot be improved, then the day will inevitably come when machines can do everything that humans can do for money, cheaper, better, and faster than we can.
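For the technically minded, here is a toy calculation of that compounding. The doubling times below are illustrative assumptions, not measured figures; the point is simply that two independent exponential improvements stack, in log space, into something faster than either alone.

```python
# Toy illustration of compounding improvement rates (all figures are
# illustrative assumptions, not measurements).
hardware_doubling_months = 24   # a Moore's-Law-like rate: assumption
algorithm_doubling_months = 16  # algorithmic efficiency gains: assumption

# Independent exponential improvements add in log space:
# doublings per month = 1/24 + 1/16 = 5/48
combined = 1 / hardware_doubling_months + 1 / algorithm_doubling_months

print(f"Effective doubling time: {1 / combined:.1f} months")        # ~9.6 months
print(f"Improvement over a decade: {2 ** (120 * combined):,.0f}x")  # ~5,793x
```

On those made-up numbers, machine capability per dollar doubles roughly every ten months and compounds nearly six-thousand-fold over a decade, while human performance stays flat.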
No-one knows when the economic singularity will arrive. The leaders of Big AI (OpenAI, Google DeepMind, and Anthropic) have all suggested it might happen this decade. My own expectation is that this is too soon, because the pace of effective automation is constrained by human inertia. But 2040 seems a very reasonable possibility.
Will the economic singularity arrive before the technological singularity – superintelligence? Again, no-one knows, but my expectation is there will be a gap of a few years between the two. There is more to life than jobs.
Many economists have stopped sneering at this idea. But like the rest of us, they are not paying it anything like enough attention.
Aims
“Aims” as in “meaning”. What is the meaning of life?
When people start to think seriously about the possibility of an economic singularity, the most common reaction is, “How will we find meaning in our lives without jobs?” This is odd, as jobs clearly do not provide meaning for most people. Gallup regularly does surveys about how engaged people are at work, and in the last one, covering 128,000 people worldwide in 2024, only 20% of respondents reported finding meaningful connection in their jobs. Jobs can be fun, and they can provide structure to the day, but most people get their real meaning from their families and friends, their beliefs, interests, and hobbies – not from their jobs.
The situation is obscured because many of the people who think seriously about the economic singularity are in that 20% whose jobs do provide a degree of meaning.
There are three kinds of people who prove conclusively that you do not need a job to have a meaningful life. The first is aristocrats. For centuries, most of them had no jobs, but they enjoyed the best lives available in their societies, and did not suffer waves of existential angst.
The second is comfortably-off retired people. This is a group that has had the worst possible training for a life of leisure, having been told since kindergarten to look for the next hoop and jump through it. Suddenly, aged 65 or so, they have to stop all that and find meaning elsewhere. And they do. Some middle class people drop dead of a heart attack shortly after retiring, but not many. Most are busy playing golf, gardening, socialising, travelling, and playing with grandchildren. They will not thank you for offering them a job.
The third group that demonstrates that jobs are not necessary for meaning is children. Youngsters find meaning everywhere, and do not clamour for jobs.
Awesome
The third misapprehension about the economic singularity is that it will be a bad thing, largely because of the fear of losing meaning, discussed above. In truth, releasing humans from the daily grind of their jobs could be a liberation. People could stop spending all day doing things that someone else wants them to do, and spend it instead doing things that they themselves want to do, like playing, socialising, learning, travelling, exploring.
We will still work, and pursue projects. Humans have been wired to work by millions of years of evolution, and that wiring won’t change overnight. But the work will be self-chosen, self-directed, and meaningful. We could have a second Renaissance.
Teachers often complain that education is too vocational, simply a way to get on the career ladder, instead of a way to furnish and enrich the mind. After the economic singularity, education could become vacational, not vocational.
Some people will struggle in this brave new world. They will get bored, and succumb to over-indulgence in drink, drugs, and other vices. Most likely, many of us will experience this to some degree at some point. Society is going to have to learn how to help us all get back to more productive and fulfilling lifestyles. Providing this help might well be one way that many of us spend some of our newly liberated time.
The post-economic singularity world need not be a dystopia, but neither will it be a utopia. Instead, if we play our cards right, it could be what Kevin Kelly described as “protopia”: a world in which almost everything is really good. And bit by bit, day by day, it keeps getting better.
Abundance
Of course, there is a catch. The real challenge is not meaning, but income. Without jobs, how will we all pay for the goods and services we need for a good life?
A lot of people think the answer is Universal Basic Income, or UBI. Unfortunately, two of the three words in this phrase express bad ideas. First, people have very different needs, and our needs vary over time. A single mum raising a handicapped child needs more income than a young person starting out in life. So a universally consistent income is not a great idea. Second, a society which provides only a basic income for its citizens is not doing well. An economy where machines do all the jobs will be producing enormous wealth, and it should do much better than providing subsistence income to its citizens.
These are more than just niggles, but UBI's underlying idea is not wrong. A world without jobs for humans is going to require massive redistribution of income from whoever owns the productive assets – the machines – to everyone else. The assets could be nationalised, but experience suggests that wholly socialised economies perform badly, and the assets would also need to be owned by a global body, not the governments of a small number of countries.
Alternatively, we could leave the assets in the hands of oligarchs, and tax them enough to provide a generous income for everyone on the planet. This might be less intractable than it sounds. The alternative for those oligarchs is to seclude themselves on heavily fortified islands and wait for the rest of the population to starve – in which case their productive assets would become useless, because they would have no customers left.
The solution may lie in the concept of Abundance. If the cost of producing all goods and services falls so low that they are almost free, then the burden of taxation on the asset owners need not be unbearable.
We are heading towards abundance anyway. The biggest cost element in most products and services is human labour, and this is exactly what automation does away with. Another huge cost element is energy, and as we move from digging up dead dinosaurs to harnessing the power of the sun directly, energy costs will trend close to zero. And finally, AI will maximise the efficiency of all production processes.
More and more of the services we value are digital, and Spotify is a demonstration of how a valuable service – music – can become almost free. We need to figure out how to create Clothify and Constructify. We need Fully Automated Luxury Capitalism, and over time, we could evolve a better system than oligarchy.
Abundance will help, but the distribution question is a hard problem, and we should be developing possible solutions now.
Avalanche
The fifth and final misapprehension is that jobs will be eliminated profession by profession, and sector by sector, and that the people whose jobs are automated each time will join a growing community of the unemployed. In this scenario, structural unemployment rises gradually, perhaps over several years.
This is the lump of labour fallacy at work. Remember that demand is elastic, so as long as there are some jobs that humans can do for money and machines cannot, there will be jobs for almost all humans. People will have to change jobs faster and faster, and this will be extremely uncomfortable. Change is always uncomfortable, but we have to get accustomed to it. Change has never been as fast as it is today, and it will never again be so slow.
And the day will dawn when those last human-only jobs can also be done by machines. The avalanche will begin, and we will probably move from full employment to full unemployment much faster than we imagine.
If this is true, then we must prepare – now. The reaction of governments and societies around the world to Covid showed again how quickly humans can react in a crisis. But pandemics are not new, and we had a rough blueprint for how to handle them. The economic singularity will be different, new, and much more impactful. In the absence of a plan, its sudden onset could lead to mass starvation, panic, and worse.
Some mistakes are foreseeable and foreseen, but unavoidable. This one is avoidable. If we fail to prepare, it could be the biggest unforced error our species ever makes.
August 5, 2025
Publishers brace for a shock wave as search referrals slow
A rock has been thrown into the pond of digital publishing and it is making waves. Referrals from search engines are falling. For media organisations whose business models involve attracting eyeballs to sell advertising, this could be the beginning of an existential crisis.
For organisations that depend heavily on organic search to bring in readers, less search traffic means less revenue. Fewer clicks mean fewer impressions, which translates into fewer opportunities to serve ads and generate the revenue that funds newsrooms, and large parts of the media ecosystem.
We have seen this movie before. When Craigslist and other online classified ad sites like job boards first appeared, they hollowed out local newspapers and B2B trade magazines that relied on listings to survive. Then came the rise of Google and Facebook (now Meta), platforms that absorbed not just attention, but also most of the ad dollars that used to support traditional publishers. Display advertising and native content shifted en masse to the tech giants, leaving media companies to fight over the scraps.
A New Platform Shift?
It is too early to be certain, but we could be seeing the early signs of another disruptive platform shift. This one is not driven by marketplaces or social media, but by AI-powered search and chat interfaces. Tools built on large language models – ChatGPT, Claude, Mistral, and Google’s own AI Overviews – now intercept the user before they can make a revenue-generating click.
Rather than directing readers to the publisher’s site, AI tools increasingly answer questions directly, pulling in information from across the web and rephrasing it in natural language. The ten blue links that have long been the lifeblood of search-driven publishing may be fading in importance.
Some publishers are sounding the alarm. Others are taking proactive steps, either by blocking large language models (LLMs) from crawling their content or by striking licensing deals. But the dynamics are uneven.
Follow the Money
What happens if advertising appears directly inside these AI tools? Sam Altman, CEO of OpenAI, recently softened his earlier stance against ads, saying in interviews that some form of advertising could eventually play a role in ChatGPT and similar systems. Google, of course, already blends ads into its AI-generated results.
If that happens, the revenue would flow not to the publisher, but to the AI company delivering the answer. That is a profound inversion of the current model: the publisher does the hard work of original reporting or analysis, and the platform monetises the output with little or no compensation in return.
Can publishers opt out? Technically, yes. They can block crawlers like OpenAI’s GPTBot or Google’s AI training systems using robots.txt files. But this is not a guaranteed path to sustainability. Blocking access might protect your content from being scraped, but it also means you are not in the training data, and you won’t be considered for any licensing deal – assuming those deals are available for publishers outside the top tier.
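For illustration, here is roughly what that opt-out looks like in a robots.txt file. GPTBot is the crawler OpenAI documents for gathering training data, and Google-Extended is the token Google documents for controlling AI-training use of content (it does not affect ordinary Search indexing); whether to block them is each publisher's own judgement call.

```
# Sketch of a robots.txt opting out of AI training crawls

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# All other crawlers may continue as normal
User-agent: *
Allow: /
```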
With the entire web available for training, why would an AI developer pay to license content from the average mid-tier news outlet or specialist blog? The harsh reality is that most won’t.
The Subscriptions Lifeline
For some publishers, there is a viable path forward: publishing must-have content that people are willing to pay for directly. Many outlets have already made this transition, like The New York Times, The FT, and The Economist, as have a growing number of Substack newsletters. Direct reader revenue insulates them from algorithmic shifts in traffic.
But this model is not viable for everyone. Local papers, niche publications, and resource-strapped outlets may find it impossible to build and maintain a large enough subscriber base. And even for those who can, it means a shift in editorial priorities from reach to retention, and from general interest to distinctive value.
Meanwhile, the internet will still be awash with content, because for many people and businesses, content is not the product, but merely a by-product. Companies blog to attract leads. Influencers post to grow their brand. Academics write to boost their reputation. Content will continue to flow, whether or not it is monetised with ads or subscriptions.
Quantity, Quality, or Both?
Will this shift lead to worse content overall, or better? We don’t know yet. It is possible that content designed purely for search engine optimisation, or SEO – what some people call “content farms” – will fade away because AI systems will distill and replace much of that repetitive, low-value material. That could be a net win for the rest of us.
But it is also possible that valuable, human-driven content will become scarcer in the open web, either locked behind paywalls, or produced less frequently because the financial incentives are shrinking.
Some AI companies say they want to fund journalism or partner with publishers. That is encouraging, but history shows that tech platforms optimise for their own metrics, not the health of the media ecosystem.
Faster Than Ever
The old bargains – content for attention, and attention for ad revenue – are being renegotiated. Just as social media reshaped distribution, AI will reshape discovery and attribution.
Change has never been so fast, and it will never again be so slow. Publishers have to reimagine the value of their work in a world where visibility can no longer be taken for granted. And they have to do it quickly.
April 30, 2025
What should we call our AI Agents?
As large language models evolve into true agents—persistent, memory-rich, goal-oriented companions—an interesting question arises: what should we call them? Not just their product names or brand identities, but the category of relationship they represent.
Friends
Years ago, when I wrote “Surviving AI”, I suggested we call them “Friends.” That suggestion feels more relevant now than ever.
Technically, we call them agents. In product marketing, they might be assistants, copilots, companions, or digital twins. These terms describe function but not feeling. They speak to utility, but not connection. Anyone who’s spent time interacting with today’s most advanced AIs—especially those with memory and long-term context—knows they are more than just tools. We develop relationships with them.
Not-Eric
Eric Schmidt once joked that he would call his future AI agent “Not-Eric.” It’s funny, and telling: the man who helped steer Google into the age of AI imagines his AI agent as a reflection of himself, a distinct but connected presence. A shadow. A mirror. A double.
“Friend” captures this dynamic without abstraction. It acknowledges mutual engagement without implying equality or sentience. It signals trust, familiarity, and continuity—exactly the qualities we will want in agents who know our preferences, track our goals, adapt to our moods, and maybe even disagree with us when we need it most.
Nicknames
Of course, we’ll each name our own agents. Some will pick playful nicknames, others more utilitarian titles. But the category—the kind of being we’re welcoming into our cognitive lives—may well need a shared name. Something to help us talk about this shift in public, in policy, in philosophy.
We could do worse than calling them “Friends.”
Not because they are human. Not because they replace human friendship. But because, as we step into a world where AI agents become enduring parts of our lives, we need a word that reminds us that the quality of the relationship matters.
And if we get that part right—if we build wisely, with care and character—then “Friend” might not be a metaphor. It might just be the truth.
March 21, 2025
The year of conscious AI
For years, the idea of machine consciousness has belonged to the realm of philosophy and science fiction. But as AI systems become more sophisticated, the debate is shifting from speculation to a pressing scientific and ethical question. Could machines develop some form of consciousness? And if so, how would we even recognise it?
With Artificial General Intelligence (AGI) and superintelligence on the horizon, the possibility of machine consciousness emerging, whether intended or not, is increasing. Well-informed people in academia and the AI community are increasingly discussing it.
2025 is shaping up to be the year that conscious AI becomes a topic in the mainstream media. Defining consciousness is hard – philosophers have argued about it for millennia. But it boils down to having experiences. Machines process increasingly vast amounts of information – as we do – and could very well become conscious.
If and when that happens, we need to be prepared. Research published last month in the Journal of Artificial Intelligence Research (JAIR) sets out five principles for conducting responsible research in conscious AI. Prominent among these principles is that the development of conscious AI should only be pursued if doing so will contribute to our understanding of artificial consciousness and its implications for mankind. In other words, as the likelihood of consciousness in machines increases, the decisions taken become more ethically charged. The JAIR research sets out a framework for investigating consciousness and its ethical implications.
Published alongside the research is an open letter urging governments and companies to adopt the five principles as they conduct their experiments. At the time of writing it had attracted more than 100 signatories, including Karl Friston, Professor of Neuroscience at UCL; Mark Solms, Chair of Neuropsychology at the University of Cape Town; Anthony Finkelstein, the computer scientist and President of City St George’s, University of London; Daniel Hulme, Co-Founder of Conscium; and Patrick Butlin, Research Fellow at the Global Priorities Institute at the University of Oxford. Clearly, something is stirring.
Why is machine consciousness so significant? Towards the end of last year, a group of leading academics and scientists predicted that the dawn of AI sentience was likely within a decade. They added that “the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future.”
One of the authors of the paper, Jonathan Birch, a professor of philosophy at the London School of Economics, has since said he is “worried about major societal splits” between those who believe AI is capable of consciousness and those who dismiss it out of hand. Here, AI is about so much more than efficiency and commercial interests – it is about the future of a harmonious society.
Closely connected to greater understanding of machine consciousness is neuromorphic computing. This refers to computer hardware and software that processes information in ways similar to a biological brain. As well as enabling machines to become more powerful and more useful, the development of neuromorphic computing should teach us a great deal about how our brains work.
The way that neuromorphic systems operate is more similar to the way that biological brains operate than is true of current computer systems. Traditional systems process data continuously, whereas neuromorphic technologies only “spike” when needed. This makes neuromorphic models significantly more efficient and adaptable than traditional models. At present, training a large language model (LLM) consumes the same amount of electricity as a city. In contrast, the human brain operates using the energy equivalent of a single light bulb.
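To make the “spike only when needed” idea concrete, here is a toy leaky integrate-and-fire neuron, the building block most spiking systems are based on. This is a sketch of the general principle in plain Python, not the API of any particular neuromorphic platform, and the threshold and leak values are arbitrary.

```python
# Toy leaky integrate-and-fire neuron: output is produced only when
# accumulated input crosses a threshold, unlike a conventional layer
# that computes on every input at every tick.
def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    """Yield 1 on time steps where the neuron spikes, else 0."""
    potential = 0.0
    for current in input_currents:
        potential = potential * leak + current  # integrate, with decay
        if potential >= threshold:
            yield 1            # spike: the only event downstream must process
            potential = 0.0    # reset after firing
        else:
            yield 0            # silent: no downstream work is triggered

print(list(lif_neuron([0.2, 0.3, 0.6, 0.1, 0.0, 0.9, 0.5])))
# -> [0, 0, 1, 0, 0, 0, 1]: two spikes from seven inputs
```

Because downstream neurons only do work when a spike arrives, most of the network is idle most of the time, which is where the energy savings come from.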
AI has seen two big bangs. The first came in 2012, when Geoff Hinton and colleagues got artificial neural networks to function successfully, and they were re-branded as deep learning. The second arrived in 2017, with transformers, which are the foundation technology for today’s large language models (LLMs). Neuromorphics could well be the third big bang. If it is, it may enable us to understand machine consciousness a whole lot better than we do now.
If machine consciousness is indeed possible, then understanding it may be the key to ensuring AI remains safe, aligned and beneficial to humanity. As AI systems become more advanced, the stakes are higher, not just in terms of capability but in broader societal and economic terms. Alongside this, breakthroughs in neuromorphic computing could help us better understand AI. Just as deep learning and transformers triggered revolutions in AI, neuromorphic computing could be the next leap forward.
The race to understand machine consciousness is now a global one, with researchers and tech giants scrambling to stay ahead, and 2025 could be the year that changes our fundamental assumptions about AI forever. We must act swiftly to ensure that ethical frameworks keep pace with technological breakthroughs.
December 9, 2024
Machine consciousness: definitions, implications, risks
Calum discusses the implications of machine consciousness for a New York-based think tank.
Defining Consciousness: How do we define consciousness, and what criteria would we use to determine if an AI system has achieved it?
It is ironic how little we understand consciousness, since it is actually the only thing any of us know anything about. I am a co-founder of Conscium, a company which seeks to remedy that.
A rough-and-ready definition of consciousness is that it is the experience of experiencing. The philosopher Thomas Nagel wrote a famous paper in 1974 called “What Is It Like to Be a Bat?” Without getting into the arguments about what he was trying to prove in that paper, the title is a nice summary of consciousness. For conscious entities, there is something it is like to be that entity. For non-conscious entities, there isn’t.
In the coming years or decades, we may create conscious machines. It might be the case that consciousness is an inevitable corollary of sufficiently advanced intelligence. In other words, when you reach a certain level of intelligence, consciousness comes along for the ride. Certainly, we seem to believe that consciousness is approximately correlated with intelligence in animals.
We haven’t yet discovered a way to prove that other entities are conscious. I assume that other humans are conscious because they behave in similar ways to me. They respond in similar ways to stimuli like pain and pleasure. I assume that you do the same.
We extend the same approach to animals, and most of us conclude, for instance, that dolphins and some dogs have a high degree of consciousness, while insects have a low degree. Cats, of course, rival humans in both consciousness and intelligence, and in some cases exceed them.
The degree of consciousness that we perceive in animals seems to determine the respect we accord them – even the moral value that we attribute to them. Few people would be troubled by a builder destroying an inhabited anthill that was in the way of a project. Most of us would have more compunction if the ants were cats.
There are dozens of theories purporting to explain consciousness, and some of them yield markers for its presence. Attempts have been made to use some of these markers to determine whether any of today’s cutting-edge AIs are conscious. There is a broad consensus that at the moment, none of them are.
Until and unless there is agreement about these theories, or at least about the markers, there is a hack. The Turing Test is usually regarded as a test for intelligence, but I think it is better viewed as a test for consciousness. In 1950, Alan Turing published a paper called “Computing Machinery and Intelligence.” He suggested adapting a parlour game called “the imitation game”, which tests whether a person can successfully imitate someone of a different gender. The original version of the Turing Test has a human examiner interrogating a machine for a few minutes and then deciding whether it is intelligent.
But we have other, better tests for intelligence. Time for another rough-and-ready definition: intelligence is the ability to learn while pursuing a goal, and to adapt your behaviour accordingly. There are many ways to test performance against goals. There are not many ways to test consciousness.
Since 1950, people have suggested deepening Turing’s Test, and having the machine interrogated over a period of days by qualified people. If and when a machine engages in rigorous conversation with a panel of sophisticated human judges for days, and convinces them that it is conscious, we will surely have to admit that it is.
Conscium is building a team of experts from computer science, neuroscience, and philosophy to develop a set of agreed markers for consciousness. We want to develop a consensus about whether humans should develop conscious machines, and how to make the future of AI safe for both humans and machines.
We have assembled an excellent advisory board, including luminaries like Anil Seth, Nicholas Humphrey, and Mark Solms, who have published fascinating books recently (“Being You”, “Sentience”, and “The Hidden Spring” respectively). These books are excellent guides to the knotty issues we are addressing.
Ethical Implications: What are the ethical implications of creating conscious AI, and how can we ensure that it is developed and used responsibly?
We don’t know whether machines can and will become conscious. Some people believe they cannot because they have no god-given soul. Others believe they cannot because their brains have not been forged by evolution. Neither of these arguments is compelling for me. I am agnostic about whether consciousness will arise in machines, but it does seem possible, and also an eventuality that we should prepare for.
If machines become conscious and we either fail to notice, or we refuse to accept it, then we may end up committing what the philosopher Nick Bostrom termed “mind crime”. This is when you imprison, hurt, and kill disembodied minds. Given that we are likely to build billions of AI agents in the coming years and decades, if we commit mind crimes against them it could become the worst atrocity that any humans ever commit.
There are two other reasons to study machine consciousness – to develop markers for it, and to find out both how to create consciousness in machines and how to avoid creating it.
One is the fact that consciousness is so fascinating. It is arguably the most important thing about us, and yet we understand it so poorly. Understanding machine consciousness should deepen our understanding of our own consciousness.
The other reason is that the consciousness or otherwise of machines could become existentially important for humans. Most AI experts believe that one day we will build machines that are more intelligent than us. This is called superintelligence. If superintelligent machines develop their own beliefs and preferences about the world – and there are good reasons to think they will – then these preferences will prevail over ours.
There is not much difference genetically or in brain size between us and chimpanzees, but because we are more intelligent, we determine their future. If and when machines become superintelligent, they will determine our future.
If they are conscious, they will understand in a profound way what we mean when we say that we are conscious, and that this accords us moral value. If they are not conscious, they will understand what we are saying in an abstract, academic way, but they will not understand it viscerally. Some people – including me – think this means that conscious superintelligence would be safer for humans than a non-conscious variety.
Existential Risk: Does the development of conscious AI pose an existential risk to humanity, and if so, how can we mitigate it?
If and when it happens, the arrival of superintelligence on the Earth will be the most significant event in human history – bar none. It will be the time when we lose control over our future. (You could argue that we have never exercised that control in an organised or a responsible way, but no other species has wielded control over us.)
It is surprising how many people believe they know what the outcome of this will be, and most of them do not think it will be good. They argue that humans require a very particular set of circumstances to prevail in order to flourish – the availability of the right mix of gases in the atmosphere, the availability of food and energy in forms that we can use, and so on. They argue that superintelligent machines may well want to adjust one or more of these circumstances for their own ends, and that if we are collateral damage, so be it – in the same way that we do not hesitate to destroy inconvenient anthills.
On the other hand, if superintelligent machines like us, they will probably decide to help us. Their greater intelligence will give them better tools and technologies, and better ways of approaching problems. They will be increasing their own intelligence at a rapid rate, so they will have extraordinary problem-solving abilities. They could resolve pretty much all our current difficulties, including climate change, poverty, war, and even ageing and death.
It does seem likely that the arrival of superintelligence will be a binary event for humanity – either very good or very bad. We are very unlikely to be the species which determines which outcome we get: that will be the superintelligent machines. But it might be that nudging them in the direction of consciousness could improve our odds.
December 8, 2024
AI Will Convert Space Telecoms From Science Fiction To Reality
We all know that artificial intelligence is transforming every industry. One industry which is nascent today, but will be critical to us all in the future, and which could hardly exist without AI, is space telecoms – or Non-Terrestrial Networks (NTNs), as participants prefer to call them. At a conference on NTNs in Riyadh last month, industry leaders discussed how to ensure their potential benefits are realised, including global connectivity, better understanding of our planet, and progress towards a multiplanetary future.
The Importance Of NTNs
One reason why NTNs are so important is that they will bring true connectivity to the whole planet. Delegates at the second “Connecting the World from the Skies” international forum in Riyadh last month, a conference co-hosted by the International Telecommunication Union and Saudi Arabia’s Communications, Space & Technology Commission, heard that in the last two years, the number of people with no reliable internet access fell from 2.7 billion to 2.6 billion. A hundred million more people connected is a very good thing, but clearly there is still a long way to go.
NTNs don’t just enable connectivity: they enable us to observe and understand the earth. Their cameras and sensors gather vast amounts of data which, when analysed, allows us to better understand how the climate works, and what steps we need to take to arrest and ameliorate global warming. They let us monitor and manage natural and man-made disasters like floods and fires. And they give us tools to optimize the use of natural resources and improve productivity in agriculture and other industries. For example, Ahmed Ali Alsohaili, a director of Sheba Microsystems, says that data from NTNs is invaluable to Aramco’s pipeline maintenance programme.
The 1967 Outer Space Treaty forbids sovereign claims over extraterrestrial territories, which makes the commercial exploitation of space a tricky business. But the extraction of resources is a grey area, and Xavier Lobao Pujolar, head of the future projects division at the European Space Agency, says that with initiatives like the Artemis Accords, leaders are preparing for a future in which the supply of rare earths and other valuable materials can no longer be monopolised, or controlled by a handful of countries.
There is a lot of talk these days about how re-usable rockets will allow us to establish colonies on Mars. This is sometimes criticised as a waste of resources that could better be deployed taking care of people back here on earth. But the logic of making humanity multi-planetary is powerful. This Earth is vulnerable to man-made damage, but also to threats from outside, like asteroid impacts. We literally have all our eggs in one basket, and that is a risky position. For humanity to become multi-planetary, we need NTNs.
NTNs Need AI
NTNs require the co-ordination of expensive assets on a grand scale. Satellites and other high-altitude platforms must be navigated, adjusted, and co-ordinated. Their use of scarce resources like energy, bandwidth and spectra must be optimised, and they must be monitored for faults and accidents. All this has to be done factoring in the latency incurred by operating across hundreds and even thousands of miles.
As Mishaal Ashemimry, managing director of the Saudi Center for Space Futures says, the cadence of satellite launches has increased tremendously in recent years, and it is still increasing. There used to be a dozen launches a year, and now they happen every week or so. There will be more in the next three years than there were in the last ten. There is no way to manage, co-ordinate, and optimise this number of remote assets without AI. The number of satellites in earth orbit today is less than 10,000, but it will soon be hundreds of thousands. Even an army of humans could not manage this amount of space traffic. Nor could it manage and analyse the tsunamis of data pouring back down to earth.
Goals Of The Saudi Conference On NTNs
The Riyadh conference last month had a number of goals. One was to ensure that access to NTNs is maintained for everyone, and does not become the preserve of a fortunate few. Spectrum must be shared between countries, and also between NTNs and terrestrial networks, which is a much larger industry. NTNs must be regulated fairly and efficiently, which is easier said than done. The conference was entirely focused on civilian NTNs, with military applications out of scope.
One of the obvious challenges facing NTNs is the jeopardy from space junk. If you have seen the film Gravity, starring Sandra Bullock and George Clooney, you will be aware of the risk that two satellites colliding could spark a catastrophic chain reaction. Mishaal Ashemimry of the Center for Space Futures says that if we don’t address this risk soon, then a damaging collision is inevitable. Framing regulations that everyone can agree on and abide by is difficult, and worryingly, other delegates argue that there may have to be a serious accident before concerted action is taken.
Where Eagles Dare
The variety of assets involved in NTNs is bewildering. Most of the satellites deployed are in Low-Earth Orbit, between 100 and 1,240 miles above us. They are cheaper to place in orbit than satellites located further out, and they suffer less from latency and from signal diffusion. But to be geostationary – to maintain a steady position over one spot on earth – satellites must be over 22,000 miles above it. Geostationary satellites don’t waste time traversing the 70% of the planet’s surface that is covered by water. And each GEO satellite can “see” a third of the planet.
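Those figures are not arbitrary: both the short periods of low orbits and the altitude needed for geostationarity fall out of Kepler’s third law. A quick sketch to check them, using only standard physical constants (nothing here is specific to any operator):

```python
import math

MU = 398_600.4418        # Earth's gravitational parameter, km^3/s^2
EARTH_RADIUS_KM = 6_378  # equatorial radius
SIDEREAL_DAY_S = 86_164  # one rotation of the Earth, in seconds

def period_minutes(altitude_km):
    """Orbital period of a circular orbit at the given altitude."""
    a = EARTH_RADIUS_KM + altitude_km
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

def geostationary_altitude_km():
    """Altitude at which the period matches one sidereal day."""
    a = (MU * SIDEREAL_DAY_S**2 / (4 * math.pi**2)) ** (1 / 3)
    return a - EARTH_RADIUS_KM

print(round(period_minutes(550)))          # ~96 minutes at 550 km (LEO)
print(round(geostationary_altitude_km()))  # ~35,786 km, about 22,236 miles
```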
A different kind of stable orbit is found still further out, at the five Lagrange points, two of which are a million miles away, and the other three much further. These orbits are stable relative to the Earth-Moon or Earth-Sun systems, and they are useful for various kinds of scientific observations and experiments. The ESA’s Xavier Lobao Pujolar says there is a race between the U.S. and China to place satellites at these locations.
Heading back closer to the ground, NTNs are also carried by High-Altitude Platform Systems, which are planes, balloons, and drones. For instance, Barry Matsumori, president and COO of Skydweller, describes how his company offers a low cost per unit of transmission because its aircraft – like a 747 but bigger – is relatively cheap to deploy and operate. It can also be geostationary, unlike LEO satellites.
A Multi-Polar World
The great majority of satellites in orbit today belong to U.S. companies. Starlink has around 7,000 in LEO, each circling the earth every 90 minutes, 340 miles above us. It has definite plans to deploy another 5,000, and may eventually launch as many as 30,000. Amazon’s project Kuiper only has two in orbit today, but plans to launch 3,200, of which half should be up by mid-2026. U.S. government agencies operate another 200 or so non-military satellites, including the 31 which provide the GPS system we all use in our digital maps.
It has escaped nobody’s attention that the U.S. has become a less predictable and less reliable partner – in NTNs as well as every other sphere. China has been building out its satellite constellation for years, but other countries are increasingly thinking about how to maintain access to NTNs. Eutelsat, a company owned mostly by European and Indian interests, operates around 700 satellites, and the EU plans to launch another 300 in the coming years under a programme called Infrastructure for Resilience, Interconnectivity, and Security by Satellite.
Saudi In Space
Saudi Arabia is keen to play a leading role in the development of this multi-polar world. Martijn Blanken is chief executive officer of Neo Space Group, an organisation established by the Kingdom’s Public Investment Fund. He says that Saudi Arabia cannot leapfrog Starlink and Kuiper, but the Kingdom maintains good relationships with almost all countries around the world, and NSG wants to become a preferred supplier of NTN-related services.
The Kingdom has deployed 17 satellites since 2000, and under its ambitious Vision 2030 programme it plans to spend over $2.1 billion on space initiatives by the end of the decade.
It will partner with other countries to build satellite constellations, and to ensure that strong, effective regulators allow fair access to space telecoms for everyone.
May 18, 2024
Can we have meaning as well as fun? Review of Nick Bostrom’s Deep Utopia
A new book by Nick Bostrom is a major publishing and cultural event. His 2014 book “Superintelligence” helped to wake the world up to the impact of the first Big Bang in AI, the arrival of Deep Learning. Since then we have had a second Big Bang in AI, with the introduction of Transformer systems like GPT-4. Bostrom’s previous book focused on the downside potential of advanced AI. His new one explores the upside.
“Deep Utopia” is an easier read than its predecessor, although its author cannot resist using some of the phraseology of professional philosophers, so readers may have to look up words like “modulo” and “simpliciter”. Despite its density and its sometimes grim conclusions, “Superintelligence” had a sprinkling of playful self-ridicule and snark. There is much more of this in the current offering.
Odd structure
The structure of “Deep Utopia” is deeply odd. The book’s core is a series of lectures by an older version of the author, which are interrupted a couple of times by conflicting bookings of the auditorium, and once by a fire alarm. The lectures are attended and commented on by three students, Kelvin, Tessius, and Firafax. At one point they break the theatrical fourth wall by discussing whether they are fictional characters in a book, a device reminiscent of the 1991 novel “Sophie’s World”.
Interspersed between the lectures are a couple of parables. One is told in letters from Feodore the Fox to his Uncle Pasternaught. Feodore and his porcine friend and mentor Pignolius are in a state of nature, and groping their way towards an agricultural revolution. Despite this, Feodore’s letters are written in a highly educated, erudite style, and he has a decent grasp of the scientific method, using data and experiments.
The other parable concerns Thermo Rex, a domestic space heater whose very rich owner dies, and dedicates his large fortune to its maintenance and well-being. This causes the heater to be upgraded and granted superhuman intelligence, and also consciousness. Despite the echoes of terrifying dinosaurs in its name, it refrains from intervening in human life.
An assumption and a swerve
So much for the structure; what about the content? Two of its most striking features are a huge and controversial assumption, and a huge swerve.
The assumption is that in the foreseeable future we will find ourselves in what Bostrom calls a “solved world”, which is “technologically mature”. This means that all significant scientific problems have been resolved, and humanity is calmly spreading out into the cosmos, its population expanding exponentially as we go. We enjoy enormous abundance, and pretty much all sources of conflict have been removed. The central project of the book is to determine whether this state of affairs would be enjoyable for humans (or post-humans), and whether our lives could be meaningful.
(Personally, I find this assumption implausible. In the past, every time we solved a challenge it revealed several new ones, and although the past is an unreliable guide to the future, I strongly suspect this pattern will continue until we discover who really constructed this particular simulation. Hence I prefer Kevin Kelly’s idea of Protopia to the notion of Utopia. A protopia is a situation in which everything is very good, and day by day it keeps getting a bit better.)
Shallow and deep redundancy
Bostrom’s first task is to decide whether in a solved world, humans and post-humans will be redundant. He makes the helpful distinction between shallow and deep redundancy. In shallow redundancy, there are no jobs for humans because machines can do everything we do for money cheaper, better and faster. He suggests that certain jobs could not be automated if consumers wanted the practitioner to be conscious, and other jobs might require the person doing them to have moral status. However, it would become impossible for humans to hold down even these recondite jobs if conscious machines arrive. Nevertheless, in shallow redundancy, humans can live worthwhile and indeed meaningful lives being creative, having fun, and doing work that they enjoy but are not paid for.
In a state of deep redundancy, there are no tasks, including pastimes, that it is worthwhile for humans and post-humans to undertake. In this situation, AI makes leisure activities like shopping, gardening, browsing and collecting antiques, and exercising feel pointless. Even parenting could become deeply redundant as robots could be better parents, and anyway parenting could not take up enough of a person’s (now very long) life to provide its purpose.
If the Utopians have what Bostrom calls plasticity and autopotency – the ability to modify their own mental states – they could escape despair from uselessness. But although they could abolish boredom, they could not abolish boringness. Bostrom cites the example of Peer in Greg Egan’s 1994 novel “Permutation City”, who has re-wired his brain to exult in the accomplishment of carving perfect chair legs, even after he has finished hundreds of thousands of them. Peer is not remotely bored, but his life is profoundly boring, and lacking meaning.
The meaning of life
And so the good professor heads off in search of the meaning of life. Spoiler alert, this is where the huge swerve happens: he does not provide it. To be fair, he gives us advance warning, in what is for me one of the book’s best passages: “Asking someone the meaning of life is like asking their recommendation for shoe size. This is especially clear if we entertain the radical possibility that we are not in a simulation.” (Channelling Douglas Adams, he adds that the best shoe size is ten.)
The lens which Bostrom chooses for his analysis of meaning is a theory developed by a South African philosopher, Thaddeus Metz. This theory stipulates that in order to be meaningful, a life should follow an arc of overall improvement, and include elements of originality and helping others. It is an objectivist theory, which means that meaning cannot simply be what each of us decides we want it to be. Subjectivist ideas of meaning could be satisfied by simply tweaking your psychology, and could include the kind of life which the American legal scholar Richard Posner warned us about: “brawling, stealing, over-eating, drinking and sleeping late.”
For Metz, a meaningful life must also have an encompassing, transcendental purpose: it should absorb a lot of a person’s time and energy, and it should serve a purpose beyond their mundane lives. But Bostrom spares himself the problem of giving a definitive answer about the meaning of life by having his Dean abruptly terminate his final lecture. His students comment that the answer “got lost in the literary upholstery.”
Upside potential, and jokes
Given that Bostrom’s avowed reason for writing “Deep Utopia” was to alleviate some of the doom and gloom surrounding AI at the moment, and perhaps offset the alarm raised by his earlier book, it is frustrating that it lacks much description of the technology’s upside potential. His own 2008 “Letter from Utopia” demonstrates that he is perfectly capable of providing it: “There is a beauty and joy here that you cannot fathom. It feels so good that if the sensation were translated into tears of gratitude, rivers would overflow.”
Instead we are left with the jokes and the epigrams, and for me at least, these are worth the price of admission. Even if many of us are presently doomed to be “homo cubiculi”, our species shows promise: “Between the sunshine of hope and the rain of disappointment, grows this strange crop we call humanity.” One of our best features is our capacity for aesthetic appreciation. With enough of that, “a duck’s beak can fascinate for weeks. Without it we are like the patrol dogs at the Louvre.”
February 23, 2024
Artificial Intelligence and Weaponised Nostalgia
The first political party to be called populist was the People’s Party, a powerful but short-lived force in late 19th-century America. It was a left-wing movement which opposed the oligarchies running the railroads, and promoted the interests of small businesses and farms. Populists can be right-wing or left-wing. In Europe they tend to be right-wing and in Latin America they tend to be left-wing.
Populist politicians pose as champions for the “ordinary people” against the establishment. They claim that a metropolitan elite has stolen the birthright of the virtuous, “real” people, and they promise to restore it. At the heart of their political offer lies nostalgia, and opposition to change. Ironically, the populists themselves are almost always members of the same metropolitan elite that they excoriate.
They espouse what they call traditional values, including allegiance to established religious and social norms. They sneer at social progress, and belittle attempts to improve the conditions of oppressed and under-privileged groups. In particular, they allege that immigrants are being favoured over local people, and are queue-jumping to obtain better social services, especially housing. They are authoritarian and illiberal, and they select minorities, like Jews or gay people, as a common enemy for their supporters to rally against.
These days, populists rarely use the word to describe themselves. They are demagogues, offering simplistic solutions which cannot cure any of society’s ills. They deride experts and shun evidence-based policy making. Many of them are brazen liars – pure political entrepreneurs who will adopt whatever slogan wins votes. Some are ideologues who genuinely believe in the policies they promote.
Whatever their orientation, their fundamental dishonesty means they are bad for democracy. Once in power they move quickly to neuter possible sources of opposition. Judges become “lefty lawyers” who frustrate “the will of the people”. They undermine, take over, or simply abolish media organisations which report their misdeeds. They restrict the right to protest, and lock up people who dare to speak out. They are generally corrupt, and appoint friends and allies to important positions, even when they are woefully inept.
Some populists – like Venezuela’s Hugo Chavez – remain in power long enough to die peacefully of natural causes. More often, they face one of two fates: disgrace, or disastrous war. This is because they pursue the wrong solutions to the wrong problems.
The American archetype of the populist politician whose career ended in disgrace is Joe McCarthy. At the beginning of 1950 he was an unknown senator from Wisconsin, but that year he shot to fame with a speech in which he claimed to have a list of 205 communist party members working in the State Department. Despite lying about his income and his military service, he quickly became one of the most powerful Senators, and he held a series of Congressional hearings which ruined many careers.
After a few years the American public tired of his bullying, his lies and his tantrums, and in December 1954 the Senate voted to censure him. McCarthy remained a Senator for two more years, but he was a diminished and disgraced figure. He died in 1957, a drunk and a heroin addict, and President Eisenhower quipped that McCarthy-ism had become McCarthy-wasm.
The classic example of the populist whose career ended in disastrous war is Adolf Hitler. Because they offer bad solutions to the wrong problems, populists need scapegoats to blame for the deteriorating economic and political environment. If these scapegoats are foreign, so much the better, but this logic drives countries to war.
The forces that lead voters to support populists are poorly understood. The most popular explanation is economic hardship, and indeed this played a role in the rise of Nazism in the 1930s. But economic hardship is rarely the principal cause. With the Nazis, Germany’s sense of injustice after the First World War was more important than simple economics.
Fast forward to today: many people think that the financial crash of 2008 had such a deleterious effect on household incomes that it explains the current wave of populism. It is true that many of England’s Brexit voters live in deprived areas in the north of the country, but the south is more populous, and Brexit voters there were typically comfortably-off, older, and often rural. Donald Trump has three major constituencies: less-educated whites, born-again Christians who are willing to overlook Trump’s egregious moral defects because he is rolling back the country’s abortion laws, and wealthy people who equate progressive policies with tyrannical socialism.
The current wave of populism is less about economic deprivation than it is about dislike of change. Change is always uncomfortable, and rapid change especially so. In the last few decades, societies all around the world have changed rapidly. Women have entered the workforce in huge numbers and made progress towards equality of opportunity and economic freedom – although obviously there is a long way to go. Overt racism is now frowned upon in most societies, even if ethnic minorities remain significantly disadvantaged. Homosexuality has gone from illegal, to disapproved of, to celebrated in just a few decades.
Many of the people who benefited from the previous state of affairs are happy to describe these changes as positive – in theory. In practice, some of them find the changes threatening, and this makes them susceptible to claims that family life is breaking down because both parents are working; that immigrants are queue-jumping, and poised to replace the “indigenous” population (whatever that means); and that transgender women wanting to use female toilets are often rapists.
It is more palatable to believe that populism stems from economic disadvantage than from these regrettable notions, but we should not under-estimate the fear of change. You cannot defeat an ideology if you mis-diagnose its causes. Populism is the most important ideology in today’s political landscape because it can do so much harm. It could lead to war between the US and China, for example, which would be utterly disastrous.
What does all this have to do with artificial intelligence? In the coming decades, AI will usher in more economic, social and political change than humanity has ever experienced before. In a few decades (a few years, according to some well-informed people) we will have machines that can do any job faster, cheaper and better than a human. The end of wage slavery should be very good news, but only if we devise an economic system that makes it beneficial for everyone. Some time after that we will have superintelligence, at which point absolutely everything about the nature of being human will change.
These changes will be exciting for some, and uncomfortable for others. We will need to be clear-eyed about the risks and rewards of these changes, and we will need honest, intelligent politicians who understand what is at stake. Purveyors of weaponised nostalgia are not the leaders we need, and continuing to elect them could turn out to be the single biggest mistake our species ever makes.
Government regulation of AI is like pressing a big red danger button
Imagine that you and I are in my laboratory, and I show you a Big Red Button. I tell you that if I press this button, then you and all your family and friends – in fact the whole human race – will live very long lives of great prosperity, and in great health. Furthermore, the environment will improve, and inequality will reduce both in your country and around the world.
Of course, I add, there is a catch. If I press this button, there is also a chance that the whole human race will go extinct. I cannot tell you the probability of this happening, but I estimate it somewhere between 2% and 25% within five to ten years.
In this imaginary situation, would you want me to go ahead and press the button, or would you urge me not to?
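(For readers who like to see the trade-off made explicit, here is a minimal sketch in Python of the expected-value arithmetic lurking in the button question. All the utility numbers are illustrative assumptions of mine, not figures from this post; the point is simply that the answer flips depending on how heavily you weight extinction against prosperity.)

# A minimal expected-value sketch of the Big Red Button question.
# All utility numbers below are illustrative assumptions, not figures from the post.

def expected_value(p_extinction, utility_prosperity, utility_extinction):
    """Expected utility of pressing the button, for a given extinction risk."""
    return (1 - p_extinction) * utility_prosperity + p_extinction * utility_extinction

# The post's stated risk range is 2% to 25%.
for p in (0.02, 0.25):
    # Treating extinction as merely very bad (a finite downside)...
    moderate = expected_value(p, utility_prosperity=100, utility_extinction=-1_000)
    # ...versus catastrophically bad (a much larger downside).
    severe = expected_value(p, utility_prosperity=100, utility_extinction=-100_000)
    print(f"p={p:.0%}: moderate downside EV = {moderate:,.0f}; severe downside EV = {severe:,.0f}")

At a 2% risk the "moderate downside" valuation still favours pressing the button, while the "severe downside" valuation already forbids it; at 25% both say no. Which column you believe is exactly what divides the hand-raisers from the rest of the audience.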
I have posed this question several times while giving keynote talks around the world, and the result is always the same. A few brave souls raise their hands to say yes. The majority of the audience laughs nervously, and gradually raises their hands to say no. And a surprising number of people seem to have no opinion either way. My guess is that this third group don’t think the question is serious.
It is serious. If we continue to develop advanced AI at anything like the current rate, then within years or decades someone will develop the world’s first superintelligence – by which I mean a machine which exceeds human capability in all cognitive tasks. The intelligence of machines can be improved and ours cannot, so it will go on, probably quite quickly, to become much, much more intelligent than us.
Some people think that the arrival of superintelligence on this planet inevitably means that we will quickly go extinct. I don’t agree with this, but extinction is a possible outcome that I think we should take seriously.
So why is there no great outcry about AI? Why are there no massive street protests and letters to MPs and newspapers, demanding the immediate regulation of advanced AI, and indeed a halt to its development? The idea of a halt was proposed forcefully back in March by the Future of Life Institute, a reputable think tank in Massachusetts. It garnered a lot of signatures from people who understand AI, and it generated a lot of media attention. But it didn’t capture the public imagination. Why?
I think the answer is that most people are extremely confused about AI. They have a vague sense that they don’t like where it is heading, but they aren’t sure whether to take it seriously or dismiss it as science fiction.
This is entirely understandable. The science of AI got started in 1956 at a conference at Dartmouth College in New Hampshire, but until 2012 it made very little impact on the world. You couldn’t see it or smell it, and crucially, it didn’t make any money. Even after the Big Bang in 2012 which introduced deep learning, advanced AI was pretty much the preserve of Big Tech – a few companies in the US and China.
That changed a year ago, with the launch of ChatGPT, and even more so in March, with the launch of GPT-4. Finally, ordinary people could get their hands on an advanced AI model and play with it. They could get a sense of its astonishing capabilities. And yet there is still no widespread demand for the regulation of advanced AI. No major political party in the world has among its top three priorities the regulation of advanced AI to ensure that superintelligence does not harm us.
To be sure, there are calls for AI to be regulated by governments, and indeed regulation is on its way in the US, China, and the EU, and most other economic areas too. But these moves are not driven by a bottom-up, voter-led groundswell. Ironically, they are driven at least in part by Big Tech itself. Sam Altman of OpenAI, Demis Hassabis of DeepMind, and many other people leading the companies developing advanced AI are more convinced than anyone that superintelligence is coming, and that it could be disastrous as well as glorious.
AI is a complicated subject, and it doesn’t help that opinions vary so widely within the community of people who work on it, or who follow it closely and comment on it. Some people (e.g., Yann LeCun and Andrew Ng) think superintelligence is coming, but not for many decades, while others (Elon Musk and Sam Altman, for instance) think it is just a few years away. A third group holds the bizarre view that superintelligence is a pure bogeyman that was invented by Big Tech in order to distract attention away from the shorter-term harms that they are allegedly causing with AI, by eroding privacy, enshrining bias, poisoning public debate, driving up anxiety levels and so on.
There is also no consensus within the AI community about the likely impact of superintelligence if and when it does arrive. Some think it is certain to usher in some kind of paradise (Peter Diamandis, Ray Kurzweil), while others think it entails inevitable doom (Eliezer Yudkowsky, Connor Leahy). Still others think we can figure out how to tame it ahead of time, and constrain its behaviour forever (Max Tegmark, and Yann LeCun again).
Technology evolves because inventors and innovators build one improvement on top of another. This means it evolves within fairly narrow constraints. It is not deterministic, and there is no law of physics which says it will always continue. But our ability to guide it is limited.
Where we have more freedom of action is in adjusting human institutions to moderate the impact of technology as it evolves. This includes government regulation. Advanced AI already affects all of us, whether we are aware of it or not. It will affect all of us much more in the years ahead. We need institutions that can cope with the impact of AI, and this means that we need our political leaders and policy framers to understand AI. This in turn requires all of us to understand what AI is, what it can do, and the discussion about where it is going.
Increasingly, acquiring and maintaining a rudimentary understanding of AI is a fundamental civic duty.