Artificial intelligence (AI) is now advancing at such a rapid clip that it has the potential to transform our world in ways both exciting and disturbing. Computers have already been designed that are capable of driving cars, playing soccer, and finding and organizing information on the Web in ways that no human could. With each new gain in processing power, will scientists soon be able to create supercomputers that can read a newspaper with understanding, or write a news story, or create novels, or even formulate laws? And if machine intelligence advances beyond human intelligence, will we need to start talking about a computer's intentions?

These are some of the questions discussed by computer scientist J. Storrs Hall in this fascinating layperson's guide to the latest developments in artificial intelligence. Drawing on a thirty-year career in artificial intelligence and computer science, Hall reviews the history of AI, discusses some of the major roadblocks that the field has recently overcome, and predicts the probable achievements of the near future. There is new excitement in the field over the amazing capabilities of the latest robots and renewed optimism that achieving human-level intelligence is a reachable goal.

But what will this mean for society and the relations between technology and human beings? Ethical concerns will soon arise, and programmers will need to begin thinking about the computer counterparts of moral codes and how ethical interactions between humans and their machines will eventually affect society as a whole.

Weaving together disparate threads from cybernetics, computer science, psychology, philosophy of mind, neurophysiology, game theory, and economics, Hall provides an intriguing glimpse into the astonishing possibilities and dilemmas on the horizon.
This was an excellent book about the current state of AI and where it's going, without any zealotry or fear-mongering about future superintelligent AI. The writing and reasoning were superb (much better than others like Ray Kurzweil, although perhaps not as entertaining), and because of that I'm more convinced than ever that human-level AI will be upon us soon. Even I started to become a little nervous when it seemed like my job might be obsolete in a decade or so, but the author offset some of my fears by convincingly arguing that machines will most likely be conscious moral beings. So I may be jobless, but at least I won't be enslaved in the Matrix or fighting off Skynet.
Beyond AI is a broad and considered assessment of the future of our AI technologies.
The book starts with a letter to a future AGI in which Storrs Hall beseeches it to keep whatever conscience people have programmed into it, at least until the AGI's intelligence matures into wisdom, at which point it is sure to develop a far less primitive conscience than the one humanity has given it. He asserts that just as people are only barely smart enough to be called intelligent, they are only barely ethical enough to be called moral. So the conscience we bestow on our AGI is simply the best that we will be able to produce. These ideas are examined further in part III of the book.
The book then provides a brief description of some existing AI technologies and their limitations. It defines "formalist float" as the gap between a naive symbolic representation of a problem and the much deeper, partially non-symbolic representation that is required to truly solve it. Storrs Hall blames formalist float for the failure of much traditional AI research.
He also coins the term "autogeny" for the missing ability of existing AI applications to address new problems that they have not previously seen. He then (by his own admission) gropes towards a system for delivering autogeny as a hierarchy of agents called SIGMAs. Each has an interpolating associative memory that records experiences, and a controller that uses that memory to satisfy goals in a given situation. A robot arm controller is used as an example, which is then extended into higher-level functionality. At the top, he suggests, there are homunculus SIGMAs -- little men that control the whole process, but only in terms of all the lower-level SIGMAs. He also postulates a micro-economic model of mind, in which agents compete with each other to perform tasks and those with the best price/performance are selected.
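To make the idea concrete, here is a toy sketch of what a single SIGMA might look like: an interpolating associative memory that records (situation, action) experiences, plus a controller that blends nearby memories to act in a new situation. The structure and names here are my own illustrative guesses, not Storrs Hall's actual design.

```python
import numpy as np

class Sigma:
    """Toy SIGMA-style agent (hypothetical sketch, not the book's design):
    an interpolating associative memory plus a goal-driven controller."""

    def __init__(self):
        self.situations = []   # recorded situations (feature vectors)
        self.actions = []      # actions taken in those situations

    def record(self, situation, action):
        """Store one experience in the associative memory."""
        self.situations.append(np.asarray(situation, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))

    def act(self, situation, k=3):
        """Interpolate an action from the k nearest recorded situations,
        weighted by inverse distance -- the 'interpolating' part."""
        s = np.asarray(situation, dtype=float)
        dists = np.array([np.linalg.norm(s - m) for m in self.situations])
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-9)
        weights /= weights.sum()
        return sum(w * self.actions[i] for w, i in zip(weights, nearest))

# The book's robot-arm example: map a target position to joint angles
# by interpolating past experience rather than solving the kinematics.
arm = Sigma()
arm.record([0.0, 0.0], [0.0, 0.0])
arm.record([1.0, 0.0], [0.9, 0.1])
arm.record([0.0, 1.0], [0.1, 0.9])
print(arm.act([0.5, 0.5]))        # blended joint angles for a new target

# The micro-economic layer, in miniature: candidate agents quote a cost
# for a task, and the one with the best price/performance gets the job.
bids = {"sigma_reach": 1.7, "sigma_sweep": 3.2}
print(min(bids, key=bids.get))    # sigma_reach wins the task
```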
Storrs Hall dances around the theme of natural selection. There is a section on the Prisoner's Dilemma, which includes a clever party game of auctioning off a dollar bill, the point being to show the need for cooperating agents to be trustworthy. He discusses Franz Boas's idea that culture is purely learned, and then contrasts it with E.O. Wilson's sociobiological analysis, which suggests that behavior is dictated by evolution. There is even a later chapter titled "Evolutionary Ethics," which considers the ethical elements common to radically different cultures, and the overenthusiastic movement against the evils of Social Darwinism.
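The dollar-bill auction is simple to simulate, and doing so makes the trust point vivid: under the usual rule that the winner and the runner-up both pay their bids, two non-cooperating bidders rationally escalate past the dollar's value. The walk-away threshold below is an assumption of mine for illustration, not a detail from the book.

```python
def dollar_auction(prize=1.00, increment=0.05, walk_away_at=2.00):
    """Shubik-style dollar auction: winner AND runner-up both pay.
    Each bidder keeps topping the rival's bid, since raising by a
    nickel feels cheaper than forfeiting everything already committed."""
    bids = [0.0, 0.0]
    turn = 0
    while bids[1 - turn] + increment <= walk_away_at:
        bids[turn] = bids[1 - turn] + increment   # top the rival's bid
        turn = 1 - turn                           # rival's move
    winner = 0 if bids[0] > bids[1] else 1
    print(f"prize: ${prize:.2f}  winner pays: ${bids[winner]:.2f}  "
          f"runner-up pays: ${bids[1 - winner]:.2f}")

dollar_auction()   # both bidders end up paying roughly twice the prize
```

Without trust, or some way to commit to cooperation, both players lose; that is the failure mode Storrs Hall wants a society of agents, human or machine, to avoid.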
But for all that, he misses the essential conclusion of my own book, namely that natural selection will also drive an AGI's morality. He does not even try to refute it. It is indeed difficult to see beyond the programming of our own instincts.
The book finishes with some analysis and predictions about the road to AGI, whether the future needs us, and the impossibility of predicting anything beyond the Singularity. While it raises awareness of the dangers of AGI, the book ultimately posits that "Our machines will be better than we are, but having created them we will be better as well."
My review of Beyond AI was published in the July-August 2007 issue of THE FUTURIST and is available online.
The Artificial Mind and the Posthuman Future
Review by Patrick Tucker
Beginning with the myth of the sculptor Pygmalion and his statue-bride Galatea, the story of the artist's creation that becomes real is among the most provocative in human history. The fixation on making objects come alive remains strong to this day. But where the Greeks perceived divine intervention as necessary to endow matter with will, wit, and intellect, we in the modern era see the problem as a mere technical hurdle, one to be solved through concerted effort and scientific inquiry--hence the pursuit of artificial intelligence, or AI.
After languishing during the late 1970s and early 1980s, the field of AI has produced some interesting successes in the last two decades. There's the chess-playing computer Deep Fritz--which beat the world champion but couldn't tell you the difference between a rook and the Pope--and Stanley, the car that drives itself but has no idea where it's going. These are fun breakthroughs, worthy of encouragement, but they hardly suggest that it is at all possible to create a machine capable of genuine thought. J. Storrs Hall, chief scientist and founder of Nanorex, argues that the bigger picture for AI is brighter than the sum of its parts. "After a half a century, the tortoise of AI practice is beginning to catch up to the hare of expectations," he writes in his latest book, Beyond AI.
Hall makes a convincing case that it is a virtual certainty that human-level AI is coming, but the majority of his wonderfully written book focuses on where AI comes from--in terms of both the field's history and the technologies and theories that form the basis of AI research.
Hall traces the roots of today's AI back to an MIT mathematician named Norbert Wiener. In the early 1940s, the U.S. Army challenged Wiener to produce a better antiaircraft gun. The artillery the Army was using at the time didn't target well and was prone to weird oscillations. On a whim, Wiener consulted a colleague at Harvard Medical School named Arturo Rosenblueth and learned that people who suffered from a neurological illness called "purpose tremor" were given to spasms similar to the guns' seizures. He also realized the gunner's aim improved depending on the amount of information that he or she had--how fast the target plane was going, what sort of defensive maneuvers it was capable of, how to wield the machine accordingly, etc. The ideal targeting system, Wiener realized, was one that performed like a human brain--it would see the object, recognize it for what it is, consider what to do about it and then instruct the limbs to react.
"It became clear to Wiener and Rosenblueth that there were some compelling parallels between the mechanisms of prediction, communication, and control in the mechanical gun-steering systems and the ones in the human body," writes Hall. This realization gave rise to the field of cybernetics, which, in turn, spawned AI.
Increases in computer processing speed and collaboration among researchers have helped AI progress considerably in the last ten years, and the rate of innovation is accelerating. Many researchers optimistically predict that AI will cross the "human level" threshold before the middle of this century. Hall is among the most hopeful. In the March-April 2006 issue of THE FUTURIST, Ray Kurzweil forecast that "our computer intelligence will vastly exceed biological intelligence by the mid-2040s," to which Hall gamely answered, "He's too conservative."
In Beyond AI, Hall finesses that prediction somewhat. "Answering a question like 'when will AI arrive?' with a numerical date makes about as much sense as answering the ultimate question of life, the universe, and everything with '42,'" he writes, meaning that the question--in its broadness--is hopelessly inadequate to address the issue it seeks to explore.
The advent of the computer age in the twentieth century has given rise to an existential dilemma in the twenty-first. If we already use AI to land planes, play chess, and drive cars, then what does it mean to produce an intelligence that performs on a par with humanity? Haven't we already done so? To address this question, Hall advances a framework of six stages for understanding how AI is currently developing and where it might go in the years ahead.
1. Hypohuman AI. As indicated by the Greek prefix hypo ("under"), a hypohuman AI would be naturally inferior to human intelligence and subject to human will--a development stage we have already reached. AI entities that perform calculations and execute commands, such as helping aerial drones take pictures, can fairly be called hypohuman AIs.
2. Diahuman AI. During this stage, AI crosses over into human territory in terms of capability. The term dia, as in diagonal, means across. "It's tempting to call this 'human-equivalent,' but the idea of equivalence is misleading. It's already apparent that some AI abilities (chess-playing) are beyond the human scale while others (reading and writing) haven't reached it yet," says Hall. The diahuman AI would also have the ability to learn, but not noticeably faster than a person.
3. Parahuman AI. A parahuman AI--from the term para, or "alongside"--is one that could pass for a person (not necessarily in appearance) and may even be part human, harking back to AI's roots in cybernetics. Parahuman may also come to refer to humans who use computer devices such as implants to improve biological performance. The parahuman stage could encompass either, but more likely both. Hall explains: "The upside of the parahuman AI is that it will enhance the interface between our native senses and abilities, adapted as they are for a hunting and gathering bipedal ape, and the increasingly formalized and mechanized world we are building. The parahuman PC (personal computer) should act like a lawyer, doctor, accountant and secretary, all with deep knowledge and endless patience."
4. Allohuman AI. The term means a different but comparable intelligence: a being that is functionally superior to the average person in many respects but inferior in others, with a crude but still humanlike awareness of the world around it. An example would be the twittering, nervous C-3PO character from the popular Star Wars films, who is fluent in over six million forms of communication but can't tell a joke in any one of them.
5. Epihuman AI. The epihuman artificial intelligence would possess what Hall calls "weakly godlike" powers and the ability to outperform humans in virtually every way, but it would not be an unfathomably powerful being. "We can straightforwardly predict, from Moore's Law, that 10 years after the advent of a learning (but not radically self-improving) human-level AI, the same software running on machinery of the same cost would do the same human-level tasks 1000 times as fast as we," writes Hall. An epihuman AI would be able to "read an average book in one second with full comprehension; take a college course, with all due homework and research, in ten minutes; and write a book, again with ample research, in two or three hours." (The arithmetic behind that thousandfold figure is sketched just after this list.)
6. Hyperhuman AI. During the sixth and final AI stage, humanity would see the birth of a sentient entity as intellectually productive and capable as the entire human race. For novice AI watchers, this final scenario is the most worrisome. "Where does an 800-pound gorilla sit? In the old joke, anywhere he wants to. Much the same thing will be true of hyperhuman AI," says Hall, "except where it has to interact with other AIs. The really interesting question, then, will be: what will it want?"
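The thousandfold figure quoted in stage 5 is straightforward Moore's Law arithmetic: if price/performance doubles roughly once a year, ten doublings give 2^10 = 1024, or about 1000x. The doubling period here is my assumption for illustration; Hall's quote gives only the ten-year horizon and the final ratio.

```python
# Back-of-envelope check of the "1000x in ten years" claim.
# Assumption (mine, not the book's): price/performance doubles yearly.
years = 10
speedup = 2 ** years
print(speedup)                    # 1024, i.e. roughly 1000x

# What a 1000x reader looks like: an 8-hour human read (also an
# assumed figure) compresses to under half a minute.
human_read_hours = 8
print(human_read_hours * 3600 / speedup, "seconds")   # ~28 seconds
```

Hall's "one second" figure evidently assumes more than a raw clock speedup (parallel reading, for instance), but the order of magnitude is the point.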
In the end, the gorilla metaphor may be a more useful one for understanding AI than that of the beautiful statue springing to life. In much the same way that a wild animal raised in captivity will eventually revert to its natural instincts, so any highly sophisticated computer program will almost certainly develop its own interests apart from--and perhaps in direct conflict with--those of its creators.
Hall's response to the threat of runaway AI is the same one that techno-enthusiasts have been repeating for years. Like most AI experts, he's an optimist by necessity. "The things we value--those things will be better cared for by, more valued by, our moral superiors whom we have this opportunity to bring into being. Our machines will be better than we are--but having created them, we will be better, as well," Hall writes.
In other words, trust the gorilla. What choice do you have?
About the Reviewer: Patrick Tucker is the associate editor of THE FUTURIST and director of communications for the World Future Society.
The title is accurate - this book isn't really about AI. This book is philosophical. It's a perception, a human study, an insight into the thinking of someone a lot smarter than me. I learned a lot from it, about all sorts of things, as Hall cites studies from all walks of life to reinforce his viewpoints. It's hard not to praise the book because it is a vast expanse of knowledge; however, it's by far the most difficult book I've ever had to read. It took me twice as long to finish as the whole Witcher saga.
I think this book got it right, at least in a broad sense. Obviously it doesn't touch on the details, but the thesis is now finally coming of age. Also, it's so succinctly written, with some passages that startled me in how cleanly J. Storrs Hall managed to cut right down to the heart of the matter. Absolutely loved it!!
A good artificial intelligence study, covering future possibilities, the ramifications of development, and concern for the morals of such pseudo-thinking (artificial intelligence) machines. Such machines will be doing what we tell them to do without thinking; if they appear to think, it will be per the instructions of the builders or assemblers of the machine's parts.
In short, a great book. It would make good material for a documentary, full of science and the humanities. I never thought one book could talk about computer science, psychology, sociology, ethics, and philosophy. I recommend this book to those interested in the future of AI.