
The Coming Technological Singularity: How to Survive in the Post-Human Era


16 pages, Kindle Edition

First published June 7, 2010


About the author

Vernor Vinge

121 books, 2,622 followers
Vernor Steffen Vinge is a retired San Diego State University Professor of Mathematics, computer scientist, and science fiction author. He is best known for his Hugo Award-winning novels A Fire Upon The Deep (1992), A Deepness in the Sky (1999) and Rainbows End (2006), his Hugo Award-winning novellas Fast Times at Fairmont High (2002) and The Cookie Monster (2004), as well as for his 1993 essay "The Coming Technological Singularity", in which he argues that exponential growth in technology will reach a point beyond which we cannot even speculate about the consequences.

http://us.macmillan.com/author/vernor...

Ratings & Reviews



Community Reviews

5 stars: 66 (44%)
4 stars: 55 (36%)
3 stars: 22 (14%)
2 stars: 4 (2%)
1 star: 2 (1%)
Displaying 1 - 17 of 17 reviews
John
April 13, 2013
This is a classic paper written by Professor Vernor Vinge back in 1993 that discusses the technological singularity.

Professor Vinge defines the Singularity as "a point where our models must be discarded and a new reality rules" as a result of exponential growth of technology.

I highly recommend his story "True Names", which may be the first story written about the technological singularity, although perhaps Robert Heinlein's _The Moon is a Harsh Mistress_ deserves that honor.

A copy of the Vinge paper can be found on San Diego State's web site:
http://www-rohan.sdsu.edu/faculty/vin...

====================================================================
The Coming Technological Singularity:
How to Survive in the Post-Human Era

Vernor Vinge
Department of Mathematical Sciences
San Diego State University

(c) 1993 by Vernor Vinge
(Verbatim copying/translation and distribution of this
entire article is permitted in any medium, provided this
notice is preserved.)

This article was for the VISION-21 Symposium
sponsored by NASA Lewis Research Center
and the Ohio Aerospace Institute, March 30-31, 1993.
It is also retrievable from the NASA technical reports
server as part of NASA CP-10129.
A slightly changed version appeared in the
Winter 1993 issue of _Whole Earth Review_.


Abstract

Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.

From the human point of view this change will be a throwing away
of all the previous rules, perhaps in the blink of an eye, an
exponential runaway beyond any hope of control. Developments that
before were thought might only happen in "a million years" (if ever)
will likely happen in the next century. (In [4], Greg Bear paints a
picture of the major changes happening in a matter of hours.)

I think it's fair to call this event a singularity ("the
Singularity" for the purposes of this paper). It is a point where our
models must be discarded and a new reality rules. As we move closer
and closer to this point, it will loom vaster and vaster over human
affairs till the notion becomes a commonplace. Yet when it finally
happens it may still be a great surprise and a greater unknown. In
the 1950s there were very few who saw it: Stan Ulam [27] paraphrased
John von Neumann as saying:

One conversation centered on the ever accelerating progress of
technology and changes in the mode of human life, which gives the
appearance of approaching some essential singularity in the
history of the race beyond which human affairs, as we know them,
could not continue.

Von Neumann even uses the term singularity, though it appears he
is still thinking of normal progress, not the creation of superhuman
intellect. (For me, the superhumanity is the essence of the
Singularity. Without that we would get a glut of technical riches,
never properly absorbed (see [24]).)

In the 1960s there was recognition of some of the implications of
superhuman intelligence. I. J. Good wrote [10]:

Let an ultraintelligent machine be defined as a machine
that can far surpass all the intellectual activities of any
man however clever. Since the design of machines is one of
these intellectual activities, an ultraintelligent machine could
design even better machines; there would then unquestionably
be an "intelligence explosion," and the intelligence of man
would be left far behind. Thus the first ultraintelligent
machine is the _last_ invention that man need ever make,
provided that the machine is docile enough to tell us how to
keep it under control.
...
It is more probable than not that, within the twentieth century,
an ultraintelligent machine will be built and that it will be
the last invention that man need make.

Good has captured the essence of the runaway, but does not pursue
its most disturbing consequences. Any intelligent machine of the sort
he describes would not be humankind's "tool" -- any more than humans
are the tools of rabbits or robins or chimpanzees.



(continued on the website...)


Glass Half Full
March 4, 2017
Despite his error regarding an ultra-intelligent machine being built in the 20th century (which did not happen; even a 21st-century supercomputer is not an ultra-intelligent machine), this is an intriguing collection of ideas. This projected reality is still possible, despite the essay having been written by Vinge back in the '90s. Not much has changed since then.

Since the 2010s, or even the late 2000s, there have been rising political factions that are anti-intellectual and anti-fact, particularly when facts do not serve the status quo. That has always been the case in politics, but the advent of the internet has internationalized it much further. This is the main reason we now have hooligans among athletes and even academics, as well as journalists. Because of this, the vision of George Orwell is attempting to cement itself as our present.

Vinge is right; in one part of the essay, I believe he means that politics can obstruct innovation and progress. We can see this every day on various news outlets, even when we aren't trying hard to pay attention. Certain political realities possess the power to derail, or at the very least delay, this possible future. On one side is the rise of ultranationalist, nativist, authoritarian right-wing political leaders, who are almost always science-deniers longing for the return of the so-called Good Old Days. On the other is the paranoid extreme of left-wing politics which, unlike the original progenitors of liberal thought (liberal, which at the time of the Renaissance and perhaps prior to the World Wars was considered synonymous with scientific or reasonable), is unable to comprehend nuance and as a result outright demonizes certain innovations, such as genetic engineering, that defy the norms, traditions, and common notions of what is natural and what is human. Some even deny the effectiveness of such innovations in aiding the human condition and alleviating its suffering; with the spread of complacency in First World comfort, vaccines became poison to the uneducated. Many aspects of left-wing politics now demonize valid science in almost the same way that many aspects of right-wing politics do (although the perversion of science has always been less a partisan issue and more a fear-of-war issue).

However, behind all these realities, corporate influence can still provide the necessary backing for the development of the singularity. So perhaps my concern about politics and the lack of education in society getting in the way of possible technological progress, and contributing instead to its dangers, is rather moot. Maybe the end sum will always lead to a technological singularity. And I believe the Vernor Vinge of 2017 would still agree, even if the statement comes from an oft-miseducated twenty-year-old layman.

When it comes to the concept of transcending humanity, I truly think that other than a completely non-theistic interpretation of Buddhism (none of the Cambodian type), Nietzsche's, Schopenhauer's and Hegel's philosophies (or at least what basic understanding I have of their philosophies), the only other mechanism that one may use to distance oneself from the common spectrum of Good and Evil or to become an agent, essentially, of the rise of an amoral spectrum is through Intelligence Amplification. At this point, you are not only philosophically distanced from (and by distanced from, I mean have a more nuanced understanding of) the outdated cycles of human morals, you are also biologically distanced at the neurological level. Every sobering epiphany would feel commonplace. It could be the gateway for the prevalence of actual wisdom, or it could be the gateway for the justification of injustices due to impaired logic. I'm not so sure how an individual superhuman entity of this sort can possess the capacity to eliminate as much bias as possible or would even have the predilection to do so, as mandated by scientific integrity. Perhaps true wisdom and intellectual integrity will not be achieved even by transcending the current state of humanity.

Anyway, the true point is that I really liked this essay. And I am hoping that more philosophers take the time to be interested in transhumanism, while avoiding the detestable propensity for navel-gazing. Not so coincidentally, the lack of pragmatism is the regular pattern of behavior among futurists. That, I think, is the first step towards producing sinister technological elites -- taking the term technocrat one step too far.
Tim Weakley
July 18, 2010
Eerily prescient. Written in 1993, this essay discussing the rise of machine intelligence is very well done. If you can get your hands on it from one of the ebook sites, take the time to read it.
Claudiu Leoveanu-Condrei
January 1, 2026
Somewhat interesting. Stylistically, I expected more vigor from a sci-fi author.

I generally view the "unexpectedness" argument similarly: the precipitating event will likely be unexpected. I don't particularly like that most authors continue to popularize the concept of the "singularity." There could be better alternatives, especially if one keeps an open mind. For instance, from certain angles, I think the idea of an "attractor" is more ripe.

I also agree with the observation that "we cannot prevent the Singularity, that its coming is an inevitable consequence of humans’ natural competitiveness and the possibilities inherent in technology."

"And yet: we are the initiators. Even the largest avalanche is triggered by small things."
Michael
February 28, 2018
Written in 1993, it is prescient about modern AI. It is somewhat alarming to consider that (to paraphrase) "a superintelligent machine will not be humankind's 'tool' any more than humans are the tools of rabbits or robins or chimpanzees."
October 22, 2025
This essay is relevant today mainly because it popularized the concept of the “singularity” to refer to an AI so intelligent that it transforms our world (“where our old models must be discarded and a new reality rules”).

Vinge does raise some interesting implications of superintelligent AI for us mortal humans, but they’re all premised on a faulty analogy to human-animal relationships. (E.g., imagine if rabbits tried to confine humans and use our brains as tools…) The problem? Rabbits didn’t invent humans. Vinge’s AI speculation sadly falls apart under any scrutiny.

The more interesting part of the essay is the more overlooked: exploring the possibility of superhuman intelligence generated through human-machine interface. Vinge’s near-future ideas include allowing human-computer teams in chess tournaments and a product concept that sounds a lot like today’s “smart glasses”. But while cybernetics is progressing, enhancing our own brains to the point of superintelligence remains the stuff of science fiction. Vinge’s imagination runs a little wild here, more befitting a short story than a fact-based essay. (“What happens when pieces of ego can be copied and merged…?”)

For a more thoughtful and current take on super-intelligent AI and cyborgs, I’d recommend Chapter 10 of Pedro Domingos’ The Master Algorithm.
Esben
April 21, 2024
I am continually validated in the belief that modern rationalist thought emerging from the LessWrong sphere on the topic of AI risk is potentially net-negative, and reading pieces such as this reinforces that position. If intellectual thought on AI safety had developed from the foundations of pieces such as this and Kurzweil's work, instead of being insulated within the blogosphere, AI risk research and mitigation initiatives might be significantly farther along.

Here, Vinge identifies many key points to a superintelligence singularity, potential mitigations, and their limitations. Topics that have been discussed in imprecise terms for years on the forums would be moot if this was mandatory reading and humble reasoning was *really* the standard.
Don Skotch Vail
March 28, 2014
This is an interesting and very short paper, but Vinge seems stuck on the idea that machines will suddenly wake up one day, like some bad science fiction movie, and that this will hit humanity as a big surprise. It could instead be that we will be very aware of machines getting smarter and smarter, and that we will get better at making them compatible with human values. I seriously doubt that any machine will ever just "wake up" unintentionally. The problem of consciousness is far too hard to happen by accident.

Also, he seems to assume that the machines will necessarily be subject to the same foibles and vices that humans have. A machine will have whatever values we build into it. It need not be built to have the drive to conquer and the drive to protect its own ego, and so on, or at least so far I don't see why it would have to have those drives. In fact, it might be built to value human tranquility. I have read other articles where even these machine values could go horribly wrong - imagine a hyper intelligent and powerful machine furiously focused on maximizing your happiness - it could be pretty horrific. But Vinge doesn't address these issues head-on in this paper.

Vinge predicts the singularity will happen between 2005 and 2030. Keep in mind he wrote this in 1993.
I think the Singularity is probably going to happen, but so far it doesn't feel like it will happen in the next 15 years. It might still be 50 or 200 years off; there is just no way to know yet. But my main hope is that it happens slowly enough that we go along with it, enhancing ourselves so that we are not really left behind, or crushed, in the event. Vinge discusses this but seems to see it as a dark exit. I am not convinced it would be a bad thing.

Interesting paper and I recommend it.
njpolizzi
January 20, 2017
This is a very serious essay, one that could be called foundational. It is within anyone's reach; you don't need to be a scientist to understand it.

Vernor Vinge, of the Department of Mathematical Sciences, San Diego State University, presented it in 1993 at a symposium of experts and scientists organized by NASA. In total it is only 11 pages (freely available in English on the Internet).

When the subject of Artificial Intelligence ("AI": a machine that thinks), and even the more "controllable" (!!!???) Intelligence Amplification ("IA": a chip in the brain, for example), is treated and analyzed seriously at a scientific level (as Vinge does in his essay), the conclusion may well be (this is what "transpires" from Vinge's analysis, though he does not write it so categorically):

The moment at which artificial intelligence reaches the level of development of human intelligence, the so-called "Singularity Point" ..... will arrive, it is unstoppable, it will come very soon, it will not be compatible with human beings, and we will not be able to control it.

Although Vinge has also written very successful science fiction novels, this essay is not fiction, and I believe it should be read by everyone who can get hold of it.

Nestor
July 10, 2015
Not much was said about the survival part, nor about presuppositions of the ethical guidelines. Maybe the meta-golden rule comes close.

The ethico-theological conclusions seem unfounded though. One cannot and should not stay neutral regarding the fate of humankind.

From the ignorance of the present, it is not enough to conclude that we might be godlike tomorrow ... unless one already knows something about human essence, something he does not tell the rest of us.
Kathleen
December 31, 2014
Short essay; surprised to learn it was written in '93. It certainly feels relevant 20 years later! Anyway, an interesting discussion of IA (intelligence amplification) as an alternative to AI. Reminds me of human augmentation, something Engelbart talked about.
March 27, 2013
What if future enhanced government leaders, with extremely capable strong AI robots, decide they no longer need human citizens?
May 9, 2016
Very informative but pretty terrifying.
Caleb Kipkurui
February 2, 2023
An essay on artificial intelligence written in 1993, yet it seems so recent. An enlightening but frightening read. Will we achieve the singularity? And if so, what happens immediately afterwards?
