Reframing Superintelligence: Comprehensive AI Services As General Intelligence

ebook

Published January 1, 2019

1 person is currently reading
61 people want to read

About the author

K. Eric Drexler

11 books · 108 followers
K. Eric Drexler, Ph.D., is a researcher and author whose work focuses on advanced nanotechnologies and directions for current research. His 1981 paper in the Proceedings of the National Academy of Sciences established fundamental principles of molecular design, protein engineering, and productive nanosystems. Drexler’s research in this field has been the basis for numerous journal articles and for books including Engines of Creation: The Coming Era of Nanotechnology (written for a general audience) and Nanosystems: Molecular Machinery, Manufacturing, and Computation (a quantitative, physics-based analysis). He recently served as Chief Technical Consultant to the Technology Roadmap for Productive Nanosystems, a project of the Battelle Memorial Institute and its participating US National Laboratories. He is currently working in a collaboration with the World Wildlife Fund to explore nanotechnology-based solutions to global problems such as energy and climate change.

Drexler was awarded a PhD from the Massachusetts Institute of Technology in Molecular Nanotechnology (the first degree of its kind; his dissertation was a draft of Nanosystems). Dr. Drexler is currently (2012) an academic visitor at Oxford University. He consults and speaks on how current research can be directed more effectively toward high-payoff objectives, and addresses the implications of emerging technologies for our future, including their use to solve, rather than delay, large-scale problems such as global warming.

Community Reviews

5 stars: 0 (0%)
4 stars: 2 (66%)
3 stars: 1 (33%)
2 stars: 0 (0%)
1 star: 0 (0%)
Displaying 1 of 1 review
Henry
159 reviews · 75 followers
November 29, 2019
This post also appears on Medium and can be viewed here.

For AI historians and researchers, especially those interested in the far future of AI, this is probably the most significant work published in this space since Nick Bostrom's Superintelligence. Others have already summarised the work, so I won't try to duplicate the effort; Rohin Shah in particular has done an excellent job, and his summary is available to read here: https://www.alignmentforum.org/posts/...

Here is Rohin Shah's overview of the CAIS model:

“The core idea is to look at the pathway by which we will develop general intelligence, rather than assuming that at some point we will get a superintelligent AGI agent. To predict how AI will progress in the future, we can look at how AI progresses currently -- through research and development (R&D) processes. AI researchers consider a problem, define a search space, formulate an objective, and use an optimization technique in order to obtain an AI system, called a service, that performs the task.

A service is an AI system that delivers bounded results for some task using bounded resources in bounded time. Superintelligent language translation would count as a service, even though it requires a very detailed understanding of the world, including engineering, history, science, etc. Episodic RL agents also count as services.

While each of the AI R&D subtasks is currently performed by a human, as AI progresses we should expect that we will automate these tasks as well. At that point, we will have automated R&D, leading to recursive technological improvement. This is not recursive self-improvement, because the improvement comes from R&D services creating improvements in basic AI building blocks, and those improvements feed back into the R&D services. All of this should happen before we get any powerful AGI agents that can do arbitrary general reasoning.”
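To make the "bounded service" idea concrete for fellow developers, here is a minimal sketch in Python. It is my own illustration, not code from the report, and every name in it is hypothetical: a service is just a bounded artifact produced by an R&D-style step (search space, objective, optimiser), and that development step is exactly the thing CAIS expects to be automated over time.

```python
# Illustrative only: a toy rendering of the CAIS "service" idea.
# A service delivers bounded results for one task with bounded resources;
# the development step (search space -> objective -> optimiser) is separate,
# and in the CAIS picture is itself a service that can be automated.
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Service:
    """An AI system that performs one bounded task."""
    task: str
    model: Callable[[str], str]
    time_budget_s: float      # bounded time
    memory_budget_mb: int     # bounded resources


def develop_service(task: str,
                    candidates: Iterable[Callable[[str], str]],
                    objective: Callable[[Callable[[str], str]], float],
                    time_budget_s: float = 1.0,
                    memory_budget_mb: int = 512) -> Service:
    """The R&D step: search a space of candidate models and keep the one
    that scores best on the objective, packaged with explicit bounds."""
    best = max(candidates, key=objective)
    return Service(task, best, time_budget_s, memory_budget_mb)
```

Nothing in that sketch comes from the report itself; the point is only that development and application stay distinct, which is precisely the distinction Drexler leans on.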

My perspective as a software developer is to see this reframing (CAIS vs AGI) in terms of different teams working in different ways but towards similar end goals. For those familiar with Google's AI efforts: 'Google AI' (https://ai.google/) is the brand covering both their AI and computer science research; 'Cloud AI', or 'AI & Machine Learning Products' (https://cloud.google.com/products/ai/), is the brand for the AI services offered to developers (e.g. the Cloud Vision API); 'Google Brain' is a general AI research team; and DeepMind is a separate company (under the same Alphabet corporate owner as Google) also working on AI research, but with a slightly different commercial focus and set of goals from Google Brain.

Why mention these? The most memorable concept I took away from Drexler's CAIS model is that general intelligence doesn't need to look like an agent created by a team with the explicit goal of building a generally intelligent agent (arguably the goal of organisations like DeepMind, OpenAI, etc.). It might instead look like a product offering, what Jeff Ding has called the 'App Store model', or what Drexler calls cloud services (e.g. the Google Cloud Platform): a stage at which we have access to general intelligence 'as a service' because of the proliferation of AI services. The 'comprehensive' in Comprehensive AI Services thus maps onto the 'general' in Artificial General Intelligence.
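To see why a proliferation of narrow services can add up to something 'comprehensive', here is another purely illustrative sketch (again, my names, not Drexler's or Google's): the generality lives in the breadth of the catalogue and the routing between services, not in any single open-ended agent.

```python
# Illustrative only: "general intelligence as a service" as a catalogue of
# narrow, bounded services behind one entry point, rather than one agent.
from typing import Callable, Dict


class ServiceCatalogue:
    """A registry of bounded AI services, selected per task."""

    def __init__(self) -> None:
        self._services: Dict[str, Callable[[str], str]] = {}

    def register(self, task: str, service: Callable[[str], str]) -> None:
        self._services[task] = service

    def run(self, task: str, request: str) -> str:
        if task not in self._services:
            raise KeyError(f"no service registered for task: {task!r}")
        return self._services[task](request)


# Placeholder services standing in for things like translation or vision APIs.
catalogue = ServiceCatalogue()
catalogue.register("translate", lambda text: f"[translation of] {text}")
catalogue.register("label_image", lambda uri: f"[labels for] {uri}")
print(catalogue.run("translate", "Bonjour le monde"))
```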

There is a warning here about our tendency to anthropomorphise things that we don't fully understand, though Drexler leaves it implicit that this is indeed a warning. The research is careful and thoroughly academic, and the sheer number of interesting ideas in this one technical report can be intimidating at times. Like Superintelligence before it, this report deserves to be read and re-read. Drexler is light on philosophising, but the profound implications of this work should be clear to all interested researchers in the field.

“The emerging trajectory of AI development reframes AI prospects. Ongoing automation of AI R&D tasks, in conjunction with the expansion of AI services, suggests a tractable, non-agent-centric model of recursive AI technology improvement that can implement general intelligence in the form of comprehensive AI services (CAIS), a model that includes the service of developing new services. The CAIS model—which scales to superintelligent-level capabilities—follows software engineering practice in abstracting functionality from implementation while maintaining the familiar distinction between application systems and development processes. Language translation exemplifies a service that could incorporate broad, superintelligent-level world knowledge while avoiding classic AI-safety challenges both in development and in application. Broad world knowledge could likewise support predictive models of human concerns and (dis)approval, providing safe, potentially superintelligent-level mechanisms applicable to problems of AI alignment. Taken as a whole, the R&D-automation/CAIS model reframes prospects for the development and application of superintelligence, placing prospective AGI agents in the context of a broader range of intelligent systems while attenuating their marginal instrumental value.”

“The concept of AI-as-mind is deeply embedded in current discourse. For example, in cautioning against anthropomorphizing superintelligent AI, Bostrom (2014, p.105) urges us to “reflect for a moment on the vastness of the space of possible minds”, an abstract space in which “human minds form a tiny cluster”. To understand prospects for superintelligence, however, we must consider a broader space of potential intelligent systems, a space in which mind-like systems themselves form a tiny cluster.”

“Looking forward, I hope to see the comprehensive AI-services model of general, superintelligent-level AI merge into the background of assumptions that shape thinking about the trajectory of AI technology. Whatever one’s expectations may be regarding the eventual development of advanced, increasingly general AI agents, we should expect to see diverse, increasingly general superintelligent-level services as their predecessors and as components of a competitive world context. This is, I think, a robust conclusion that reframes many concerns.”
