
The Scaling Era: An Oral History of AI, 2019–2025

An inside view of the AI revolution, from the people and companies making it happen.

How did we build large language models? How do they think, if they think? What will the world look like if we have billions of AIs that are as smart as humans, or even smarter?

In a series of in-depth interviews with leading AI researchers and company founders—including Anthropic CEO Dario Amodei, DeepMind cofounder Demis Hassabis, OpenAI cofounder Ilya Sutskever, MIRI cofounder Eliezer Yudkowsky, and Meta CEO Mark Zuckerberg—Dwarkesh Patel provides the first comprehensive and contemporary portrait of the technology that is transforming our world.

Drawn from his interviews on the Dwarkesh Podcast, these curated excerpts range from the technical details of how LLMs work to the possibility of an AI takeover or explosive economic growth. Patel’s conversations cut through the noise to explore the topics most compelling to those at the forefront of the field: the power of scaling, the potential for misalignment, the sheer input required for AGI, and the economic and social ramifications of superintelligence. The book is also a standalone introduction to the technology. It includes over 170 definitions and visualizations, explanations of technical points made by guests, classic essays on the theme from other writers, and unpublished interviews with Open Philanthropy research analyst Ajeya Cotra and Anthropic cofounder Jared Kaplan.

The Scaling Era offers readers unprecedented insight into a transformative moment in the development of AI—and a vision of what comes next.

248 pages, Hardcover

Published October 8, 2025

520 people are currently reading
1187 people want to read

About the author

Dwarkesh Patel

2 books · 29 followers

Ratings & Reviews



Community Reviews

5 stars: 68 (29%)
4 stars: 106 (46%)
3 stars: 41 (17%)
2 stars: 8 (3%)
1 star: 5 (2%)
Displaying 1 - 30 of 41 reviews
Gavin · Author of 3 books · 607 followers
November 12, 2025
is it sane to write a physically printed paper book about AI, watching as day by day the world changes out from under you (watching as if paralysed, as if in amber)? no

is it sane to rely on twitter or claude to learn about the field? also no

was writing this book cool? yes

The role of the term "Oral history" in the title is as a protective amulet, an admission of fragility and modesty. Immodestly, I still think this is the best introduction to the thing.

Delighted to be among Gwern and Nostalgebraist's first appearances in trade. Sadly we didn't get publisher approval in time for the best epigraph I have ever seen.

Note that there are two new interviews in here, with Ajeya Cotra and John Schulman.

Reviews:

* Desaraju gives a nice gloss of the basic themes.
* New Yorker. "asking A.I. researchers detailed questions that no one else even knows to ask, or how to pose."
* WSJ. "Both Mr. Patel and Mr. Summerfield describe our mushy brains using the digital metrics of bits and operations per second, which don’t apply and create a false hope of comparison." And to think this is how I learn that computational neuroscience is a fake field, or better that Church-Turing-Deutsch is false.
* Mindaugas Mozūras attempts to dismiss the whole book as Silicon Valley hype by vague reference to one purported error (that as of June 2024 models were writing 50% of new Google code, but, he thinks, only in the 37% of instances which were later accepted, i.e. so 18% of code overall). The original sentence is frankly unclear, but the graph obviously bears out my reading over his. You can claim they're lying but you do have to claim it.
* Remarkably bad and indicative review from the AI Policy Bulletin. "It doesn’t define terms or pause for context, it immerses you." ...There are 140 definitions in the book, including some I was made to add that other reviewers bash me for including (like "learning"). Indeed she also says "One of the most valuable components of the book is its glossary"??? To be fair, this review is as laudatory as a ~critical theorist is allowed to be toward any book which doesn't engage with critical theory.
Pete · 1,098 reviews · 78 followers
April 4, 2025
The Scaling Era: An Oral History of AI, 2019-2025 by Dwarkesh Patel and Gavin Leech is an excellent compilation of themed interviews with leading figures in the AI industry. Patel hosts a very fine podcast, and quotes from the interviews form much of the book. The book assumes reasonable familiarity with the current state of AI.

There are chapters in the book on Scaling, Internals, Safety, Inputs, Impact, Explosion, Timelines and more. Each subject is explored with questions and quotes from the people interviewed. The interviewees include Demis Hassabis, Ilya Sutskever, Eliezer Yudkowsky, Dario Amodei, cofounder of Anthropic, Leopold Aschenbrenner, Francois Chollet from Google, Tyler Cowen and others.

It’s fantastic to have interviews with these people while the scaling era of AI is progressing. The scaling era is the period in which AI training is being scaled up to create the remarkable large language models (LLMs). Future historians will find them a treasure trove. The excellent ‘Internet History Podcast’ did something like this for the birth of the commercial internet. However there, the interviews were conducted many years after the events that are discussed.

The book has a lot for anyone interested in AI. The discussions on how the models work are fascinating. The effectiveness of scaling is also discussed. Additionally, the risks of AI and the timelines are explored in detail. The interviews clearly have varying opinions on timelines and risk.

The Scaling Era is a really good read for anyone who has familiarity with AI and who wants a well-edited summary of what leading lights in the field think. Patel’s interviews are really good and the book has been well put together. It will also be a great book for people to revisit in ten years. They can compare what has happened to what people thought at the time.
Mindaugas Mozūras · 430 reviews · 260 followers
May 10, 2025
This is why I don’t live in San Francisco.

The Scaling Era is a collection of views about AI from top researchers and leaders, primarily focused on LLMs and scaling. While I liked learning more about the topic, two things took away from my enjoyment of this book:

1) The author's bias toward hyping up AI. In one instance, there was one sentence given to a more pessimistic viewpoint, followed by pages of hype about how AI might develop.

2) Instances of misrepresentation toward hyping up AI. For example, a quote from the book: "At Google, LLMs now complete 50 percent of code characters"; versus the quote from Google's paper: "with an acceptance rate by software engineers of 37% assisting in the completion of 50% of code characters". If I noticed a couple of these instances, it makes me think there were more.

While I do believe that the current iteration of AI has a good shot at being a transformative technology, the best we can do for AI to end up being transformative is to be realistic about it.
Jami Adarsh · 55 reviews · 4 followers
July 30, 2025
This book is an extraordinary deep dive into the most transformative period in AI history—the scaling era. Patel weaves together insights from leading voices to chart how language models evolved from “preschooler” GPT-2 to near-human GPT-4, driven by massive compute, data, and the scaling hypothesis.

The narrative explores breakthroughs in architecture, inference scaling, and economic stakes—where $100B+ annual investments make AI the next industrial revolution. Patel doesn’t shy away from the big questions: Do LLMs truly reason? What happens when AI accelerates its own development? The discussions on AGI timelines, power constraints, and societal impact are gripping, thought-provoking, and urgent.
Sebastian Gebski · 1,210 reviews · 1,392 followers
October 18, 2025
A set of interviews (or rather, pieces of sliced interviews) with some people one could consider key personas of the Gen AI revolution, performed by Dwarkesh Patel. What should you know about it?

1. First of all, it's not that far from other interviews performed by folks like Lenny Rachitsky that you can watch/listen to for free.
2. Second of all, the choice of topics is actually quite decent: scaling, evals, safety, or impact - these are all valid topics, worth a deep dive.
3. The choice of personas is rather decent - there are folks from DeepMind, Anthropic, etc. - all actual practitioners from companies that push this whole hype wave forward (but of course one could complain, e.g., where's Karpathy?).
4. As pretty much all the interviewees are strongly invested in the LLM hype, I missed a skeptic voice - a big part of these interviews was subtle marketing (they didn't advertise their services per se, but they were of course building up the hype).

In the end, I've learned far less than I expected. One of the reasons was that the interviewees were really cautious not to say too much. Unfortunately, that made the book rather dull. Maybe it will have more value in a few years (as input for a retrospective), or maybe I'd have enjoyed it more if I weren't such a podcast lover.

Anyway, it's hard to recommend it. Between 2 and 3 stars.
140 reviews · 7 followers
March 28, 2025
Probably hard to follow if you don't have at least some background knowledge about how LLMs work, but overall a great selection of viewpoints from people close to the frontier of AI development. Very interesting how confident many of the interviewees were in their mutually incompatible viewpoints. In general I found the views of those involved in actually training and evaluating the models more credible than those whose work was more speculative.

I work in the tech industry but on problems where AI tools have not been very helpful (too little data for them to understand the problem space), so I've been skeptical of LLMs up to now. But seeing how many challenges have fallen to increased scale has made me revise my opinion quite a bit, and the most recent models are just getting to the point where they're able to help me be more productive. Will be exciting to see how it all plays out over the next few years.
Grace · 117 reviews
August 9, 2025
A smattering of interviews. Surprised by how often Open Philanthropy was featured, relative to research scientists. Gathered some interesting ideas around type 1 (intuitive) vs type 2 (formalized, structured) thinking, and analogies between mechanistic interpretability and MRIs (you get a scan of the brain, but that scan might not be complete, nor do you completely understand it) -- the chapter on interpretability was certainly the most interesting.

Other opinions around economic impact, scaling laws, safety risk felt similar to what you would gather from following conversations/articles about AI online.
31 reviews
October 13, 2025
The book does a good job of catching the reader up on why we’ve seen an explosion in LLMs over the past few years, what the key questions are in making these models work, and what future directions AI can take. It’s mostly well done, but I think there was too much hyping up and talk of AI takeover scenarios that to me felt a little outlandish.

However, it did not cover things like a brief history of the major players in LLM technology, or the current explosion of agents and how every company is moving into building agents. It did not talk much about AI regulation, or the frameworks with which we can structure a field that is largely being driven by private industry. I know it’s a collection of interviews so it can’t do everything, but that’s what made the book feel very incomplete to me.
Gijs Limonard · 1,319 reviews · 34 followers
October 25, 2025
Disappointing to say the least, this is a stitched together hotchpotch of podcast excerpts on the subject of AI and how this technology can and will shape the world to come; you can safely skip this one.
Itay · 191 reviews · 15 followers
May 7, 2025
The book assumes a very high level of familiarity with the field. In general, each chapter consists of an introduction with explanations, followed by edited interview excerpts of Dwarkesh with the most senior people in the field of artificial intelligence. The conversation is always loaded with jargon, and although there are footnotes, at some point it becomes frustrating to read them all and then return to the text. In areas I know well I understood more; in the very technical areas I had to give up. A good book for those working in AI research and application, not as popular science.
Nicolai Mohn · 38 reviews · 4 followers
October 22, 2025
Had to read up in order to follow Dwarkesh's interviews with Sutton and Karpathy, where they challenge the scaling paradigm as the path to AGI.
David Haberlah · 190 reviews · 2 followers
March 30, 2025
No one has interviewed more pioneers ushering in the new era of artificial intelligence than Dwarkesh. And no one has interviewed these great minds with as many insightful questions. A highly enjoyable must-read for … anyone, really, who wants to understand why and how our civilisation is changing at breakneck speed.
Sarah Jensen · 2,090 reviews · 169 followers
August 19, 2025
Book Review: The Scaling Era: An Oral History of AI, 2019–2025 by Dwarkesh Patel
Rating: 4.8/5

Interdisciplinary Rigor & Public Health Relevance
Dwarkesh Patel’s The Scaling Era delivers a groundbreaking oral history of AI’s explosive evolution, weaving technical insights from industry leaders (e.g., Dario Amodei, Demis Hassabis) with urgent societal questions. While primarily a tech narrative, the book inadvertently highlights public health intersections—particularly in Chapter 6 (Impact), where Patel’s interview with Open Philanthropy’s Ajeya Cotra examines AI’s potential to optimize disease modeling and resource allocation. The discussion on input scarcity (Chapter 5) also resonates with global health equity debates, as Patel probes whether AI advancements will exacerbate or mitigate resource disparities.

However, the book’s focus on corporate and academic elites overlooks grassroots perspectives—a missed opportunity to explore how AI tools are actually being adopted in community health settings.

Emotional Resonance & Narrative Craft
As a public health professional, I was electrified by Patel’s unvarnished dialogues about existential risks (Chapter 4, Alignment). The candid admission from an unnamed Meta engineer—"We’re building gods without knowing their commandments"—left me oscillating between awe and dread. Conversely, the glossary’s playful definitions (e.g., "stochastic parrot" for LLMs) offered levity, mirroring my own whiplash between optimism and skepticism during AI-driven projects.

Yet the oral history format, while immersive, sometimes feels disjointed. The abrupt transition from Zuckerberg’s pragmatic capitalism to Yudkowsky’s apocalyptic warnings (Chapter 7) lacks narrative scaffolding, leaving readers to reconcile tonal whiplash alone.

Constructive Criticism

Strengths:
-Unprecedented Access: Curates unfiltered perspectives from AI’s founding generation, a goldmine for historians and ethicists.
-Pedagogical Clarity: 170+ definitions and visualizations demystify technical jargon without oversimplifying.

Weaknesses:
-Elite Bias: Overrepresents Silicon Valley voices, neglecting Global South researchers or public service implementers.
-Temporal Myopia: Focuses heavily on 2019–2023 breakthroughs, underplaying 2024–2025’s regulatory and ethical reckonings.

How I would describe this book:
- A Silicon Valley Confidential for the AI age—Patel’s subjects confess more to him than to Congress.
- If Studs Terkel had interviewed Alan Turing, this would be the transcript.
- Translates AI’s ‘black box’ into a page-turner—equal parts Sapiens and The Social Network.

Gratitude & Final Thoughts
Thank you to Stripe Press and Edelweiss for the review copy. Patel’s work transcends tech journalism; it’s a cultural artifact of a pivot point in human history. I plan to recommend Chapter 4's Alignment to my public health interns—not for answers, but for the right questions.

Rating: 4.8/5 (Docked slightly for narrative fragmentation, but indispensable for understanding AI’s societal inflection point.)
Blazej Banaszewski · 15 reviews
June 16, 2025
Rarely do I absorb a book in just two evenings, but Dwarkesh's The Scaling Era proved hard to resist. It captures Silicon Valley's frenetic bullishness around scaling language models, driven by the conviction that this pathway inevitably leads first to AGI, and soon thereafter to Superintelligence. The excitement is highly contagious. The result is a compelling sense that we stand on the brink of radical technological, societal, and economic transformation - one so imminent and profound that dismissing it suggests either deep internal resistance, ignorance, or sheer delusion.

I particularly enjoyed the conversations around the inner workings of LLMs. Here are two examples of questions being discussed in the book:
- We know next-token prediction leads to impressive capabilities - but precisely why? Ilya Sutskever’s perspective is beautifully simple: to effectively predict the next token, a model must learn a "world model," essentially internalizing the structure and context of the world that produced the sentences in its training set. This seemingly simple paradigm has far-reaching implications; predicting the next word in this review, for example, requires simulating the thought process of the writer. Although this notion is somewhat recognized within the research community, revisiting this idea has not taken away the awe that such a simple concept can lead to the creation of an intellect. 
- There is also substantial discussion in the book about cracking open the so-called black box. This effort is super important not only for enhancing our understanding and improving model performance but, more importantly, for alignment. We are nearing a stage where these models increasingly assist in their own advancement, collaborating with researchers to identify further algorithmic improvements. Ensuring we can reliably detect misalignment or interpret underlying intentions before these systems gain potentially risky levels of power is crucial. It worries me that the interpretability research has consistently struggled to keep pace with model performance, and if it doesn’t change, we are potentially heading for a dangerous situation. 

If you are curious about the LLM research and the future we are heading for, I wholeheartedly recommend this book and Dwarkesh’s podcast. The podcast was the source of conversations featured in this book but it also goes beyond machine learning into geopolitics and anthropology.
12 reviews
November 15, 2025
If you want to understand why the world is investing so much capital and energy into LLMs, this book will explain it.

Here’s my takeaway: because of the scaling hypothesis, which essentially says that the more training data and more GPUs you use, the better the model will get. And if we can scale up by orders of magnitude, then we will get AGI. But reaching each new major version of AI requires vast amounts of capital investment and energy. For instance, GPT-4 cost $500 million, 25K GPUs, and 10 MW of energy to train. GPT-5 likely cost $5 billion, >=100K GPUs, ~100 MW. To get to a GPT-6-level model we can assume it will take $50 billion with 1 GW of energy use. This is where you start to hit the limits of private capital. The revenues of big tech companies are in the tens of billions each year, but how much of that can realistically be dedicated to training runs? And if you do solve that and you’re not at AGI yet, then you need to plan for trillion-dollar training runs. To get a GPT-8-level model would require 1% of world GDP, which seems impossible until you remember that world spending on internet infrastructure was $1 trillion, and cumulatively we’ve spent $100 billion on LLMs so far. But the experts don’t seem worried about achieving the required capital, arguing that we’ll hit energy constraints before then.

A lot of critics of LLMs will call all of this a bubble. However, there is a good counterargument in the book pointing out that the dot-com bubble was a bunch of debt-financed companies with no chance at profitability buying IT infrastructure from Cisco. In the current situation, by contrast, we have the most profitable companies in the history of the world buying GPUs from Nvidia. But there is a rebuttal: if the next major model version (e.g. GPT-6) is a disappointment, then all of this could end, though the effects of the economic overhang could still proliferate for years after.

Either way there’s mass incentive to finish this project both for private and national security reasons and we can’t afford not to try.
68 reviews · 1 follower
April 16, 2025
There are decades where nothing happens; and then there are weeks where decades happen. In the world of AI, we are definitely in the latter.

Dwarkesh is the “historian” writing this book as an account of the history being made in these “decades” and what the prominent figures of this era think is going to happen next. The book tries to explain scaling, AGI, alignment and so on. In my view, however, its crucial accomplishment is that it chronicles (in writing) the thought process of the pioneers of this era so that we can use it as a touchstone for what may or may not happen next.

This would be a great resource in the near term for understanding if (and how much of) a bubble we are in, and in the long term for a view into the minds of the technologists of this (all-important) age.

PS- I cannot believe the depth of some of these questions Dwarkesh was able to ask in realtime on a podcast. My respect for him has massively grown 🙌🏻

A very well-written book for those in the know in the field. For beginners, it might be a little tough to grok everything, but it should be highly stimulating all the same.
218 reviews · 3 followers
July 6, 2025
Dwarkesh Patel pulls off a challenging task: cut twenty interviews to pieces, and reorganize them into coherent chapters on key AI topics. The author also enhances the book with helpful notes and glossaries to introduce the many concepts discussed. While this isn’t a textbook for learning AI, it serves as a good primer of current industry priorities.

As the author notes, it’s remarkable how effective scaling has been in advancing machine intelligence. Even more striking is that many interviewees believe scaling will continue and that AGI could be achieved within a decade, or even four to five years. Although I remain skeptical of the hype, I wouldn't have guessed scaling would work this far. So let's keep watching this exciting and scary show and see where scaling will take us.
44 reviews
April 6, 2025
a very interesting but incomplete intro to AI

I truly enjoyed The Scaling Era. It is one of the most approachable books on AI I have come across, while providing an overall sense of progress and development. It feels more like a series of vignettes and insights of the period reviewed that starts to give you a skeleton of how LLMs work and where they are going. However, due to the scale and scope of information in this space and the appropriate lack of technical details, you get a sense of how much more there is to learn and discover. Hopefully this book gives a sense of where to go next.
Thư · 9 reviews · 4 followers
May 12, 2025
A pretty informative and interesting book, capturing the conversations between the author and many leading figures in the field of AI. The first 3 chapters about scaling, evals, and model internals are my favourite b/c they are more technically grounded imo. The subsequent chapters feel like reading people's either very pessimistic or optimistic views about AI (with scenarios very much like what you would see in a sci-fi novel or series). All that aside, the book has raised a lot of good questions, not just in terms of technical aspects but also philosophical ones.
TK · 107 reviews · 95 followers
August 7, 2025
Interesting, especially the Gwern interview about "The Scaling Hypothesis" and how he showed his reasoning about this hypothesis.

My pet peeves are that sometimes the interviewer talks too much, asks simple questions, and the interviewee answers with one-line sentences without expanding on the idea. It would be much richer if there were more discussion of the topic, similar to the podcast interviews. But this was just a minor issue. Another is that I missed diversity in ideas and thoughts. Not because it's a fluffy, fancy word nowadays, but because the book felt very Silicon Valley-centric.
Kevin Whitaker · 327 reviews · 6 followers
November 24, 2025
Lots of insights, but I don't know how many people this book is actually for. It's very insight-dense but unstructured, so you have to be smart enough about the basics of AI to follow it and have enough interest in the topic to want to dive into the details -- but most people with that level of knowledge and interest are either already listening to Dwarkesh's podcast or are reading other things from the types of people interviewed for this book, in which case there's not much new.

The first section on the theory and history of scaling was by far the most interesting for me.
Nabeel · 30 reviews · 13 followers
August 17, 2025
I have read multiple articles, blogs, and podcast transcripts to understand AI, especially its scale and transformation from a business standpoint.

A book like this, covering multiple terminologies and views, gave me a solid foundation - the glossary also covers almost all of the terminology in the field.

Recommended for anyone who wants to grasp the principles behind LLMs / models, training, data centers, power consumption, global impact, and power plays between countries, etc.
Cedric · 40 reviews
October 31, 2025
Frontier AI is both frustrating and sensational because of how much we just don’t know. As I continue to listen and read more of what everyone says about AI and its development, the more I think of the entire technology as a mirror of ourselves: reflecting our priorities, demands, and guesses. As we look back at history and talk of how much we “think” we didn’t know, will we be able to say the same in the near future?
3 reviews
April 27, 2025
For someone technical but not too deep in the weeds of AI. If you've heard the words "LLM", "inference", "RLHF", etc. but aren't privy to stuff like unhobbling or the Chinchilla scaling laws, then this book's for you. If you studied CS then you're more than equipped to read this book and get a lot out of it.
Stephen Longfield · 69 reviews
May 11, 2025
~2.5 stars. I read the Kindle edition.

The book is open about being a collection of excerpts and transcripts from a podcast, and... well, it reads like a collection of excerpts and transcripts from a podcast. There are some interesting insights and interviews in here, but I found this style just didn't work for me.
NotAWiz4rd · 92 reviews
October 26, 2025
Disappointing. What could have been a really good book is instead just a bunch of lightly curated conversation snippets from Dwarkesh's podcast. The explanations and supplements are good, but it could have been so much better if Dwarkesh had taken the time to actually write a proper book.
Also, despite the name, the knowledge cutoff is November 2024.
Douglas Sellers · 512 reviews · 7 followers
October 31, 2025
A pretty good book offering a point-in-time snapshot of where we are with AI. Mostly interviews with people from the frontier labs.

I would recommend it for anyone who is OK with some technical details and wants to go one step deeper than a book like "The Worlds I See" (which is great) or the Ethan Mollick book, "Co-Intelligence" (which is also good).
Ben · 310 reviews · 4 followers
April 15, 2025
this book is just transcripts of his podcast grouped together by category. dwarkesh does a good podcast, and i have listened to almost every episode. i don't want to read the podcasts in book form. my fault because i didn't read up about it beforehand and thought it was going to be something else.
3 reviews
May 2, 2025
Great oral history book. It's worth reading even if you've already listened to/read all the interviews that were used for the book - the structure of the book provides a lot of context and it's interesting to see multiple answers to similar questions.
