The Next Renaissance: AI and the Expansion of Human Potential

An eye-opening discussion on the transformative impact of AI and how to prepare for a new future.

In The Next Renaissance: AI and the Expansion of Human Potential, acclaimed AI advisor Zack Kass presents an optimistic and compelling vision of how artificial intelligence will shape our lives. Drawing on historical context, cutting-edge advancements, and firsthand experience, Kass lays out how AI will become a collaborative partner in building a better, more creative, and more compassionate world.

Just as the original Renaissance revolutionized art, science, and society, today’s AI-driven Renaissance will redefine how we create, innovate, and flourish. Kass leverages his deep industry expertise to explain how this transformative technology will solve previously unimaginable challenges, presenting entirely new possibilities.

Inside the book, you’ll find:

● Practical strategies for navigating AI’s integration into everyday life.

● Clear guidance on future-proofing your career by emphasizing uniquely human skills.

● Insights into how AI opens up entirely new domains of knowledge by solving ambiguous problems.

● Approaches to overcoming the psychological and societal barriers to AI adoption.

● Concrete examples of AI amplifying human potential by saving time and resources and sparking creativity.

A must-read for anyone interested in the most powerful technological advancement since the steam engine. The Next Renaissance is here.

256 pages, Hardcover

Published January 13, 2026

About the author

Zack Kass

2 books · 10 followers
Zack Kass is a globally recognized AI advisor and the former Head of Go-to-Market at OpenAI, where he built the company's sales, solutions, and partnerships teams. Zack now advises Fortune 1000 companies and institutions on long-term AI strategy and transformation. He also serves as an Executive-in-Residence at the University of Virginia, exploring the socioeconomic impact of AI.

Ratings & Reviews


Community Reviews

5 stars
61 (43%)
4 stars
47 (33%)
3 stars
28 (20%)
2 stars
4 (2%)
1 star
0 (0%)
Jung
2,063 reviews · 49 followers
March 12, 2026
"The Next Renaissance: AI and the Expansion of Human Potential" by Zack Kass explores how artificial intelligence is reshaping the world and what this transformation means for human society. The book argues that humanity is entering a new historical era similar to earlier moments of radical change such as the Renaissance or the Industrial Revolution. What makes this moment unique is the rapid rise of AI systems capable of performing many forms of cognitive work that once required specialized human expertise. As the cost of running advanced AI systems drops dramatically, computational intelligence is becoming widely accessible. Kass suggests that this shift will reorganize societies in profound ways, bringing enormous opportunities alongside serious risks. The central message of the book is that the future of AI will not be determined solely by technology but by the choices humans make about how to develop and use it.

Throughout history, major technological breakthroughs have transformed societies by turning scarce resources into abundant ones. The European Renaissance accelerated intellectual and cultural development partly because the printing press made knowledge widely available for the first time. Later innovations such as the steam engine and electricity had similarly transformative effects. Steam power fueled industrial production and reshaped transportation, while electricity allowed people to extend work and communication far beyond daylight hours. In each case, the sudden availability of a powerful new resource forced societies to reorganize their economies, institutions, and daily lives. Kass argues that artificial intelligence represents a similar turning point, but instead of transforming physical power or information access, it is changing the availability of cognitive processing.

For most of human history, advanced analysis and complex problem-solving depended on a small number of highly trained specialists. Experts such as scientists, engineers, economists, and consultants devoted years to acquiring the skills needed to interpret data and make sophisticated decisions. Their knowledge was valuable partly because it was scarce. However, AI systems are beginning to replicate many of these intellectual tasks at incredible speed and very low cost. The economic trend behind this shift is striking. Running powerful AI models was once extremely expensive, but the cost has fallen rapidly in a short period of time. When resources become dramatically cheaper this quickly, industries reorganize and new possibilities emerge. Kass refers to this new condition as 'unmetered intelligence,' meaning that computational reasoning may soon function like electricity: a resource available instantly whenever it is needed.

The concept of unmetered intelligence has far-reaching implications. Human civilization already possesses enormous stores of knowledge in libraries, databases, and scientific archives, but knowledge alone does not guarantee solutions to complex problems. The real challenge has often been processing information quickly enough to draw meaningful conclusions. Human brains are limited by attention, fatigue, and memory capacity. AI systems do not share these biological constraints. They can analyze vast amounts of data simultaneously, identify patterns, and generate insights in seconds. This capability could unlock progress in fields that have long struggled with complexity, such as renewable energy storage, climate modeling, and personalized medicine. Problems that once seemed impossible may become manageable when analytic power is no longer scarce.

Despite these exciting possibilities, Kass emphasizes that the development of artificial intelligence is shaped by two distinct boundaries. The first is a technical threshold that determines what AI systems are capable of doing. Researchers continue to expand these capabilities rapidly, creating systems that generate images, videos, designs, and even sensory experiences based on simple prompts. However, important technical challenges remain unresolved. One of the most significant is the alignment problem, which concerns how to ensure that AI systems behave in ways that match human intentions. Teaching machines what actions to perform is relatively straightforward, but teaching them what actions to avoid is far more complicated. Engineers must constantly test and refine AI models to prevent unintended consequences.

The second boundary shaping AI development is social rather than technical. Even if a technology becomes capable of performing certain tasks, societies must still decide whether it should be allowed to perform them. These decisions involve laws, ethical standards, cultural values, and institutional policies. For example, an AI system might be able to evaluate job applications or diagnose medical conditions, but societies must determine when such decisions are appropriate and who is responsible if mistakes occur. Social limits vary widely across the world because different communities have different priorities and levels of trust in technology. As a result, the pace and direction of AI adoption will differ from place to place.

The gap between technological possibility and social acceptance creates tension during periods of rapid innovation. Technical capabilities often advance faster than laws, regulations, and public understanding. Corporations and research institutions may develop powerful tools long before societies establish clear rules governing their use. This imbalance raises questions about power and influence. The organizations that build AI systems are concentrated in wealthy regions with access to enormous resources, yet the effects of these systems extend globally. Communities that are most affected by AI technologies may have little say in how they are designed or deployed. According to Kass, the challenge of the coming decade will be managing this gap responsibly.

Another important theme of the book is the hidden cost of technological progress. Every major transformation throughout history has demanded significant resources and created new problems. Artificial intelligence is no exception. Training and operating large AI models requires vast amounts of electricity, often equivalent to the annual energy consumption of hundreds of households. Data centers must be cooled continuously, which consumes enormous volumes of water. In addition, the hardware needed to power AI systems relies on rare earth minerals that are often mined under difficult and dangerous conditions. These material demands reveal that digital technologies depend heavily on physical resources.

The global supply chains supporting AI development also create inequalities. Many of the raw materials used to manufacture advanced computer chips come from developing regions, while the large-scale computing facilities that run AI models are concentrated in wealthy countries. The economic benefits generated by AI therefore tend to flow toward corporations and societies with the resources to build and operate these systems. At the same time, the physical and environmental costs are often borne by communities that extract the necessary materials. This imbalance raises important ethical questions about who ultimately gains from technological abundance.

Beyond environmental and economic costs, Kass highlights the psychological and social challenges that automation may create. Work has long served as more than a source of income. It provides people with a sense of identity, purpose, and belonging. Many individuals derive meaning from contributing to their communities through their professions. If artificial intelligence begins to replace large numbers of cognitive jobs, millions of workers could lose not only employment but also the social structures that shape their daily lives. Retraining programs are often suggested as a solution, but they assume that everyone has equal access to education and that enough new roles will exist to absorb displaced workers.

Another concern involves the gradual erosion of human connection in an increasingly digital world. As more activities move online and interactions occur through screens, opportunities for face-to-face engagement decline. Community spaces that once brought people together may disappear as digital alternatives replace them. Kass warns that excessive reliance on virtual environments could weaken relationships and reduce the sense of shared experience that holds societies together. Preserving human connection will therefore require conscious effort as technology becomes more integrated into everyday life.

The book also examines how artificial intelligence could reshape several major sectors of society. Work is likely to undergo significant transformation as cognitive tasks become inexpensive and widely available. Organizations may place less emphasis on formal credentials and more emphasis on creativity, judgment, and interpersonal abilities. Evidence of this shift is already appearing as younger individuals use AI tools to produce research and innovations that once required advanced academic training.

Health care represents another field with enormous potential for change. AI could enable more personalized treatments by analyzing individual genetic profiles and medical histories. Drug discovery might accelerate dramatically as machine learning models simulate chemical interactions more efficiently than traditional laboratory experiments. Diagnostic tools powered by AI could also help bring medical expertise to remote areas where specialists are scarce.

Education may experience an even more dramatic shift. Traditional education systems rely heavily on standardized curricula and uniform testing methods, often treating students as if they learn in identical ways. AI technology could allow learning experiences to be tailored to each individual, adapting lessons to match personal interests, strengths, and challenges. Natural language interfaces may also remove barriers for people who find conventional technology difficult to use, making advanced tools accessible to broader populations.

Financial systems and everyday administrative tasks could become much simpler as AI automates complex processes. Activities such as tax preparation, financial planning, and navigating government bureaucracy currently require specialized knowledge or time-consuming effort. Intelligent systems could streamline these processes, potentially freeing people to devote more time to creative pursuits, personal relationships, or community involvement.

Despite these promising developments, Kass notes that the industries receiving the most attention from AI developers reflect the priorities of powerful institutions. Fields such as corporate productivity, finance, and education attract significant investment because they promise strong economic returns. Other areas that could benefit from AI, including environmental protection, agriculture, and the preservation of cultural knowledge, receive comparatively less attention. This imbalance reflects the interests of those funding technological development rather than the full range of human needs.

In response to these challenges, the book proposes several guiding principles for navigating the age of artificial intelligence. The first encourages people to spend more time in physical environments rather than exclusively in digital spaces. Engaging with nature and participating in community activities helps maintain the embodied experiences that technology cannot replace. The second principle emphasizes the importance of cultivating distinctly human qualities such as empathy, moral judgment, humor, and trust. As machines handle more analytical tasks, these relational skills become even more valuable.

Another principle focuses on the importance of learning how to learn. In a world where technology evolves rapidly, adaptability and curiosity become essential abilities. Instead of relying solely on specialized expertise, individuals must develop the capacity to explore new ideas, question assumptions, and continuously update their knowledge. The final principle encourages maintaining a sense of optimism. Kass argues that neither blind enthusiasm nor pessimistic fatalism will lead to positive outcomes. Instead, the future will depend on thoughtful decisions about how technologies are designed, regulated, and applied.

In conclusion, "The Next Renaissance: AI and the Expansion of Human Potential" by Zack Kass presents a thoughtful exploration of how artificial intelligence could transform society in the coming decades. The book describes a world in which cognitive processing becomes widely accessible, potentially unlocking solutions to some of humanity’s most difficult challenges. At the same time, it acknowledges the environmental costs, social disruptions, and ethical dilemmas that accompany such a profound shift. By examining both the opportunities and the risks of AI, Kass emphasizes that technological progress alone does not determine the future. Human choices about governance, resource allocation, and social priorities will ultimately shape how this new era unfolds.
Rishi
12 reviews
March 20, 2026
"AI is not just a technological revolution, but a cultural one."

This book had a great opening. Zack Kass opens The Next RenAIssance with insight from economist John Maynard Keynes and continues drawing comparisons between the late Middle Ages and the Renaissance. It definitely had me hooked right away, and the book had a lot of strong takeaways throughout. There were great allusions to history, psychology, and sociology, and reading through the introduction and Part I really felt like I was getting into something new. Overall, I’m glad I picked it up.

Kass does a great job explaining many of the questions people have about AI: jobs, adoption, bias, regulation, identity, and how all of this actually fits into society. The beginning and Part I were especially strong, as he walks through the history of AI from post-Turing developments all the way to IBM’s achievements and the 2017 Google paper Attention Is All You Need. I appreciated that a lot, because it gives real context that you don’t just get from social media discourse or casual conversations. Breaking down ideas like transformers, compute, and data brings in the technical knowledge needed in a world where not everyone is technical. He’s able to bridge that gap well for readers who are interested in AI but still see it as a "black box," as he mentions.

I also really liked how consistently he connected AI to psychology, not just technology. One of the book’s strongest ideas is that AI is not just a technological shift, but a cultural one. This is where his concept of the adoption gap really stood out to me. He frames it as the gap between the technological threshold, or what the technology is capable of doing, and the societal threshold, or what people are actually willing to accept. And that distinction really resonated with me: resistance and acceptance across industries are going to shape AI adoption just as much as the tools themselves. He supports this well with historical examples, whether it’s the Luddites resisting machinery or the idea that political protection and institutional interests can slow automation in certain professions. One example he gives is that a lawyer might be easier to automate than an oncologist, but the lawyer’s political shield may delay that shift much longer.

One of the book’s best ideas is the K-curve. AI won’t affect everyone equally, and the real dividing line will be agency: whether people use these tools to sharpen their thinking or outsource it entirely. Those who use AI well will benefit from its compounding effects, while those who let it think for them may gradually fall behind. That was probably one of the most memorable concepts in the entire book for me.

He also touches on a topic I’ve done a lot of research on before: p(doom), or the probability that advanced AI could lead to catastrophic or even existential outcomes for humanity. I liked that he treats p(doom) as somewhat of a distraction from more immediate, practical harms. It reminded me of The AI Con by Emily Bender and Alex Hanna, where they argue that focusing on dramatic hypothetical futures can distract us from the real harms AI is already causing today, whether that’s wrongful arrests from flawed automated systems or facial recognition tools that misidentify people and create real-world consequences. That was a smart inclusion, though I do think the book could have spent a little more time unpacking what p(doom) actually is and what its implications are.

The book is also strong when it shifts from theory into real-world applications, especially in the education chapter; even before I started reading, I kept wondering whether Alpha School and MacKenzie Price would come up, and I was excited to see they did. I’ve been following that space and her work for a while, so that section really landed for me. The idea that students in the 31st percentile could jump to the 71st percentile through highly personalized AI-supported learning is kind of insane. I also loved the Duolingo and Luis von Ahn discussion. It actually made me go look more into his background, and I was surprised to realize just how much he’s shaped digital learning and even tools like CAPTCHA.

That said, the book definitely has its drawbacks.

Not every application/industry chapter landed the way the education one did for me. While the education section was one of the strongest in the book, the finance chapter in particular felt noticeably weaker. A lot of it covered things that general readers probably already know, and compared to the rest of the book, it barely engaged with the AI side in a meaningful way. It felt more like a broad overview than a genuinely insightful chapter. The healthcare chapter had stronger ideas than finance, but even there, I found myself wanting more depth in parts.

Also, one of my main disagreements comes at the end of Chapter 3, where Kass suggests that people may be more worried about losing their identities than losing their livelihoods (when it comes to job automation with AI), because identity is so tied to work. He references examples like the 2024 longshoremen protests, where people were fighting to keep their jobs rather than simply asking for higher pay. I can understand that point to an extent, but I think it overreaches. For most people, the more immediate fear is still economic: will I have a job, will I be able to make money, and can I support myself? Whether AI takes jobs or creates new ones is a separate conversation, and the book already addresses a lot of that. My issue is more with the framing. It felt like a real observation about some people that then got generalized too broadly into a claim about human behavior as a whole.

Another issue I had was that Kass acknowledges how LLMs are overwhelmingly trained on Western norms and values, but parts of the book itself fall into the same Eurocentric framing. For example, when discussing education, he talks about how education was historically reserved for the wealthy or the nobility. While that was certainly true in many contexts, especially in Europe, it’s not the only framework through which education developed. Nalanda University in India, for example, was one of the world’s earliest major centers of higher learning and complicates that narrative.

I noticed something similar in the medicine chapter. Kass makes a strong point about precision medicine and how AI can help us move away from treating people based on population averages. That’s an interesting argument, but it also struck me that this framing is still rooted heavily in Western medicine. Systems like Ayurveda have, for thousands of years, emphasized a much more individualized approach to the body and treatment. I’m not saying one system is "better" than another, and I’m definitely not dismissing the value of evidence-based medicine. But it felt ironic for the book to critique the biases embedded in AI models while sometimes reproducing those same biases in its own historical framing. If AI is truly going to help us think more holistically, then the conversation should be more global too.

My biggest issue, though, was the repetition and editing. I got so tired of the printing press analogy. I understand why it’s useful, and it works well the first time or two, but by the end, it felt like it was being forced into every possible chapter. There were also repeated phrases and repeated examples that made the book feel less polished than it should have. The "two-headed monster" analogy showed up twice in almost the same way. There were also moments where the editing genuinely felt sloppy. Years were formatted oddly (for example, 1500 was written as 1,500), and near the end, the discussion about his father’s patient was essentially repeated across two pages in slightly different wording. It didn’t read like intentional emphasis: it genuinely felt like a drafting or editing error that should have been caught before publication. At one point, it honestly made me wonder if AI had written some of the sloppier parts and no one cleaned them up.

Still, despite those flaws, there are some genuinely powerful lines throughout the book. Here are a few that stood out to me:

"The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."


"We solved much of the economic problem Keynes foresaw, but we have not solved the happiness problem he left us."


"Immortality sounds like progress until you realize how much meaning depends on impermanence. Urgency is the invisible architecture of purpose."


"Narrative sets policies. Belief sets budgets. A leader’s horizon becomes a team’s boundary conditions."


Overall, I’d say this is a pretty solid read. It’s accessible, engaging, and does a good job painting a broad, thought-provoking picture of AI. I also think Kass has the right background for writing a book like this: a history background from Berkeley, deep involvement in tech, former Head of GTM at OpenAI, and now a public-facing role speaking on the future of AI. That combination gives the book a perspective I appreciated. It also reminded me of one of my favorite Steve Jobs quotes:

"I think part of what made the Macintosh great was that the people working on it were musicians, poets, artists, zoologists, and historians who also happened to be the best computer scientists in the world."


I’ll also admit this: from the moment I started reading, I kept wondering whether the book itself was AI-assisted, so I actually appreciated that he addressed it at the end, saying he used AI to help map the structure while humans kept control over the voice. Whether or not that’s fully true, I can’t know, but I’m glad he acknowledged it.

If you want a deeper, more thoughtful look at AI that goes beyond headlines and surface-level hype, this is worth reading. I’d recommend it, but I also wouldn’t take every statement as fact. If you can look past the repetition and editing issues, it ends up being a pretty worthwhile read!
3 reviews
January 24, 2026
The Book on AI That I Was Looking For

Historical comparisons to AI are definitely overdone, but Kass articulates the comparison to the Italian Renaissance very well with four focal points of change: Education, Finance, Healthcare, and Work. All four of these frameworks defined the exit from the Dark Ages in Western civilization around the 1500s with the inventions of...

- The printing press (Education and literacy)
- Central banking (Financing and the first central management of personal money)
- Scientific vs superstitious medicine (Healthcare and longevity)
- Individuality (Work and the breaking of serfdom)

The key difference? What took centuries then takes years or even months in this renAIssance.

I couldn't help but reflect on how this is impacting me personally. I'm a former public school educator, and Kass explains the problems in education perfectly; his account of how AI is changing the groundwork is spot on. Same with my personal finances, health, and work. It's all transforming so quickly that I can barely keep up.

So, if you want to pour yourself a nonalcoholic drink and enjoy a good book on AI, I strongly recommend Zack Kass's "The Next RenAIssance."
Tiffany Fabelo
2 reviews
February 3, 2026
Refreshing, optimistic perspective on the history of AI and how it is shaping our future. Kass wove the history of the Renaissance throughout and compared AI to other technological revolutions of the past. He also considered opposing viewpoints and ultimately ended on a positive note to readers: Be Human. Highly recommend this book!
115 reviews
May 9, 2026
We’ve had Zack speak at a few work events recently, and he’s very engaging in person. I appreciate his perspective on AI, but the book itself was very repetitive. Too much focus on the history of technological advances and not enough on predictions or warnings of what is to come in this new era.
333 reviews
February 24, 2026
Chapter 1: The Past, Present, and Future of Artificial Intelligence:

Core Purpose: Emphasizes that to responsibly shape AI's future, we must first understand its inspiring history—a story of bold human thinkers pushing machines from what they could do to what they might do. AI demonstrates the power of science, imagination, and optimism.

Early Hype, Setbacks, and “False Dawns” (1960s–1990s): Initial optimism fades as the 1960s pass without major breakthroughs; progress slows and funding dries up, leading to the first “AI winter.” Subsequent decades bring periodic “false dawns” and small advances.

Machine Learning & Deep Learning Revolution (2010s–Present): Moves away from rigid, hand-crafted rules to systems that learn patterns from massive datasets.

Present State & Key Ingredients: AI is now in a phase of rapid, practical integration—moving beyond research labs into everyday tools. Discusses the “technological threshold” (what AI can do) versus the “societal threshold” (what we allow it to do), with an adoption gap shaped by factors like energy, explainability, trust, alignment, and governance. Frames current progress as setting the stage for abundant, low-cost cognitive power.

Future Vision (“Unmetered Intelligence”): AI is heading toward becoming as ubiquitous, cheap, and invisible as electricity—delivering limitless intelligence at near-zero marginal cost. This “unmetered” era will remove “vicious friction” in work and life while preserving “virtuous friction” (human elements worth protecting).


Chapter 2: Conditions to the RenAIssance:

Core Purpose: Builds directly on Chapter 1 by shifting from what AI is and where it’s headed to what must happen for the full promise of “unmetered intelligence” and the broader RenAIssance to materialize. Kass uses historical analogies to argue that every major technological leap has required a precise alignment of material resources and societal/institutional scaffolding. The chapter frames an “adoption gap” between the technological threshold (what AI can already do) and the societal threshold (what society will allow and support). Closing this gap demands deliberate, coordinated progress on several non-negotiable conditions; none can be taken for granted.

Lesson for AI: Breakthroughs always rest on both physical enablers and the “hidden scaffolding” of trust, rules, and coordination. “Progress depends as much on access to chips, fabs, high-quality data, and energy abundance, as it does on guardrails, governance, and trust.”

The Two Thresholds & Adoption Gap:
Technological threshold: Already being crossed rapidly (thanks to the five enablers from Ch. 1: transformers, compute scaling, data, open-source, algorithms).
Societal threshold: The bigger bottleneck—society’s willingness and ability to integrate AI at scale.

Key Conditions / Pillars (must all align for the RenAIssance to move “from possibility to reality”):
Resource Access (the material foundation):
- Massive, sustained supply of compute (GPUs, specialized chips, new fabs—described as “the most fragile” because of geographic concentration, supply-chain risks, and energy demands).
- High-quality, diverse, ethically sourced data.
- Energy abundance (AI training/inference already consumes enormous power; future scale requires cheap, clean, plentiful energy).

Explainability: Models must be interpretable so humans understand why decisions are made (data sources, trade-offs, uncertainties). Without it, trust erodes and adoption stalls; a “black box” future risks a dystopian “panacea where everything works but we don’t know why.”

Alignment & Safety: Ensuring AI systems pursue human values and care about the consequences of their actions. Not just technical guardrails but a deeper match to collective human interests.

Governance: Clear rules, standards, and oversight mechanisms to steer development responsibly—balancing innovation with protection against misuse.

Coordination: Global alignment of rules, norms, and policies (AI doesn’t respect borders; fragmented regulation creates loopholes or arms races). International cooperation is essential.

Overall Tone & Outlook: Pragmatic optimism. Kass does not downplay the difficulty—“none of these conditions can be taken for granted”—but insists they are achievable with focused human effort.


Chapter 3: Costs of the RenAIssance:

Core Purpose: Every major technological leap carries real costs; the RenAIssance is no different. Kass categorizes them as inevitable (the unavoidable price of progress), avoidable (manageable with foresight), or intolerable (must be prevented).

Historical Framing: Draws parallels to past breakthroughs (writing, printing press, internet, nuclear power) showing how each reshaped cognition, social bonds, identity, and power structures—always with both gains and losses.

The Five Costs (examined at individual and societal levels):
-Shifts in thinking → Cognitive offloading risks intellectual atrophy (“idiocracy”).
-Shifts in connection → Potential erosion of deep human relationships and empathy.
-Shifts in self-definition → Dehumanization and identity displacement (e.g., purpose tied to work/creativity).
-Empowered bad acting → Malicious use by criminals or rogue actors.
-Existential risks → Misaligned superintelligence that could disregard human welfare.

Tone & Takeaway: Balanced, pragmatic, non-alarmist—realistic about challenges but insists they are navigable with human-centered values and governance.


Chapter 4: The Promise of the RenAIssance:

Core Purpose: Kass outlines the extraordinary upside if the conditions are met and costs are managed—AI as the ultimate expander of human potential.

Historical Framing: Just as the printing press didn’t merely improve books but transformed art, science, culture, education, finance, healthcare, and society at large during the first Renaissance, AI will deliver an even broader and far faster reconfiguration. AI acts as a catalyst to all technologies, amplifying every discipline simultaneously.

Key Promises (enabled by unmetered intelligence):
-Novel scientific discoveries at unprecedented speed and scale (small teams solving 50-year problems, e.g., DeepMind’s protein folding).
-Broader, more equitable access to life’s essentials (healthcare, education, financial services democratized for billions).
-Massive freeing of human time and cognitive load, redirecting effort toward creativity, deep relationships, and personal fulfillment.
-Dramatic expansion of what a single human life can encompass—widening potential, purpose, and flourishing for individuals and society.

Tone & Takeaway: Strongly optimistic and enthusiastic. Frames AI as humanity’s collaborative partner in building a better, more creative, and more compassionate world.


Chapter 5: Questions of the RenAIssance:

Core Purpose: Closes the foundational section of the book by candidly confronting the deep, unresolved questions that AI and the emerging RenAIssance inevitably raise—issues that cross ethics, ecology, law, culture, and philosophy and lack simple or universal answers.

Historical Framing: Parallels the printing press (which sparked heresy, censorship, and control-of-knowledge debates) and the internet (privacy wars, free-speech clashes, and truth vs. misinformation battles), showing how every major tech leap creates new frontiers of disagreement.

Key Questions / Sections (the “unfinished work of progress”):
-The North Star Problem: No single neutral global moral compass exists for AI alignment; cultural differences mean one society’s “good” is another’s taboo (e.g., Facebook content acceptable in the US flagged as blasphemy elsewhere).
-The Ecology Problem: Environmental costs of scaling compute/energy plus the broader “information ecology” challenge—AI’s ability to flood the world with synthetic content risks eroding shared truth and our “capacity to believe anything.”
-The Distribution Problem: How to equitably share the gains of unmetered intelligence amid massive economic and professional disruption.
-Social Constructs & Practical Dilemmas: Rethinking liability (who’s responsible when an autonomous car faces a trolley-problem choice?), copyright/personhood/privacy in an AI-augmented world, and biotech boundaries (designing proteins or editing genomes—is it advanced medicine or rewriting biology?).

Tone & Takeaway: Sober but constructive, stressing that wisdom, governance, and ongoing human dialogue are essential. These questions are not roadblocks but the necessary scaffolding for responsible progress.


Chapter 6: Education 2.0:

Core Purpose: Opens Part II’s sector-by-sector tour by showing how unmetered intelligence can fix one of society’s most expensive and broken systems—education.

Key Sections & Arguments:
-An Antiquated, Failed State: One-size-fits-all pacing, heavy memorization, overburdened teachers, high-stakes testing, and massive inequity produce disengagement, anxiety, and mediocre outcomes despite huge spending.
-Scholastic vs. Social Learning: Clean split—AI masters “scholastic” (facts, skills, drills, personalization for every learner 24/7); humans own “social” (empathy, collaboration, mentorship, character, inspiration). This bifurcation is the breakthrough.
-Teachers at the Center: Teachers evolve from lecturers/graders to high-value coaches and mentors; AI takes over lesson planning, differentiation, grading, and administrative load so educators can focus on relationships and motivation.
-Assessing the Test & New Classrooms: Ditch end-of-year standardized exams for continuous, AI-powered formative assessment. Classrooms become flexible, project-based, often global/virtual hybrid spaces where kids tackle real problems with AI partners.

Tone & Takeaway: AI doesn’t replace educators; it liberates and elevates them, making world-class learning accessible to every child and turning education into a driver of the broader RenAIssance.


Chapter 7: Widening Wall Street:

Core Purpose: Applies the RenAIssance framework to finance, arguing that unmetered intelligence can “widen Wall Street”—democratizing expert-level tools, literacy, and opportunities that have long been gatekept for elites and institutions, thereby expanding economic agency for billions.

Key Sections & Arguments:
-Failure by Design: Today’s financial system is deliberately opaque, complex, and exclusionary—high fees, jargon, restricted access, and low public literacy keep most people on the sidelines.
-The Solution: Literacy, Access, and Visibility: AI becomes a personal, always-on financial coach for everyone—delivering hyper-personalized education, sophisticated planning, real-time market insights, and radical transparency to level the playing field.

Tone & Takeaway: AI doesn’t replace human judgment but removes friction and exploitation so more people can thrive financially.


Chapter 8: Resuscitating Healthcare:

Core Purpose: Examines how AI can revive and transform one of the most dysfunctional and expensive systems in modern society—healthcare—shifting it from reactive, bureaucratic, and unequal to proactive, precise, and deeply human.

Key Sections & Arguments:
-The Paradox of Progress: Despite incredible medical advances (genomics, imaging, robotics), patient outcomes, satisfaction, and affordability have not kept pace.
-Precision, Prevention, and Presence: AI enables hyper-personalized (precision) medicine, predictive/preventive care at population and individual levels, and frees clinicians from paperwork so they can offer true human presence and care (Primary Care is where it all starts).
-Expanding Access: Democratizes world-class diagnostics, treatment planning, and ongoing monitoring for billions.

Tone & Takeaway: Hopeful and practical—AI doesn’t replace doctors but supercharges them, making high-quality, compassionate healthcare more accessible and effective for everyone.

Chapter 9: The Future of Work:

Core Purpose: Shows how AI is fundamentally different from all prior automation—finally targeting the “safe,” high-credentialed, knowledge-work jobs long considered untouchable—while reframing widespread anxiety as an opportunity for liberation, reinvention, and vastly expanded human potential.

Key Sections & Arguments:
-AI Is Coming Straight for the Top: Lawyers, analysts, marketers, engineers, doctors—elite, gated professions now face real displacement.
-Displacement, Augmentation, or Creation?: Three possible futures for every role; the winning path is embracing augmentation and actively creating new forms of work.
-Behavioral Adaptability: The single most important trait for thriving—humans who can flexibly partner with AI will flourish.
-The Liberating Upside: AI handles drudgery and cognitive load, freeing people for deeper presence, intuition, creativity, and human connection (roles that will become even more valued).


Chapter 10: Principles for Thriving in the Era of AI:

Core Purpose: Delivers the book’s concise, practical capstone—a short “what should I do now?” guide for individuals. Kass deliberately skips fast-changing tools, models, or tactics (AI evolves too quickly) and instead offers four timeless, high-level principles focused on mindset, behavior, and character to help anyone live, work, and lead successfully amid unmetered intelligence.

Key Sections & Arguments (the four guiding principles):
-Go Outside: Prioritize real-world, physical experiences, nature, and spontaneous in-person human connection over purely digital or synthetic environments—the tangible world supplies irreplaceable serendipity, friction, and depth.
-Learn How to Learn: Master adaptability and the meta-skill of rapid, continuous learning; specific knowledge matters less than the ability to acquire new skills as AI reshapes what is needed.
-Be Human: When intelligence becomes abundant and cheap, uniquely human traits—compassion, courage, empathy, presence, humor, creativity in relationships, and hope—become the most valuable differentiators.
-Lead with Optimism: Actively choose possibility, imagination, and constructive hope over fear, cynicism, or risk-aversion; optimism is the stance that fuels action and helps build the positive future.


Epilogue: A Last, Honest Story:

Key Content: Ten days before the final manuscript deadline, Kass realizes the book (after heavy investment and two ghostwriters) has missed its mark. In a last-ditch effort, he, a technical editor, and AI itself rewrite nearly the entire manuscript from scratch—proving the very principles of augmentation, adaptability, and optimism he has been advocating.
Lisa Christensen
368 reviews · 2 followers
April 30, 2026
There is a lot to like in this book. The optimistic angle is refreshing. But the chapters are uneven. There are places the author feels out of his depth. It also needed a very hard edit. There are entire sections that are repeated almost verbatim.

I admire wanting to put a courageous, positive vision in the world, but the book needed more work.
Sarah Cupitt
887 reviews · 46 followers
March 11, 2026
AI demands resources at staggering scales, displaces millions from work that gives life meaning, and raises questions about what makes humans valuable when thinking becomes cheap

notes:
- Two thresholds – technical and social – shape what happens next. And the gap between those thresholds will define the coming decade.
- The Renaissance transformed Europe between the fourteenth and seventeenth centuries, as artists rediscovered perspective, while philosophers and scientists challenged assumptions about the natural world. When the printing press was invented around 1440, it accelerated everything. Books became affordable and ideas traveled faster than ever before. Knowledge that had been locked in manuscripts for centuries was suddenly everywhere. This period is remembered as a great leap
- For most of human history, complex analysis required rare expertise from specialists with years of training. Their time was expensive. Problems that needed intense mental work went unsolved if the resources weren’t there. That constraint is dissolving fast. The economics tell the story. Running an advanced AI model cost around $60 per million processing units just months ago. Today the same work costs closer to $4. This kind of price collapse historically signals major disruption ahead, as industries reconfigure and previously impossible projects become routine.
- cognitive work that once meant hiring consultants or spending days in research can now happen instantly. For things like analysis, pattern recognition, drafting documents, and mathematical modeling the shift is from specialized services to a basic utility.
- The first limit is technical – what AI can actually do right now, and what it will be able to do soon. These capabilities are expanding rapidly. Text-to-image generation appeared just a few years ago. Now systems convert text descriptions into video, or three-dimensional models for manufacturing, even into scent profiles for perfume design.
- The alignment problem, for instance, asks how to ensure AI systems behave as intended rather than finding unexpected shortcuts. Teaching a system what not to do turns out to be harder than teaching it what to do.
- The second limit is social: what will societies actually allow AI to do? This involves laws, ethical frameworks, cultural norms, and institutional policies. It asks different questions than the technical ones. It doesn’t ask, for instance, if a system can make hiring decisions, but whether it should.
- Technology races ahead while social institutions struggle to keep pace. Laws get written for previous generations of capability.
- Communities impacted by AI rarely have meaningful input into how those systems get designed or deployed.
- The adoption gap looks entirely different depending on geography, wealth, and proximity to power.
- The factories powered by steam engines darkened skies with coal smoke. Electrification required damming rivers and stringing wire across landscapes. The question isn’t whether transformation costs something, but what specifically gets spent and who pays.
- Training a single large language model can consume electricity equivalent to what hundreds of homes use in a year. Data centers need constant cooling, which means vast amounts of water, often drawn from regions already facing water stress. The hardware itself requires rare earth minerals extracted under dangerous conditions.
- the materials needed for AI chips come from the Global South, while the energy-intensive computation happens in data centers concentrated in wealthier regions in the North. The benefits flow primarily to corporations and populations with resources to access and implement the technology. Resource extraction in one place funds computational abundance in another.
- Advanced AI chips depend on extreme ultraviolet lithography machines. Currently one company in the Netherlands manufactures these machines. This single-point dependency means that geopolitical tensions, natural disasters, or manufacturing problems could halt AI development globally. The technology might seem infinitely scalable in theory, but physical bottlenecks limit what actually gets built.
- What happens to human self-understanding when work disappears? Interviews with dockworkers facing automation revealed that their primary concern wasn’t money. It was belonging, tradition, continuity with the past. Work was more than just income for them. It gave structure to days, connection to community, a sense of contributing something that mattered.
- The third major cost involves what might be called dehumanization, though the term carries more precision than it sometimes gets. Children and adults alike spend increasing hours with screens rather than with physical environments or each other. Community gathering spaces disappear when face-to-face interaction declines. The observation that humans weren’t meant to live this way isn’t nostalgic romanticism. It names a genuine loss of embodied presence and direct relationship.
- Work changes fundamentally when cognitive tasks become cheap. Companies no longer compete primarily on how intelligent their teams are, since AI can match or exceed human analytical capacity in many areas. Competition shifts toward judgment, creativity, and relational skills. Evidence for this shift has already appeared. Teenagers publish research that would have required doctoral training a decade ago. Some companies hire directly from high school based on demonstrated capability rather than credentials. The message is clear: what someone studied matters less than how they think and relate to others.
- Agriculture and food systems, climate adaptation, biodiversity and ecosystem restoration, preservation of indigenous knowledge – these could be transformed by AI too, but they aren’t the domains receiving equivalent attention or investment.
- What is yet to be seen is whether these principles will be sufficient. Whether protecting human qualities while automating human cognition can resolve the tensions. Ultimately, the tensions may point toward questions the playbook doesn’t ask. Questions about what intelligence means, whose abundance matters, and what kinds of relationship between human and non-human worlds the technology might serve.
83 reviews
March 14, 2026
I saw Zack speak and I appreciated his optimism and viewpoint towards what AI could mean for our future. I do think there are plenty of interesting points made, and I believe those came from Zack himself.

Unfortunately this book wore on me over time. There’s a certain cadence frequently employed by ChatGPT. It’s evident in the text and it makes the prose a bit of a slog. While the author admits he used ChatGPT to “create the scaffold” of the book a few days before the manuscript was due, I believe it did more than that. You can only say “it’s not just x, it’s y” (“it’s not just changing the game, it’s rewriting all the rules”) so many times in 200 pages before you start to get eye rolls from me. The most egregious error was when something that was clearly supposed to be a date range was written as 1,600 and 1,800.

I did still take things away from the book. If it were 100 pages instead of 200, and written with a little more care, I could have given it four stars. I’d suggest seeing Zack’s speech if you can, and maybe just reading a synopsis of the book. It is probably a more efficient use of your time.

Lauren
388 reviews · 1 follower
March 7, 2026
This was pretty good. Honestly, nothing was really new or different in terms of the thinking, rationale, or examples. So this was a good condensed explanation of AI and the current zeitgeist around it. For Zack being so close to AI, I expected more discussion around whether we should pursue these technologies. And what about AI fixing wealth inequality, or even automating the tax code? But instead he focused on the usual suspects: education, health care, work. Again, it was a good summary read but nothing groundbreaking or different if you’re close to this space and keeping up.
6 reviews
February 18, 2026
Saw the author speak recently and was really interested in what he had to say. Noted many pages for future reference. I have begun to see how humans using AI can thrive. Sloppy editing in the last chapter, pages 192-195 in my edition.
Sam Ghauch
14 reviews
February 7, 2026
The Next Renaissance is one of the more thoughtful books I’ve read on AI not because it predicts the future, but because it grounds that future in history.

Zack Kass draws a compelling parallel between the European Renaissance and the AI-driven transformation we’re living through today. His central idea is simple but powerful: every major leap in human progress wasn’t just technological; it was cultural, institutional, and ethical. AI, he argues, is no different.

What I appreciated most is the measured tone. This is not techno-optimism nor fear-driven alarmism. Kass acknowledges real risks (displacement, misuse, concentration of power) but consistently reframes AI as a force amplifier, not a force replacer. The determining factor isn’t the technology itself, but human judgment, governance, and intent.

Several themes stood out:
• Innovation accelerates when tools lower the cost of thinking, not just doing
• Societies that flourish during renaissances invest in education, adaptability, and institutions, not prediction
• Progress favors those who lean into uncertainty with agency, not resistance
• The real risk is not AI replacing humans, but humans failing to evolve their role

This resonates strongly with how I see modern leadership and project delivery. Tools change faster than organizations. Strategy lags capability. Governance matters more than hype.

Although the book is conceptual rather than operational, and you won’t walk away with frameworks or playbooks, I see that as a strength: it’s meant to shape how you think, not what you implement tomorrow.

Who I recommend this for:
• Leaders navigating AI without wanting shallow hype
• Professionals thinking about long-term value, not short-term disruption
• Anyone interested in history as a lens for decision-making under uncertainty

This book doesn’t tell you what will happen.
It challenges you to decide who you will be when it does.
1 review
March 8, 2026
This book highlights a major problem with generative AI: people become lazy and no longer do their work properly anymore. This book was written by ChatGPT, and it shows. Embarrassing for someone who is described on the cover as “one of the most sought-after voices on AI.”

The author places the AI boom of recent years in historical context, which is interesting at first. After that, however, the quality of the book declines rapidly and a suspicion arises that is then confirmed in the epilogue by the author himself (or rather, not himself): 10 days before the deadline, the material collected up to that point (research and manuscript) was loaded into ChatGPT and used to prompt an entire book.

The first part (chapters 1 to 5) is relevant and exciting, especially for people not that familiar with AI. Here, the AI boom is placed in historical context. This is the most substantial part of the text, where the author's unique perspective shines and makes for interesting reading. Unfortunately, parts two and three consist almost entirely of filler material. The tenth chapter is the absolute worst: here you will find a repetition of almost identical paragraphs. This chapter was obviously not reviewed before printing. Some of the paragraphs are identical, so you can literally read out the prompt with which they were generated.

My recommendation: If you are not familiar with AI, you can read the first five chapters and then put the book away. There is a complete lack of deeper philosophical or social ideas, future scenarios or food for thought. At most, the opinions of others are juxtaposed, but that's about it.

So it's better to look for another book on the subject that wasn't written by someone whose main job is selling AI consulting.
17 reviews
February 2, 2026
A must-read for anyone thinking seriously about the future of humanity and AI

I had the opportunity to hear Zack Kass speak in January at the Granada Theater, and I left absolutely enthralled. That same sense of clarity, optimism, and intellectual rigor carries beautifully into The Next Renaissance: AI and the Expansion of Human Potential.

What makes this book so powerful is Zack’s ability to reframe AI not as something to fear, but as a profound tool for expanding human potential. Rather than leaning into dystopian narratives or abstract technical jargon, he presents a thoughtful, accessible, and deeply human vision of what’s possible when technology is aligned with our values. Importantly, he does not shy away from the ethical responsibility that comes with such powerful tools—emphasizing that the future of AI must be guided by intentional, human-centered decision-making, not just technological capability.

The book is intellectually engaging without being intimidating, optimistic without being naive, and practical without losing its sense of wonder. You can feel the same passion and insight that came through so vividly in his live talk—this is someone who doesn’t just understand AI, but truly understands people.

This is a must-read for anyone curious about the future, whether you’re deeply embedded in tech or simply trying to make sense of how AI will shape our lives, our work, and our collective potential. Zack Kass is helping lead an important, ethically grounded conversation—and The Next Renaissance belongs on the short list of books shaping how we think about what comes next.
1 review
April 21, 2026
This isn’t really a book about AI. It’s about what kind of human you need to become because of it.

Zack Kass makes a compelling case that we’re not heading into an AI crisis, but an explosion of human potential. AI isn’t replacing us, it’s amplifying us. This is where things get uncomfortable.

Because amplification cuts both ways.

If you’re sharp, AI makes you exceptional. If you’re average, it exposes it quickly.

The writing is clear, optimistic, and grounded in reality. No hype, no technical overload. Just a strong lens on where things are going and what it means.

What it doesn’t do is hold your hand. There’s no step-by-step guide, but that’s the point.

The real takeaway: AI won’t reward effort anymore. It will reward leverage. AI won’t replace you. Humans who are using AI will.

Read it if you want to understand the future. Act on it if you want to be part of it.

So if you get the opportunity to hear Zack speak and meet him in person, what a privilege.

Happy reading.
1 review
February 10, 2026
The Next Renaissance is a rare AI book that feels both grounded and genuinely expansive. Instead of defaulting to fear or hype, it treats artificial intelligence as a civilizational tool—one that can amplify human creativity, judgment, and agency rather than replace it.

What I appreciated most is the framing: AI is not positioned as an external force acting on humanity, but as a continuation of humanity’s long arc of cognitive augmentation—language, writing, printing, computation. The book is optimistic without being naive, and ambitious without slipping into sci-fi fantasy.

If you are tired of AI takes that oscillate between doomerism and shallow productivity hacks, this is a refreshing and intellectually serious alternative. It asks the right question: not what will AI do to us, but what kind of humans do we choose to become with it.
1 review
February 12, 2026
I had the opportunity to attend Zack Kass’ keynote at a company event last May, and reading his new book reinforced just how relevant his message is for the work many of us are doing today.

Kass makes a compelling case that AI isn’t just a technology shift, it’s an organizational and leadership shift. The companies that win won’t simply deploy new tools; they’ll rethink processes, decision-making, and how teams work alongside intelligence that is becoming abundant.

In my role driving complex client transformation initiatives, this perspective resonated strongly. AI adoption isn’t only about platforms and models; it’s about aligning people, operations, and strategy to unlock better outcomes. The Next Renaissance is an optimistic, practical read for anyone helping organizations navigate that change.
Marina Furmanov
280 reviews · 1 follower
April 24, 2026
The trickle-down effects of AI compared to other historical movements, in this case applied to cognitive intelligence. A compelling and well-rounded argument, with processing power as the new frontier removing biological constraints such as being tired and needing sleep :)

The gap between what's possible and what's socially permitted defines the adoption period for any transformative technology. Societies collectively decide the use cases, but this is historically not a democratic decision. The decision gap is different across different societies. What is the real cost of progress, and who pays? AI operates utilizing electricity, cooling, rare earth minerals (hardware), etc.

How can all this free time be better spent for people? So they still feel valuable and belong?

Shift from knowledge to curiosity and optimism. Strategic conviction, asking questions, will be the future.
Desert Pearl
33 reviews · 2 followers
April 19, 2026
I may gift this book to my AI-hating relative as an attempt to show that AI is not the Big Bad Wolf in her life, but for me, reading it once was enough. My impression was that this book is for the individual who is afraid of AI erasing their sense of creativity in life.

This book did edify me by updating me on the current progress of AI as a tool and on the not-so-distant future of what AI can assist with.

Though I am enchanted by the idea of AI assisting in my day-to-day more than it already has, I am wary of bad agents using AI against humanity. But as with a gun, if the person wielding the tool has a moral foundation in what is good and right, then the tool can be used to uphold what is good and right instead of chaos and destruction.
1 review
April 18, 2026
A Refreshing Take on AI: Building Communities, Not Just Technology


I picked up The Next Renaissance by Zack Kass after hearing him speak about the importance of connecting people and building stronger communities, and I’m glad I did.

The book moves beyond AI hype and focuses on impact: how technology can help cities listen better, strengthen communities, and restore a sense of agency. Just as powerful is the reminder that as AI advances, human connection and physical spaces matter even more.

A thoughtful, optimistic read that stays with you.
1 review
May 2, 2026
I've read this book twice cover-to-cover. This is a phenomenal and informative book for anyone wanting to understand the history of AI and its development and impact in the years and decades ahead. Most importantly, Zack Kass takes a positive and uplifting perspective rather than the "doom and gloom" view that AI will replace jobs and harm individuals. I highly recommend The Next RenAIssance - an informative and insightful read for anyone interested in truly understanding the transformational change of AI.
Ella Lawler
24 reviews
January 25, 2026
I do have my worries about AI. This book, however, did a wonderful job going through these worries and bringing solid ideas on how AI can beneficially change the world. There will be an AI revolution the same way there was an industrial one, and I think this book prepares readers well to bear, and be a part of, that revolution.
11 reviews
February 19, 2026
a reassuring and pragmatic view of the world we will live in

A thought-provoking spectrum of ideas covering the concepts surrounding “artificial” intelligence and our relationship with it. It provides an appreciation for, and the context of, how and what it takes to make the technology work.

Most importantly it does not prognosticate on the future but offers ideas on how to cope with change with a sense of optimism and humility. By being Human.
Michael Boezi
13 reviews · 1 follower
March 31, 2026
I came to this book with an overall negative impression of AI, mostly because of gen AI in the arts. The author does a really good job welcoming healthy skepticism so it doesn't feel like rampant boosterism, while making great points about the positive potential of AI – especially around education and healthcare. I'm glad I read this.
1 review
April 18, 2026
Honestly, this was such a relief to read. Most AI talk is either super techy or totally terrifying, but Zack Kass provides a refreshing perspective on the AI conversation and actually makes the future sound exciting and like something to look forward to. I've always been an avid fan of his podcasts and his books.
242 reviews · 4 followers
February 15, 2026
Saw this guy speak (decent speaker). You don’t need to read the book if you saw him speak (very similar messages). My advice: have AI summarize the book in five sentences or less. The book is here if you want it.
Juliette Levine
13 reviews · 4 followers
February 10, 2026
This book is a great intro to get you thinking about what advances in AI will actually mean for our world (with tangible examples). Great fodder for conversation and brainstorming.
Josh
10 reviews · 1 follower
March 11, 2026
awesome book

This is an awesome book, relevant to today and to where the future is going. Great read.
Natalie Tosti
11 reviews
April 18, 2026
I read this book quickly, and it gave me a really insightful glimpse into how AI will impact all of us. It reinforced the importance of staying curious, being human, and continuing to learn.
Jesse Liberty
Author 104 books · 92 followers
May 16, 2026
AI is scaring the hell out of everyone (and with good reason). Kass explains why he sees enormous, civilization-changing potential. Well worth reading.