Tech visionary and co-founder of Manas AI, Reid Hoffman shares his unique insider’s perspective on an AI-powered future, making the case for its potential to unlock a world of possibilities.
"An essential companion." —Fei-Fei Li "An important read." —Bill Gates "Brilliant mind. Compassionate heart. Bold ideas… Read this book!" —Van Jones "Refreshingly optimistic and welcome perspective." —Arianna Huffington "A fascinating and insightful book." —Yuval Noah Harari
As taught at UPenn's Wharton and Stanford.
Superagency offers a roadmap for using AI inclusively and adaptively to improve our lives and create positive change. While acknowledging challenges like disinformation and potential job changes, the book focuses on AI’s immense potential to increase individual agency and create better outcomes for society as a whole.
Imagine AI tutors personalizing education for each child, researchers rapidly discovering cures for diseases like Alzheimer's and cancer, and AI advisors empowering people to navigate complex systems and achieve their goals. Hoffman and co-author Greg Beato, a tech and culture writer, envision a world where these possibilities, and many more, become a reality.
Superagency challenges conventional fears, inviting us to view the future through a lens of opportunity, rather than fear. It’s a call to action – to embrace AI with excitement and actively shape a world where human ingenuity and the power of AI combine to create something extraordinary.
Entrepreneur Reid Hoffman is a co-founder of LinkedIn, Inflection AI, and Manas AI. He is also host of the podcasts Possible and Masters of Scale.
An accomplished entrepreneur, executive, and investor, Reid Hoffman has played an integral role in building many of today’s leading consumer technology businesses, including LinkedIn and PayPal. He possesses a unique understanding of consumer behavior and the dynamics of viral businesses, as well as deep experience in driving companies from the earliest stages through periods of explosive, “blitzscale” growth. Ranging from LinkedIn to PayPal, from Airbnb to Convoy to Facebook, he invests in businesses with network effects and collaborates on building their product ecosystems.
Hoffman co-founded LinkedIn, the world’s largest professional networking service, in 2003. LinkedIn is thriving with more than 700 million members around the world and a diversified revenue model that includes subscriptions, advertising, and software licensing. He led LinkedIn through its first four years and to profitability as Chief Executive Officer. In 2016 LinkedIn was acquired by Microsoft, and he became a board member of Microsoft.
Prior to LinkedIn, Hoffman served as executive vice president at PayPal, where he was also a founding board member.
Hoffman joined Greylock in 2009. He focuses on building products that can reach hundreds of millions of participants and businesses that have network effects. He currently serves on the boards of Aurora, Coda, Convoy, Entrepreneur First, Joby, Microsoft, Nauto, Neeva, and a few early stage companies still in stealth. In addition, he serves on a number of not-for-profit boards, including Kiva, Endeavor, CZ Biohub, New America, Berggruen Institute, Opportunity@Work, the Stanford Institute for Human-Centered AI, and the MacArthur Foundation’s Lever for Change. Prior to joining Greylock, he invested personally in many influential Internet companies, including Facebook, Flickr, Last.fm, and Zynga.
In 2022, Hoffman co-founded Inflection AI, an artificial intelligence company that aims to create software products that make it easier for humans to communicate with computers.
Hoffman is the host of Masters of Scale, an original podcast series and the first American media program to commit to a 50-50 gender balance for featured guests, as well as Possible, a podcast that sketches out the brightest version of the future—and what it will take to get there. He is the co-author of five best-selling books: The Startup of You, The Alliance, Blitzscaling, Masters of Scale, and Impromptu.
Hoffman earned a master’s degree in philosophy from Oxford University, where he was a Marshall Scholar, and a bachelor’s degree with distinction in symbolic systems from Stanford University. In 2010 he received an SD Forum Visionary Award and was named a Henry Crown Fellow by The Aspen Institute. In 2012, he was honored with the Martin Luther King Jr. Center’s Salute to Greatness Award. Also in 2012, he received the David Packard Medal of Achievement from TechAmerica and an honorary doctorate of law from Babson College. In 2017, he was appointed a CBE by Her Majesty Queen Elizabeth II. He received an honorary doctorate from the University of Oulu, an international science university, in 2020. In 2022, Reid received Vanderbilt University's prestigious Nichols-Chancellor's Medal and delivered the Graduates Day address to the Class of 2022 on the importance and power of friendship.
There is, surprisingly, only a passing mention of AI agents in a book called Superagency. Most of its pages are dedicated to retelling a grand past in which market forces led to prosperity, and to discrediting frictions to innovation like regulation. Reid Hoffman calls the transition to AI the “cognitive” industrial revolution and says it will be similarly painful, but necessary to keep the US on top, governed by something better than laws – iterative development. The same development cycles that popularized cars should be applied to AI instead of worrying about safety-feature regulations, since competition will supposedly curb the worst of capitalism. If you sit on the board of Microsoft and hold the reins to swathes of venture capital, that attitude is understandable, but for the rest of us in the economic 99%, I think this review on Goodreads shows him to be a Badreid, unlike one of his other books, Blitzscaling, which I gave 4/5 stars. (I reviewed this book prior to its general publication date as part of the Wharton course Leading an AI-Powered Future.)
To echo other reviewers here, this book feels like a rushed attempt to hit a word count in order to receive a juicy advance.
I enjoyed Reid’s previous AI book, Impromptu, much more, though my favourite AI non-fiction books are:
Life 3.0 (Tegmark)
The Coming Wave (Suleyman)
Co-intelligence (Mollick)
The Singularity Is Nearer (Kurzweil)
Ways of Being (Bridle)
Brave New Words (Khan)
There’s very little on agents, despite the title’s false advertising, but lots of padding on the history of cars and maps.
It looks back more than forwards, is so positive it lacks critical thinking, and is so US-centric there should be a content warning that it’s only for Americans.
A very positivist way of looking at things. Reads like an overcorrection to Zuboff’s The Age of Surveillance Capitalism and Harari’s Nexus. In line with the current geopolitical agenda of the US. Reads more like propaganda for deregulation than balanced, critical thinking about a positive future.
Hoffman really wants us to believe that AI will be humanity's stepping stone. That there are no monsters under the bed - only opportunities. But belief isn't enough. Superagency sells itself as a techno-humanist manifesto, but what it delivers is a corporate pamphlet disguised as an ethical vision. Hoffman tries to dispel the ghosts of the digital apocalypse, but in doing so he also erases the alarm bells. His treatment of works like 1984 is revealing.
More than naivety, Hoffman's approach to 1984 reveals a dangerous interpretive void. By suggesting that Orwell failed to exploit the ‘communicational possibilities’ of telescreens, Hoffman turns one of the densest symbols of modern oppression into a technical defect - as if the dystopia were a design error. The phrase ‘the fact that you can be overheard also means you can be heard’ ignores the central role of fear and internalised surveillance in authoritarian regimes. Worse, it assumes that all it takes is political will to transform instruments of repression into channels of participation. This technological reinterpretation empties Orwell's ethical and literary meaning, converting the horror of total submission into an optimistic appeal for state listening. It's a blind gesture, revealing an author who doesn't understand what's at stake when it comes to power.
The book lacks historical, political and anthropological density. It lacks listening. The world is missing. Reid wants to be the anti-Zuboff, the anti-Harari, but ends up being the protagonist of a super-negligent analysis. The book extols the promises of AI, but without recognising the asymmetries of access, the risks of economic capture and the invisible side effects. It ignores the fact that no technology is neutral - and that the agency that is being amplified today is that of those who already have agency.
Tech bro billionaire dispenses with the precautionary principle and writes how hyper-capitalism should accelerate deregulation so that AI can fix problems created by hyper-capitalism (Netflix for therapy!) while trusting robots to hopefully solve the new problems this will create in the future. And that’s from the one tech bro billionaire who didn’t vote for Trump. Truly dystopian stuff.
A con of a book, leveraging the agency hype and the writer’s name to sell, and I am very frustrated I was lured into buying it. Very unstructured, listing irrelevant AI subjects to pad the page count. Do not waste your time if you would like to learn about AI agency.
This book builds an optimistic case for the exploration of artificial intelligence, framing it not as an existential threat but as a continuation of humanity’s long journey to expand agency and capability through technology. Drawing historical parallels to the Industrial Revolution, the author argues that AI, like the steam- and oil-powered machines before it, could unlock a new era of cognitive abundance—enhancing creativity, improving access, and uplifting human dignity.
At the heart of the book is the idea that most concerns about AI are really concerns about human agency. The narrative challenges dominant techno-dystopian framings by categorizing the current voices into four camps: the Doomers, Gloomers, Zoomers, and Bloomers, each with differing expectations and fears about AI’s trajectory.
Through interesting examples from history—the Industrial Revolution, LinkedIn enabling networking and upskilling, AI-powered mental health tools expanding access, and contributions to platforms like Wikipedia and Google Maps enhancing a “private digital commons”—the book envisions AI as a democratizing force. This commons, unlike finite traditional resources, improves with use and contribution, empowering individuals through collective benefit without compromising privacy. The examples are well chosen, but the analysis is often one-sided, and the complex interactions between society and technology are not explored as often as they should be.
I felt that AI benchmarks and standards are not good proxies for progress. While AI benchmarks have pushed models forward, real-world intelligence isn’t reducible to multiple-choice questions. Issues like data leakage and inflated scores highlight the limits of benchmark-centric evaluation, and the author advocates for metrics that reflect real-world adaptability and understanding. I found the example of Chatbot Arena, which pits one model against another anonymously across many real-world scenarios in a crowd-sourced manner, interesting.
Regulatory discourse is explored with nuance, contrasting the precautionary principle—which treats new technologies as guilty until proven innocent—with the ethos of permissionless innovation, which favors iterative development and fast feedback loops. Drawing analogies to public works like the Interstate Highway System and private developments such as the increase in the reliability of cars, the book illustrates how adaptive, bottom-up innovation can deliver lasting, scalable impact. Ultimately, the book calls for iterative, responsible deployment along with light regulation and market competition as the path to a better future—one that amplifies human potential while actively preventing worse outcomes. The book does fall short of its premise, and the narrative loses its appeal midway through, where AI and its implications could have been explored more deeply; instead there are random segues into historical analogues.
A lot of people are worried about AI. They think it might take jobs or cause harm. But in their book, “Superagency: What Could Possibly Go Right with Our AI Future,” authors Reid Hoffman and Greg Beato flip that idea. They believe AI can be a tool to help us, not replace us.
They encourage us to think of AI like a partner. It can make your work better and help you make smarter choices. Like a GPS for your brain. Instead of doing the thinking for you, it gives you better info so you can decide for yourself.
The authors say we shouldn't wait around hoping AI turns out okay. We need to guide it. Use it. Shape it to fit human values. If we sit back, the wrong people might take the lead—and that’s when real problems start.
One big idea in the book is “superagency.” That means giving people more power, not less. AI can help doctors diagnose faster, help students learn better, and help small businesses grow. But only if it's open, fair, and used by many, not hoarded by a few big tech companies.
The key is to keep AI in check while still letting it grow. That means using it in real life, testing it, and fixing problems as they come up. Not hiding it away or locking it down out of fear.
The future isn’t about AI replacing us, Hoffman and Beato say; it’s about AI helping us do more.
Key Takeaways
• AI as a Tool for Human Empowerment – AI boosts productivity, creativity, and smart decision-making without replacing humans.
• Superagency and Personal Autonomy – The core idea is using AI to increase individual control and opportunity, not reduce it.
• Conversational AI and Mass Adoption – Tools like ChatGPT show how fast AI can spread and reshape how we work and learn.
• AI Literacy and Education – Learning how AI works is key to using it wisely and avoiding over-reliance or misuse.
• Responsible AI Innovation – Safe AI comes from real-world testing, user feedback, and constant updates, not fear-based pauses.
• Ethical AI Development and Governance – AI must follow clear rules to prevent bias, protect privacy, and support fairness.
• AI in Business and Entrepreneurship – Small businesses can now access advanced tools once limited to large corporations.
• AI-Powered Knowledge and Decision Support – AI helps filter data, fight misinformation, and provide useful, real-time insights.
• Open Access and the Private Commons – Sharing AI tools and research drives progress and ensures broader public benefit.
• America’s AI Leadership and National Strategy – The U.S. can lead in ethical AI by investing in education, innovation, and regulation.
Memorable Quotes
As hard as it may be to accurately predict the future, it’s even harder to stop it. The world keeps changing. Simply trying to stop history by entrenching the status quo—through prohibitions, pauses, and other efforts to micro-manage who gets to do what—is not going to help us humans meet either the challenges or the opportunities that AI presents.
You’ll never get the future you want simply by prohibiting the future you don’t want. Refusing to actively shape the future never works, and that’s especially true now that the other side of the world is only just a few clicks away. Other actors have other futures in mind.
LLMs never know a fact or understand a concept in the way that we do. Instead, every time you prompt an LLM with a question, or ask it to take some action, you are simply asking it to make a prediction about what tokens are most likely to follow the tokens that comprise your prompt in a contextually relevant way. And they don’t always make correct or appropriate predictions.
Distributing intelligence broadly, empowering people with AI tools that function as an extension of individual human wills, we can convert Big Data into Big Knowledge, to achieve a new Light Ages of data-driven clarity and growth.
As a general template, the approach we took with automobility makes sense for AI too. Instead of depending on regulators and industry experts to develop and refine AI behind closed doors, in centralized, undemocratic ways, we should continue to engage in iterative deployment that helps us better understand how people are using AI, see where issues develop as usage scales, and adjust accordingly. Through this process, people will get a firsthand sense of how they value, or don’t value, the new capabilities that AI affords. That, in turn, will help determine what kinds of risks and trade-offs seem reasonable. If all that AI delivers for most people is a convenient way to make images for homemade birthday cards, we as a society probably won’t tolerate much risk at all. On the other hand, if most people come to see AI as a technology that can amplify their agency and expand their life choices, in the way that automobility has over the last century and a half, then we’ll tolerate a higher level of error and risk in pursuit of these greater rewards.
In the coming years, instances like this—where AI devices and services can shape, nudge, automate, dictate, and even preordain the “choices” we, as individuals, are allowed to make—will become more common. More lawsuits will be filed. More efforts will be made to craft legislation that regulates the kinds of law as code that are permitted in the physical world. But whatever laws are passed or not passed, public attitudes will obviously play a major role in how we greet these new scenarios. This will be especially true if different government agencies start imposing their own AI-driven mechanisms of perfect control.
If the long-term goal is to integrate AI safely and productively into society instead of simply prohibiting it, then citizens must play an active and substantive role in legitimizing AI. In this regard, permissionless innovation and iterative deployment aren’t just mechanisms for increasing safety and capabilities, but also for cultivating public awareness about how these technologies work and what their implications are.
With AI, we think this trend will continue, with individual, national, and global impacts. A nation that lags in adopting AI-driven drug discovery and personalized medicine techniques may soon find itself facing a significant gap in health care outcomes. A nation that doesn’t benefit from AI precision farming and climate-adaptive agriculture will likely experience rising food costs and, in more extreme scenarios, increasing food scarcity. A nation with fewer options for personal development and career advancement invites a decline in the relative agency of its individual citizens—which would likely prompt a measure of brain drain, as its top STEM professionals emigrate to countries with more AI-friendly policies.
What if the U.S. made similar commitments to deploy AI in ways that were clearly beneficial to individual citizens and individual agency—and, even more crucially, for increasing opportunities for civic participation? What if the government then championed these efforts the way it once invested in the U.S. Postal Service, the Interstate Highway System, the space race, and the internet? Through forward-looking leadership, we have a generational opportunity to strengthen America’s prosperity, security, and global position, and perhaps even unite a polarized public with a greater sense of national purpose and national consensus. This won’t be simple or painless for the nation’s lawmakers, because embracing AI will create political risks for them. Instead of a Congress full of lawyers with their legal expertise, we’ll need more legislators with expertise in technology and engineering. When law is code, we need coders as much as we need lawyers at the highest levels of government.
Simply put, the more we, as a nation, commit to AI, the more every individual is likely to benefit. Productive regulatory approaches will lead to better and safer systems, faster.
NOTE: I do not get paid to read or review books. All of the books I summarize were either purchased by me or borrowed from my local library.
This is a chronology of AI and the world around it, based on the analysis of a contemporary man. However, at the very start the author warns you that the book is going to be biased, so if you are looking for a comprehensive analysis of the pros and cons of AI and the hype around it, this is not the book to read. Other than that, pretty entertaining stuff.
I fundamentally disagree that the tech billionaires will ever want to allow the arguments that the author puts forward to happen. While he does mention that some people in powerful positions in tech are maybe less than moral, he seems to think that we’re all living in some tech utopia where those who have power and money are going to look out for the rest of us.
This book is well-written and full of ideas, but it felt too optimistic, in my view. AI has risks, and I wish the authors had talked more about them. They mention challenges, but they don’t go deep into the problems AI could bring. I liked the real-world examples, and the writing was engaging. However, I wanted a more balanced discussion. If you already believe in AI’s future, you will love this book. If you want both sides of the argument, you might find it lacking. But I think this is an engaging read that everyone can benefit from, so I recommend reading it.
It's absolutely worth skipping. "Current thing" book without deep insight into the agency part of the equation. It also seems to have been written for a different world scenario than the one that materialized. There are better books, and I would be surprised if anyone remembers this book in 5 years.
I picked up this book knowing I would probably disagree with at least part of the authors' argument because, if you don't read something you might disagree with because it could challenge your world view, you're just living in a bubble. But even setting my bar low, this shit sucked. The arguments range from typical neoliberalism to hyper-capitalist growth mindsets, the historical backing the authors bandy about as grounds for their ideas have various factors glossed over at best and outright ignored or, in one memorable case, taken to illogical extremes at worst. The latter actually made me say out loud "What the fuck are you on about?" after I read it, and readers may be able to guess which section I'm talking about. It's also unnecessarily verbose in multiple places, as if the authors had a deadline and a word count to hit. Like I said at the beginning, I disagreed with the premise at the outset but came in willing to hear the authors out, but this is just Pollyannaish garbage. The fact that I cannot give it a zero star rating upsets me.
Sigh - what a let down. Not the book I expected but maybe it’s too soon for something insightful on the state of AI? One impressive feat is that the author found a way to rephrase the same sentence 1000000x
2.5 stars; bla, bla, bla; all good points, and yes, AI’s promise is mind-boggling, and yes, of course it can be used for good or evil purposes; I expected more from the promising title. You can safely skip this one.
Eh - a very optimistic take on AI, and quite dismissive of some of the counterarguments. I certainly don’t agree with every word here. Still, it made me think - especially about the idea that we don’t necessarily know, at the time of implementation, which technologies will be beneficial and which won’t.
this was a book and a HALF. ANYWAY, mixed thoughts: it mostly piggybacks off AI hype and the authors’ names, but there were some interesting perspectives, albeit I doubt tech giants would agree with using AI for good intentions
Throughout history, breakthrough technologies have sparked fears of societal collapse, only to become essential to modern life.
context:
- In the 15th century, religious authorities warned of a dangerous new invention – the printing press. The clergy warned it would unleash chaos, letting destabilizing ideas spread unchecked. A few centuries later, critics denounced the telephone, claiming it would replace genuine human connection with shallow exchanges.
- Modern AI systems can engage in sophisticated conversations, solve complex problems, and even simulate human-like reasoning. While debates rage about what it all means, AI capabilities are remarkable enough to spark both wonder and worry.
- At the heart of these worries lies human agency – our ability to maintain control over our lives and make autonomous choices. Will AI eliminate jobs? Will it erode privacy? Will we become overly dependent on machines for tasks we once handled ourselves?
mental health and ai notes:
- Just as the Industrial Revolution used synthetic energy to dramatically expand human physical capabilities, AI represents an opportunity to amplify human intelligence and decision-making power.
- Imagine every child having access to a personalized tutor as knowledgeable as Leonardo da Vinci, or a highly capable health advisor in their pocket. This combination of human and artificial intelligence could create what might be called “superagency” – an unprecedented expansion of individual human capability.
- In early 2023, tech developer Rob Morris sparked an unexpected firestorm. His mental health messaging platform, Koko, added AI capabilities to help compose supportive messages for users in emotional distress. Notably, users consistently rated these AI-assisted responses even higher than human ones. And every message clearly disclosed when it was written by an AI. Even so, social media erupted with accusations of exploitation.
- Current digital mental health solutions, including the over 10,000 available apps, show promise, but have significant limitations. Up until recently, chatbots have relied on rigid, pre-programmed responses that can feel mechanical and impersonal. Not surprisingly, only 3.9% of users continue with these apps after two weeks. (But what if mental healthcare could be delivered like Spotify – accessible anytime, deeply personalized, and highly affordable? Advanced AI systems could analyze millions of therapy interactions to understand what approaches work best for different people.)
- A 2023 study in JAMA Internal Medicine provided striking evidence of AI's potential: when physicians blindly evaluated medical advice from both human doctors and ChatGPT, they rated the AI's responses higher in 78.6% of cases, finding them both more comprehensive and, ironically, more empathetic.
- imagine this: a world in which everyone had access to as much clinically validated therapeutic support as they wanted.
Users could test different therapeutic approaches, combine multiple styles, or assemble virtual therapy teams for real-time second opinions. Researchers recently analyzed 160,000 anonymized therapy sessions containing over 20 million messages, using AI to identify which therapeutic approaches worked best in different contexts. This kind of data-driven insight could transform how we personalize mental healthcare.
- The authors make clear that this vision is not about replacing human therapists. Rather, the goal is to provide more options for how mental healthcare can be delivered. AI could support human practitioners to serve more patients, and provide immediate support when human therapists aren't available.
development notes/speed:
- the story of automobiles actually teaches us something surprising about safety: innovation often proves more effective than regulation.
- Rather than halting development, early car manufacturers and enthusiasts engaged in constant experimentation. They organized races and cross-country rallies that pushed the technology to its limits. While these events were risky, they led to crucial improvements in both reliability and safety. The results were remarkable: from 1923 to today, the death rate per 100 million miles driven has dropped by 93%.
- Imagine if, in 1923, strict regulators had limited cars to 25 mph. Not only would we have forfeited the technology's transformative impact on individual freedom, economic opportunity, and social mobility, we may have also missed out on the safety improvements.
- This illustrates the power of what's called “permissionless innovation” – allowing new technologies to develop through rapid iteration and real-world testing. It stands in contrast to the so-called “precautionary principle,” which insists that new technologies must prove their safety before deployment. While this might sound prudent, it can actually make technologies less safe by slowing down the learning process – all while preventing or delaying their revolutionary benefits.
- When millions of people use AI systems daily, we gain valuable insights into real-world interactions, unexpected applications, and potential challenges across diverse contexts.
- Rather than trying to perfect AI systems behind closed doors, we should continue this process of iterative deployment – launching, learning, and improving.
on regulation:
- Can we instead use regulation to enhance, rather than hinder, human agency?
- just as driver's licenses and traffic laws made cars safer and more useful, AI may require new forms of certification and security measures. Some worry this could restrict freedom – but consider how this might actually work: AI licensing could ensure safe access to powerful models while protecting against misuse. Chains of data provenance could help prevent deepfakes. Identity verification systems could allow widespread access to AI tools, while preventing impersonations and fraud. The right frameworks for AI deployment will help us to trust and adopt these systems, just as standardized road rules helped drivers confidently navigate America’s highways.
the information society:
- The evolution of GPS offers a compelling parallel for understanding AI's potential.
- Consider a student with dyslexia who can now use AI to convert dense textbooks into accessible audio formats, weaving in examples from their personal hobbies or interests. Or a recent immigrant who receives an intimidating legal notice in a non-native language; AI can explain the document's meaning in their own language and provide them with legal references they understand.
- This democratizing effect is particularly powerful for those who previously lacked access to expertise. In a recent study, customer service representatives using AI assistance saw their productivity jump by 14% – but the most dramatic improvements came from newer employees, who could suddenly tap into years of expert knowledge on demand. For these workers, AI functions like a personal mentor, helping them quickly develop mastery of complex customer interactions. (ok and what about everyone who learned so much for no fucking reason)
- Singapore has launched a national AI strategy specifically designed to reflect regional cultural values and norms. France has pledged $550 million to create its own “AI champions,” with President Macron emphasizing the importance of developing French language databases. These programs reflect a growing understanding that AI will fundamentally shape how societies function and evolve.
- South Korea is consolidating approximately 1,500 public services into a single AI-powered portal that will, for instance, proactively notify citizens about benefits they qualify for. The vision is to make government services as seamless as using Amazon or Google.
- a more optimistic vision of AI in governance – not as a tool for surveillance or control, but as a means to amplify citizens' voices in policymaking.
Forward-thinking countries are discovering that technology can strengthen rather than weaken democratic institutions, enabling more responsive and participatory governance.
Hoffman's reasoning seems flawed and biased to me, and it does not take AI alarmists' concerns sufficiently seriously. Chapter 6, titled "Innovation is safety," describes how automobile safety improvements somehow prove that innovation leads to safety more generally. Not necessarily so. Should the burden of proof for safety be set higher? Is more AI regulation needed to solve the alignment problem? Hoffman does not adequately explore these questions.
I was frankly annoyed with the uber-optimistic tone it had from the very beginning and its constant reliance on things that went well in previous technological evolutions as a justification for what could go right with AI. Yet I continued reading, until I hit page 67.
This turned me off: “As people begin to engage more frequently and meaningfully with LLMs of all kinds, including therapeutic ones, it's worth noting that one of our most enduring human behaviors involves forming incredibly close and important bonds with nonhuman intelligences. Billions of people say they have a personal relationship with God or other religious deities, most of whom are envisioned as superintelligences whose powers of perception and habits of mind are not fully discernible to us mortals. Billions of people forge some of their most meaningful relationships with dogs, cats, and other animals that have a relatively limited range of communicative powers...”
If Reid Hoffman truly believes people’s relationship with a higher being, or even with their pets, compares even remotely with the way they engage with LLMs, it’s probably a good indicator that this should be the last of his books I spend money or time on.
"Superagency" - what a title, right? Sounds exciting!!
Well, it didn't quite live up to the name.
Look, I know I shouldn't have expected a balanced take on AI from someone as enmeshed in the tech scene as Reid Hoffman. But, as far as tech guys go, Reid Hoffman is pretty chill. He founded LinkedIn, after all. When it comes to creating social media sites, you could do a lot worse than founding LinkedIn! Another feather in Hoffman's cap is that he's not prone to delusions of grandiosity like some of the other big players in Silicon Valley.
And yet, despite all his positive qualities, Hoffman wrote a book that could have been written by any generic tech CEO. Superagency is vapid and empty, and it reads like propaganda for deregulation. Hoffman bombards us with historical examples where new technology and deregulation led to increased human prosperity. The usual suspects make appearances: printing press, automobile, internet. His point? We've always feared technological advancement, and those fears were ALWAYS unfounded.
Hoffman gestures at the fact that technologies have downsides but rarely engages with these drawbacks meaningfully. For example, when discussing the benefits of the internet, he gives the potential downsides a cursory mention:
"Of course, a little bit of knowledge can be a dangerous thing, right? Echo chambers, filter bubbles, and algorithmic radicalization red-pilling impressionable young gamers into increasingly extreme and destructive viewpoints is certainly a narrative that has gotten significant media coverage. But there’s also a less covered story, even though it works exactly the same way: algorithmic springboarding. That’s what happens when YouTube’s recommendation algorithms lead users down spiraling rabbit holes of education, self-improvement, and career advancement."
So, sure, he acknowledges the problems with filter bubbles, but he immediately pivots to praise the internet. Throughout the book, Hoffman discusses the benefits of technologies at about a 20:1 ratio compared to the costs. This failure to confront technology's downsides undermines his examples of how technology "always goes right." And don't get me wrong, I love the internet! But to not meaningfully engage with the downsides to some of these technologies feels intellectually dishonest.
He also makes almost no effort to meaningfully engage with arguments that AI might be different from every piece of technology that has come before it. He assumes AI has unprecedented potential to improve humanity without considering its equal potential for harm.
His disinterest in AI's downsides becomes clear when discussing AI companions. He briefly nods to potential complexities of replacing human partners with synthetic ones:
"In most instances, we consider our capacity to cultivate relationships with nonhuman entities as one of our most valuable attributes, because of how these relationships can increase our emotional intelligence and complement and impact our relationships with other people. Many of these relationships function as a source of emotional support without judgment. They help create contexts where people feel comfortable enough to express themselves candidly. They often provide a sense of purpose, contributing to overall well-being."
But he quickly sprints past any deep discussion of human connection and dives head first into techno-utopian hubris:
"In effect, while AI models are explicitly not conscious or self-aware, they are, in their own statistically probable way, performatively kind and empathetic in ways that often surpass human norms. The potential consequence of this was brought home for me in conversations I had on two different episodes of my Possible podcast: the first with my Inflection AI cofounder Mustafa Suleyman, and the second with Maja Mataric, a computer science professor at USC Viterbi School of Engineering, who designs socially assistive robots. Both stressed how different kinds of AI simulating empathy can end up having a profound impact on humanity at large. As Mustafa suggested, not everyone has reliable access to human kindness and support. But when that becomes something you do have “always on tap,” it ends up increasing your own capacity for “being able to be kind to other people.”"
Notice the empirical claim slipped in as assumption? Having access to kind, supportive AI will increase our capacity for "being able to be kind to other people." This is pure conjecture. Having supportive AI on tap could just as easily cause people to retreat from the broader social world.
This highlights the book's fundamental flaw. I get it, Hoffman wants to counter AI doomers with a nice dose of AI optimism. This is a noble goal! I agree with him that people tend to only focus on the downsides. But Hoffman counters one myopic perspective with another. By refusing to engage with counterarguments, this book reads like a billionaire's Twitter thread masquerading as serious analysis. In the end, I don't even think I disagree with Hoffman's optimism, I just found this book to be super lazy.
I finally finished listening to Reid Hoffman’s “Superagency” during a road trip to Philadelphia, and found it to be a refreshingly optimistic take on AI’s future impact on humanity. Hoffman presents a compelling central argument: there’s more that could go right with AI than could go wrong. What I appreciated most was that this wasn’t blind techno-optimism. The book takes time to address contrary viewpoints and concerns, creating a balanced discussion of both benefits and risks.
The concept of “superagency” is centered on how AI can amplify human potential rather than diminish it. This is particularly relevant as we personally and professionally navigate this technological transition. Hoffman thoughtfully explores how AI could enhance our capabilities and freedoms rather than restrict them.
The audiobook also offers practical insights into how government policy, industry practices, and societal norms could be shaped to ensure AI fulfills its promise of increasing human agency rather than undermining it.
For anyone feeling anxious about AI’s rapid development, this book provides a measured but hopeful perspective that acknowledges challenges while emphasizing opportunities. A worthwhile listen for those interested in how we might harness AI for positive human outcomes.
Reid Hoffman’s Superagency presents an optimistic vision of AI, framing it as the next great technological leap, akin to the printing press or the internet. While his comparisons hold merit, the scale and speed of AI’s impact make this transition unlike any before. Unlike past innovations, AI is not just a tool—it is a system capable of decision-making and outperforming humans in many cognitive tasks.
Hoffman emphasizes AI’s potential for augmentation rather than replacement, but the immediate reality tells a different story. Job displacement is happening now, and while society has always adapted, the breadth of disruption this time is unprecedented. Businesses, driven by efficiency and the bottom line, will rapidly integrate AI, leading to structural shifts across industries.
There’s also a deeper concern: if not properly regulated, AI could shape human thought and behavior in ways we don’t yet fully understand. Having infinite knowledge at our fingertips is powerful, but who controls the flow of that information?
Superagency is an engaging read that sparks important discussions, but its optimism may overlook the immediate turbulence ahead. The future of AI is not just about opportunity—it’s about navigating the disruption it brings.
Technology, particularly AI, enhances human agency and should be developed iteratively and democratically. Humans are inherently tool-creators who advance with each technological leap. The book advocates equitable access to AI and argues that its potential to address global challenges outweighs the risks. It contrasts precautionary approaches with unmonitored innovation, sorting attitudes toward AI into Doomers, Gloomers, Zoomers, and Bloomers, and emphasizes the importance of public participation and feedback in shaping the future of AI.
Some examples pulled to support his ideas include: the printing press, power looms and the Luddites, the telephone, the camera, cars (including the Ford Model T), the internet, the US Interstate Highway System (IHS), phones, GPS, the Donner Party, steam power, Wikipedia, Google Maps/Yelp/Waze, digital photography, South Korea's COVID-19 response, red-light cameras, facial recognition at MSG, telescreens (from Orwell's 1984), and the 1960s proposal of a national data center.
As most of the other reviews point out, most of this book reviews the history of groundbreaking technology, giving ammo to the point that we need to allow the industry to grow and utilize AI moving into the future. I wouldn't recommend it unless you are interested in the general history of some of the big technology breakthroughs.
Superagency is a refreshing take on AI; one that doesn’t focus on doom and gloom but instead on potential and progress. The examples of AI tutors, medical breakthroughs, and personal assistants aren’t just theoretical; they feel possible, even inevitable. If you’re looking for a book that balances realism with hope, Superagency delivers.
Although I disagree with a lot of the points made in this book, it certainly led me to ask myself more questions, both logical and moral, about AI in the modern age. However, if the author's goal was to persuade the layperson, or even a tech nerd like myself, that AI can be used responsibly in the way we're currently progressing, he failed.