The Death of Truth: How Social Media and the Internet Gave Snake Oil Salesmen and Demagogues the Weapons They Needed to Destroy Trust and Polarize the World – and What We Can Do About It
A best-selling author documents how facts—shared truths—have lost their power to hold us together as a community, as a country, and globally, and how belief in “alternative facts” and conspiracy theories has destroyed trust in institutions, leaders, and legitimate experts.
Drawing on the front-row seat he has had as the co-founder of a company that uses journalists to track online misinformation, Steven Brill takes us inside the decisions made by executives in Silicon Valley to code the algorithms embedded in their social media platforms to maximize profits by pushing divisive content. He unravels the ingenious creation of automated advertising buying systems that reward that eye-attracting content, and describes the exploitation of that ad-driven machinery by politicians, hucksters, and conspiracy theorists. He also explains how the most powerful adversaries of America have used these American-made social media and advertising tools against us with massive disinformation campaigns.
Brill explains how, with the development of generative artificial intelligence, everything could get exponentially worse—unless we act. Fittingly, in The Death of Truth, he offers thoughtful, provocative, but realistic prescriptions for how we can act and reverse course—proposals that are certain to stir debate, and even action. Finally, Brill chillingly recounts how his company’s role in exposing Russian disinformation operations resulted in a Russian agent targeting him and his family.
I just spent two days writing a review of this book and it just disappeared! Not in the Cloud, not hidden on my hard drive. Gone. So…
I'll just say it's really, really eye-opening and frightening. Like how, since passage in 1996 of Section 230 (of the Communications Decency Act) immunized social media sites from liability, it's become infinitely easier for anyone -- bad actors, foreign governments, sociopaths, resentful individuals angry at the world -- to post fake and incendiary material. (Brill on Section 230: “Technology platforms had been given the freedom to sell the first consumer product ever that was absolutely immune from age-old common-law or modern regulatory oversight.”) Social media sites thus have no incentive whatsoever to fix the problem. In fact, the current situation is precisely what brings in engagement, clicks, and advertising dollars, so truth be damned. [Not so long ago the number two YouTube news channel (after CNN) was RT. When RT reached a particular viewership milestone, a Google executive (Google owns YouTube) praised them for their commitment to honest reporting and the avoidance of propaganda. RT was originally called Russia Today. It was specifically created to spread Russian propaganda and disinformation.]
And then there's Programmatic Advertising (PA) — which, as of this year, accounts for almost 80% of US online advertising, costs companies hundreds of billions of dollars, and about which I knew nothing until I read this book. PA is designed to post ads where they will be seen by the most eyes. No one knows how the algorithm making these “decisions” works. What’s more, companies also don’t know where their ads are being seen, nor, it seems, do they care. GEICO was the leading advertiser on the American version of Sputnik News’s global website network. Similarly, Best Buy, E-Trade, Progressive Insurance, Walmart, Amazon, PayPal, AARP, Macy’s, and on and on, had their ads posted on Russian and Hamas propaganda sites that ran anti-American stories and posted articles claiming that the October 7 attack in Israel was a false flag operation and that Paul Pelosi (husband of then-Speaker of the House Nancy Pelosi) was attacked by a male prostitute he had hired. (I was interested to learn, though I probably should have known it already, that three or four people could go to the exact same website at the same time and they’d see different ads. Brill explains why and what it means.)
One story really shook me. In 2018, Steven Brill and former Wall Street Journal publisher L. Gordon Crovitz co-founded NewsGuard, an organization that would rate (on a scale of 0 - 100) news and information websites for their practices in protecting the accuracy of their postings. Conservative sites were just as likely to earn good — or bad — ratings as Progressive sites. When NewsGuard exposed a popular YouTube channel as a Russian propaganda operation, the host of that channel named Brill and NewsGuard on the air — including aerial footage of his home, resulting in threats to Brill and his family — and then ran a story saying that NewsGuard was one of several organizations secretly working with the US government to spread disinformation and suppress Conservative voices. Mere hours after the airing of that false report, Brill received an email from Jim Jordan, chair of the House Judiciary Committee, accusing NewsGuard of doing exactly what the Russian propagandists said.
And then there's how generative AI (like ChatGPT) can easily and quickly create realistic false photos, video, and audio. (One company demonstrated to Brill how easy it is to create fakes: while Brill was talking to the company’s representative, it took only seconds to produce a fake recording of Joe Biden telling voters in a particular district that their polling place had been changed because of a water main break.)
The algorithms used by generative AI (also secret; no one knows how they’re programmed) make no distinction at all between data that comes from real sites and data that comes from disinformation sites. Asked to write a report on, say, the Holocaust, generative AI will include material from people who say it never happened. Or that the measles vaccine causes autism. Or that the government is hiding the fact that a certain fake cancer treatment works, or that Covid was designed to spare Asians and Jews, etc.
There have been a lot of books published about fake news and “alternative facts,” but “The Death of Truth” shows how truly widespread and deep the problem is, how most of it involves no human agency at all and is hidden from sight, and how profoundly it threatens democracy. The book includes clear-headed ideas for addressing the problem.
This book will probably appeal to people like myself, who have a stake in the world of countering mis- and disinformation online. It offers a pretty good accounting of case studies, many already familiar to those of us in the field, that highlight how social media platforms are hotbeds for the creation and spread of false information. However, many of these case studies are rehashes of things that have been previously written about.
The book also makes some good suggestions for mitigating the spread of misinformation online, including suggestions for platforms and governments -- although, surprisingly, there aren't many suggestions here for what individuals can do, beyond having citizens or public interest groups organize state-level ballot initiatives.
However, my major qualm with this book is how often Brill uses it as a way to promote NewsGuard, the company he co-founded, which employs journalists to create nutrition labels for online information sources. He mentions it multiple times as a tool that could and should be used for judging the reliability of online information outlets, and he rarely mentions other tools that could serve the same purpose.
To be clear, NewsGuard produces good work and is playing a substantive role in countering the spread of false information online -- one could even argue that it's been a success so far. However, it would be more accurate to have the subtitle of this book say something to the effect of "My story of how NewsGuard is helping to counter the Snake Oil Salesmen and Demagogues of the internet."
Brill has written an insightful, if discouraging, book identifying two of the most devastating forces that have been destroying common agreement about facts and driving polarization in the U.S. and around the world. The first is the set of algorithms that social media platforms use to give top placement to the content most likely to generate the most attention among users, which is also the content most likely to be false and divisive. The second is the set of algorithms used in 'programmatic advertising', whereby many of the world's most iconic businesses, e.g., GEICO, Verizon, Nike, Capital One, and Disney, spend much of their enormous advertising budgets on online placements they don't even know of, including fake news sites operated by, e.g., disinformation services, political operators, and scammers, all posing as legitimate news sites, nonprofits, or other innocuous fronts. He offers some suggested solutions, but his speculations about how AI is making it easier to destroy truth and polarize people are, unfortunately, the most compelling part.
Being concerned about misinformation and disinformation, I wanted to read this book. I also wanted to understand how false news is dividing and conning people and how ChatGPT and other AI tools could be used effectively or harmfully. Is the truth dead? Is truth just a matter of opinion? Will we continue to be increasingly polarized in the US? I have questions, and this book helped me to understand the complex world we now live in. Yes, my trust is destroyed at the moment, and it will take a lot of time to regain it. I have never been a fan of conspiracy theories and "alternative facts" presented as truths. Algorithms, social media, the internet, and biased news affect us all. I found this book thought-provoking and of high value. I hope others will read it and talk about it. Losing faith in truth and not knowing what truth is anymore is not a good place for society to be. The book ended with suggestions and proposals to reverse course from growing division and restore trust that might bring us together again. I highly recommend reading and sharing this book.
Starts with a brief history of the laws/legal cases that got us here.
Talks about:
- his experiences co-founding and running NewsGuard, which rates news sites on their trustworthiness based on a series of barometers
- how the enormity of data online leads to the need for algorithms to deal with it, though the algorithms can be mistaken or abused
- how technology originating in America has had global implications
- conspiracy theories
- Trump and January 6
- generative AI
- and more.
Concludes by proposing possible solutions or at least improvements.
Horrifying to understand how little effort the collective we is willing to put in to get accurate and factual information. Equally disgusting how many bad actors know exactly how to prey on people and their weaknesses and biases.
Steven Brill’s The Death of Truth: How Social Media and The Internet Gave Snake Oil Salesmen and Demagogues the Weapons They Needed to Destroy Trust and Polarize the World gives voice to a lot of the concerns in democracies around the globe in 2024.
Brill, who alongside former Wall Street Journal publisher L. Gordon Crovitz founded NewsGuard several years ago, wrote this book in order to identify and address these issues. Throughout The Death of Truth his concern over political polarization and the growing tribalism of America and other Western democracies comes through pretty clearly.
What is NewsGuard? This brainchild of Brill and Crovitz applies ratings to various news sites based on the reliability of the information they provide.
These ratings can be accessed via a subscription service and are created by actual human fact checkers and vetters, not artificial intelligence. The two NewsGuard co-CEOs were concerned about the spread of misinformation (incorrect information spread unintentionally) and disinformation (wrong information spread intentionally by malicious actors) on the Internet, particularly when it came to how this impacted decision making by voters in a democracy.
Brill starts out by looking at early legislative attempts to regulate the Internet. In particular, Section 230 of the 1996 Communications Decency Act comes in for close examination.
Little noticed at the time, this amendment essentially shielded what would become social media companies (even before they really existed) from lawsuits over third parties posting knowingly false information on their sites. The thinking at the time went along these lines: an Internet company is no more responsible for the content one or two people post on their page than a telephone company is for what two people say back and forth over a phone.
Furthermore, as the Internet rapidly scaled up, companies simply said there was no way they could, at the current scale, police their sites for what people were saying on them.
But Brill points out that, like laws governing how many people can safely gather in one particular building or meeting place once that number becomes hazardous, online laws could set reasonable limits on web sites under the same line of reasoning. If they are too big to properly regulate themselves, then perhaps they are just too big.
If social media companies admit their scale has gotten too high for safety, then they can take steps to address it. This could perhaps take the form of limiting the number of posts each user is allotted per day.
The book takes care to note that most changes will have to be undertaken by the social media companies themselves. The broad protections of the First Amendment would stop governments from doing too much in this department (and the book in no way argues for strict control of Internet content, merely guideposts or guardrails), but Brill argues for at least putting information in the hands of consumers and advertisers so they can demand these changes once transparency has happened.
One informative portion of the book deals with programmatic advertising online.
Essentially, this has replaced the old days of the ad department of a company buying a spot in a certain magazine, newspaper, or time slot on a TV show.
Instead, digital ads are placed on one platform or another by a series of algorithms that weigh site views and a whole host of other factors, all of which change second by second online. The ad department for a company can basically set its ad to run at the lowest market spot price while targeting, say, a 30-year-old suburban mom of two who likes to drive Subarus (just to pick a random target demo), and the ad will then pop up wherever she can see it.
But, Brill argues, this has resulted in company ads being placed on sites and next to content they never would have approved of beforehand.
Worst of all, it has often moved ad revenue from reputable news sites to clickbait rage “news” sites that go through little or no vetting. This has created a profitable cottage industry of pink-slime sites which exist only to draw clicks and therefore revenue from programmatic advertising dollars. Most readers have likely encountered these absurd sites and links on social media. These pink-slime sites make no effort to accurately report anything, instead only existing to push outrageous content that will temporarily draw eyeballs and therefore ad dollars.
And the book makes clear that huge social media conglomerates like Facebook have little financial incentive to take down harmful or misleading information on their sites. Despite public promises to get serious about it from the likes of even Mark Zuckerberg, little real effort has been made to counter what could become influence campaigns along the lines of Russia’s 2016 election interference in swing states on social media and that same year’s Cambridge Analytica fiasco courtesy of the Trump campaign.
The mechanism of NewsGuard could allow programmatic advertising to steer clear of sites which have been given a low rating for reliability. The author does emphasize that NewsGuard does not rate content based on its political leanings but instead on how delineated the news side is from the opinion and how much of a good faith effort is made to correct any reporting that is revealed to be false.
The book takes some time to look at some recent concerns about the almost unchecked spread of dangerous false information. The rise of TikTok over the last few years is also looked at askance, thanks to its algorithms crafted to push engagement and constant usage rather than truthful content.
Elon Musk’s acquisition of Twitter and subsequent gutting of its safety and vetting department (allowing it to become a playground for conspiracy theorists and neo-Nazis), combined with the browbeating of social media companies to stop regulating their own content under concerns of an alleged but unproven “bias” against conservatives were further causes for alarm in this department.
The recent unveiling of ChatGPT and generative artificial intelligence, which can rapidly create believable content, means these concerns about rapidly spreading misinformation have only grown.
The COVID crisis revealed the potential for rapidly spreading misinformation to cost lives and health. The Death of Truth looks at a lot of the articles and videos that spread false information about vaccines in particular and the COVID virus in general; as public health officials grappled with how to convey what they were finding out in real time about a virus, many people turned online to those staking a claim to a truth that the elite were somehow trying to keep under wraps.
The author uses this example to show how so many in America and abroad were willing to trust any untrained individual with a blog over dedicated health care professionals.
Many of these social media personalities were rewarded with the sort of programmatic advertising that went toward what was generating “likes” and “engagement”, not what was getting at truth.
Brill takes particular umbrage in this section at the then-president’s relentless pushing of conspiracy theories at a time when the public was seeking reliable, good-faith information. This, combined with the anti-vaccine push migrating into the mainstream from the fringes occupied by the likes of RFK, Jr., created a climate where the democratization of the Internet crossed a line from a possible positive to a deadly negative.
The Stop the Steal campaign in the aftermath of the 2020 presidential election was another example used to show how the spreading of false information resulted in a terrible outcome. The spread of false and misleading videos about election workers “stealing” votes, and the bullhorn given to some of the more malicious fringes of the conspiratorial right, resulted in many voters outright losing faith in the democratic process for reasons not based on actual facts.
More often than not it was outrage and conspiracy theories generating these interactions, not rational efforts to convey complex medical and scientific information.
Russia’s invasion of Ukraine in early 2022 was another example of the dangers of false information. The Russian intelligence services, notorious in their own country and eastern Europe for using online mis/disinformation campaigns to confuse the public into resigned acceptance, went into overdrive following the war’s onset. Videos that pushed false narratives of bioweapons labs in Ukraine, Nazis in the Ukrainian Army, and atrocities supposedly committed by the Ukrainian Army spread rapidly and helped Russia keep its citizens supporting the war. It also resulted, to some extent, in a softening of support for arming the Ukrainians among the populations of some NATO countries.
The October 7th Hamas attack on Israel also revealed a crisis. The spreading of false information in the aftermath of this brutal attack made it hard for online citizens to discern what was real from what was fake information disseminated by malicious terrorist actors.
A smaller but telling example was the 2022 hammer attack by a man looking to kidnap then-House Speaker Nancy Pelosi. This unfortunate incident, which left her husband Paul severely beaten, quickly had fake information being shared even by the likes of Twitter CEO Elon Musk himself, who shared with his millions of Twitter followers a link about a supposed conspiracy behind the attack from the Santa Monica Observer, a pink-slime news site known for creating clickbait stories.
So aside from what are akin to nutrition labels attached to news sites, what, exactly, can be done to tackle online misinformation campaigns?
The Death of Truth indicates that polarization along social and political lines has contributed to the willingness of a lot of the public to buy into information that is at best poorly researched and highly questionable.
To help fix this, there is a proposal for states to use ranked-choice election voting. The goal of this is to eliminate having to vote for the “lesser of two evils” and do away with concerns of votes being “thrown away” on candidates without a chance. (Some states and municipalities actually use this form of voting already.)
Ranked-choice voting is fairly straightforward. Voters can select anywhere from one to four candidates on their ballots, marking a 1, 2, 3, and 4 (or at least a 1) next to the candidates they most want to see win. If their first choice fails to finish in the top two, then the second-choice candidate on their ballot is elevated to their “official” vote, and so on through their fourth choice.
If none of their top four picks makes the cut, only then can their vote be considered “wasted.” This allows voters to pick someone they might think is a long shot as their first choice, but if that candidate fails to qualify for the top two, then their second choice comes into effect, and so on. It would require candidates to have the broadest appeal possible and not feel like a voter in their party “has” to vote for them.
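The tallying logic described above is essentially instant-runoff voting, and it can be sketched in a few lines. This follows the common eliminate-the-lowest-candidate form rather than any particular state's rules, and the ballots and candidate names are invented for illustration:

```python
# Minimal instant-runoff tally: one common form of ranked-choice voting.
# Ballots and candidate names below are hypothetical.

from collections import Counter

def instant_runoff(ballots: list[list[str]]) -> str:
    """Repeatedly eliminate the last-place candidate until one has a majority.

    Each ballot is a list of candidates in order of preference; a ballot
    counts for its highest-ranked candidate still in the race.
    """
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break  # ballot counts once, for its top surviving pick
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(remaining) == 1:
            return leader
        # No majority yet: drop the candidate with the fewest votes.
        remaining.discard(min(tally, key=tally.get))

ballots = (
    [["Longshot", "Moderate"]] * 3   # these voters fall back to Moderate
    + [["Moderate"]] * 4
    + [["Hardliner"]] * 4
)
print(instant_runoff(ballots))  # Moderate
```

In the example, "Longshot" is eliminated first, its three ballots transfer to the voters' second choice, and "Moderate" wins 7 to 4 -- illustrating the book's point that a first-choice vote for a long shot is not wasted.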
Another way to reduce polarization that is proposed is the redrawing of congressional districts by a bipartisan delegation (as the state of Colorado adopted via voter referendum in 2018 and other states have begun adopting as well).
Instead of drawing a “Safe R” or “Safe D” district, anything from commissions to computer programs could draw up competitive congressional districts that are not “safe” for either party. That way, candidates will go from being most concerned about being primaried by the extreme wings of their party to being worried about appealing to the broadest swath of the electorate possible in their new non-gerrymandered district.
If they go too far out on the left or right, the odds of being defeated by a challenger more representative of the country’s center than either extreme will act as a corrective mechanism (as is supposed to happen in a well-run democracy).
The book is clearly intended as a good faith effort to at least try to begin fixing Western democracies that many publics are souring on. Some readers might like some of the ideas and be dismissive of others, but this succinct book is at least worth looking at for some fresh ideas.
The premise that a democracy cannot properly function when citizens do not even have a shared idea of what constitutes facts and truth is only made further salient with the advent of adaptable and self-learning AI.
The large scale creation and deployment of fake and misleading information will only require more and more discernment on the part of citizens online, and Steven Brill clearly wants democracies to put their voters in the best position possible when it comes to awareness of the information they are being exposed to.
Steven Brill’s latest book focuses on perhaps the most dangerous threat facing society today – the widespread dissemination of false information packaged as truth.
As Brill discusses in the book, even ten years ago it seemed inconceivable that tens of millions of Americans would believe a presidential election had been stolen in spite of all evidence, or that an evil global cabal planned and executed the COVID pandemic, or that the Sandy Hook school massacre was a staged event by professional actors. Yet today, after years of repetition, millions have embraced these lies as truth. As the Nazis understood nearly a century ago, and today’s politicians have learned, lies repeated often enough eventually become truth; if the truth isn’t convenient, simply provide “alternative facts.”
As Brill warns, Artificial Intelligence (AI) will tremendously accelerate the spread of highly credible disinformation, cheaply and quickly -- AI is already allowing bad actors to easily generate hundreds of “Pink Slime” websites masquerading as legitimate news.
Today, disinformation is flowing through many channels. But for social media, at least, the author has a simple proposal to reduce the disinformation. Require social media companies to strictly enforce their own Terms and Conditions which, at least for reputable firms, prohibit dangerous or intentionally false postings.
Brill’s book is fairly dense reading full of hard facts and specifics. So it takes some effort to get through it. But it’s worth the effort. After finishing the book, you’ll understand why disinformation in society is so toxic – and widespread.
Social Media Platforms - The Handmaidens to Authoritarianism

Section 230 of the Communications Decency Act basically provides "Good Samaritan" protection to online publishers, shielding them from being held liable for user-generated content. It allows websites, social media platforms, and forums to moderate content without being treated as the publisher of that content. This provision of the Act, which was passed without much deliberation, freed up the social media behemoths of today to publish anything and everything. The evolution of these advertising-driven platforms has resulted in a focus on creating user "stickiness" or "engagement", namely the amount of time spent on a platform. Engagement, in turn, is driven by divisiveness and anger. A civil discussion based on facts might be enlightening, but it is unlikely to result in more time spent consuming "news", which is what drives profit. Who knew this seemingly benign piece of legislation would unleash the tsunami of misinformation (and disinformation) that we are all subject to today? Couple this with the newspaper industry's strategic misstep of allowing its content to be published freely on the web -- in the mistaken belief that this would bring additional advertising revenue rather than cannibalize traditional newspaper sales -- and consumers shifted to getting their news online. As a result we have a seismic shift in the news business and in the perception of what is true!
In summary, the book rests on four core forces that, Brill argues, contribute to the death of truth, namely:
- social media platforms with algorithms designed to promote stickiness and enhance the companies' advertising-driven revenue
- the rise of programmatic advertising technology that supports misinformation
- the charlatans who promote conspiracies and bogus products
- the "losers" in our community who buy into what the charlatans are selling (the "J-6" mob)

Brill contemplated creating a news monitoring and rating business in 2018 - a time-intensive, non-scalable model. At that time he estimated that there were 2,800 news sites that accounted for 95% of online engagement. In fact, he did set up a company, called NewsGuard, to score media businesses on truthfulness. Unfortunately, the second half of the book reads like a defense of the work his company performed, which detracts from some of his arguments. That social media has distorted the meaning of truth is undeniable - but his arguments would have more punch if they did not appear to be promoting his corporate interests.
It's not surprising that many people, regardless of where they are on the ideological spectrum, seek out sources that confirm rather than challenge their thinking. But as this book points out, social media companies, in order to foster engagement and retain eyeballs, have made it so that many people today live in an echo chamber in which we see and hear ONLY those things we already agree with. Making this even more dangerous is how many bad actors exploit this tendency to create content that pushes people toward falsehoods and dangerous anger and resentment.
I agree with much of the author's thesis (after all, I too am one who seeks out confirmation of what I want to believe!). But this book loses points because it often interjects the author's own story (which will, and must, always be subjective) into a detailed argument about what is objective truth. And so the two primary tracks of this book make it sometimes a jumbled mess. Two different books could have been written here: one Brill's own story of his development of NewsGuard (which is interesting enough), and a second work detailing why and how so many bad actors have entered the arena and how this has led to so many problems in our culture. As it is, combining both of these books into one lessened the impact of both and undermined the truth he was trying to get across.
Worth a read to be sure, but could have been better.
This book sounds a warning about some important problems facing humans in the twenty-first century. Increasingly, people are treating truth and facts as though they are up for debate and subject to one's opinion, and this is causing our country and others to polarize into tribes that are undermining people's lives and threatening democracy. The social media companies, which by (mistaken) law are not liable for the content they allow to be posted on their web sites, amplify mis- and disinformation by setting their algorithms to drive users to the most inflammatory and "engaging" content. They also throw up their hands and say that there is too much content being posted on their sites for them to adequately sort out the misinformation, hate speech, violence, and other bad content that their user contracts say they will not allow. In addition, the web-based systems that drive programmatic advertising result in ads (and thus money) going to unreliable sites without any attempt by advertisers to control where their products are being pitched, even on sites where they would not want their ads to appear if they were paying attention. Brill has some good proposals to fix all this, but they seem unlikely to be adopted. So I ended up feeling depressed and powerless, even though I was glad I had read the book and think it is important.
Human history does rhyme, and we did not get to the latest iteration of Malebolge by accident. As with every cycle in human history, we arrived here through choices humans made. And because they are system choices, we can change our systems to bring order to the chaos they are currently creating. The only problem is we have entrenched power, media institutions, and even a few tech bros vying to be the first trillionaire who benefit from the anger, chaos, and lack of trust created by pitting humans against each other. These few at the top of our societal hierarchies stand in the way of improving our social media and election processes and a better life for the rest of humanity.
Steven Brill writes persuasively, referencing his experiences fighting to uncover the vast network of Machiavellian actors, AI bots, ideologues, programmatic advertising, authoritarians, charlatans, conspiracy trolls, deranged and disconnected people who all benefit from unfettered global social media platforms and election processes controlled by the few.
Brill's solutions are actually quite simple in concept. No new inventions are needed; "only" humility is.
The title really isn’t exaggerating, and the book is full of depressing examples of false information being disseminated every day on television and through social media. Even ten years ago when I was still teaching, I had students tell me that matters of fact were “just your opinion,” similar to when someone tells Steven Brill that a provable falsehood is “her truth.” In a book published before the 2024 election, Brill documents Donald Trump’s many contributions to this phenomenon—for example, I hadn’t been aware of the extent of deception involved in the Trump University case. As Brill points out, “for more than five years prior to the 2016 presidential election, Trump was being exposed for actually selling snake oil”—and that didn’t stop his being elected, then or now. After detailed explanations of how the social media platforms that helped that happen work, Brill offers a chapter of specific steps to counter them, but in December 2024 it’s hard to imagine any of them going through or accomplishing much. Interestingly, almost all of the more than fifty pages of notes at the end identify online sources, no books.
Steven Brill is not the first to identify the problems of polarization and misinformation that come out of social media, though I had not previously considered how programmatic advertising contributes to the problem by creating a pathway for the ads of major companies to run on sketchy websites. Still, he's right about all of this. The problem is obvious to everyone, and it's now very clear that spreading untruth and stirring the pot is part of the business model in social media, since false and controversial information drives engagement. I admire Mr. Brill's efforts with NewsGuard to rate news sources for reliability and responsible journalism. It's a drop in the bucket, but every little bit helps. He offers additional constructive solutions in the last chapter. Still, it feels like too little and too late to me, and with the current Supreme Court and the incoming Trump administration, I'm not seeing how any of his proposals will happen. It's good to have people like Mr. Brill continuing to write about the problem and continuing to search for solutions, but I wish he could propose things that feel more feasible and more likely to yield results.
I picked this up on a whim because I wanted to read more nonfiction books this year. This was amazing and I'm so glad I chose it. The idea of misinformation being damaging seems like something that should be common sense, but Brill really goes into detail about misinformation in the modern age of the internet and all the things that created the current state of things.
People not trusting traditional media and embracing news found on social networking platforms suddenly feels more nefarious when you know the history behind Section 230 and the unprecedented protection these platforms have as a result. There's also the emerging industry of online advertising that wrecked traditional news sources and organizations by stealing a large chunk of their revenue and giving it to bad actors.
It's a very interesting book, and I think it's suitable for people on either side of the political aisle. Brill is upfront about his potential biases, and points out issues not just with one political party but both. Highly recommend for anyone looking for a nonfiction read based on a modern issue or conflict.
"The Death of Truth" is another very important exposé by journalist and attorney Steven Brill. This is my third book by Brill, the first about America's health care mess and the second about America's march toward polarization and power concentration. This book is about the takeover of social media by 'bad actors' and the resultant manipulation of public attitudes, elections, and threats to democratic forms of government. Brill details the cynical power grab and financial might of Google, Facebook, etc. and the extreme harm they are causing. He also sounds the alarm about the new Artificial Intelligence movement and its vast potential for further harm. Finally, and most importantly, Steven Brill offers common sense and doable solutions to this crisis. But these fixes are only pipe dreams until people wake up to what is occurring. I encourage people to read "The Death of Truth" and consider what part of the solution to this crisis they can be a part of.
This book is important, heart-breaking and very, very depressing. Brill assembles plenty of evidence of an information system that is manipulated and unreliable, and users who are woefully misinformed. He also shows the dangers of AI and people who judge individuals and issues on which party backs them rather than what facts do or don't support them. The book, published in 2024, is as up-to-date as a book can be, but Brill could have shortened it and made it more appealing to the people who would benefit from reading it. These problems are not new, just bigger and worse than they were. In the early 2000s, when the internet was just hitting its stride, I often saw posts full of erroneous information. If the posters were friends or acquaintances, I would send them a nice note and a link showing the facts. Almost everyone said, "Thanks, but it's a good story and I'm going to keep it as is." They were willfully ignorant and committed to opinions, not facts.
I think this is THE most important non-fiction book on cultural/societal values that I have read in decades, and I wish everyone would read it to expand our knowledge of how we got here and how to fix it. To realize that 40%+ of websites are sponsored by RT (originally Russia Today) is appalling; they should have to register as foreign agents, as lobbyists do. That our corporations blindly buy ads from two main companies, for pennies per thousands of placements on websites, and in the process send millions of dollars to Russia, helping to finance the war against Ukraine even as we send billions of our tax dollars to help Ukraine defend itself against the Russians, and that this mainly fills the pockets of the few one-percent tech moguls who designed these algorithms, is beyond appalling. Truly eye-opening.
I picked this up as a reaction to Facebook scaling back fact checking and "censorship". The next day I saw LADBible giving medical advice, and I wanted to understand how we got here.
This book sheds light on the terrifying reality that “truth” has become the amalgamation of everyone’s individual social media scroll. If truth is no longer an objective thing we’re all cooked. It’s one thing when a neurodivergent person marinating in their quirks and hobbies and fandoms talks about “their truth”, it’s wholly another when a mom is endangering her kids with bunk medical advice from bad websites.
Trump 2.0 is almost upon us and as a Floridian I feel like I’m doing hurricane prep. I don’t want to be around for how bad shit gets. I’ve had a fb account for 20 years. It’s enough. It’s not what it was and it’s not anything I need in my life.
This book is not the "thing I can buy copies of for my weird uncle" that I wanted it to be. Instead, it's in this weird middle ground where I don't think you would buy it unless you were pretty deeply interested in politics and media, but where most of what's in the book is for people who have no idea how the internet works.
I read this book in conjunction with my little brother, who is that person. Thus, I think this is a great book for teaching others, but it's hard to say that you should buy it for someone who doesn't know most of what's in it to begin with, because those people aren't likely to read books. Instead, this sits in that weird category of books that it's cool to own, but maybe not something you ever need to read.
That all being said, Brill's writing is clear and well researched, and his use of personal stories to craft a larger narrative about the world we have wrought is a technique that few books of this type use and even fewer master. I was particularly engaged by his discussion of health misinformation, and I have a deep respect for any book that dedicates an entire chapter to Section 230 of the Communications Decency Act.
Enjoyed this book. The only thing that gave me pause at the end was his call to end anonymity on the internet (then you read his bio and he's also a founder of CLEAR, the weirdos who want to scan your retinas at the airport, so I guess that tracks). I think there's a discussion to be had around privacy, but I do agree that, for example, "news" sites that are funded by Russia, or by far-left groups, really anyone, should reveal their source of funding. The chapter on how AI will supercharge the mis- and disinformation on the internet was a little depressing lol but this is the world we live in.
3.5– the book was good. It was easy to read. I was a bit put off by the frequent (almost every page) mentioning of the author's company. I recognize that this is my personal bias (I didn't pick the book up to be sold on something), since the author did acknowledge his position and that he was coming from a non-neutral space when stating its benefits. I just think the book would have been able to make some of its points without mentioning the service he created. It was a very interesting book. I did appreciate the discussion on generative AI. I appreciate the nuance that was included throughout and the strong conclusion with actions to advocate for.
This is a pretty decent book for understanding disinformation online. As a disinformation researcher, I found the points made in this book spot-on, and it's written in an approachable way. Is it slightly bloated? Yes. And as another reviewer put it, I think it could have been much better if the author had toned down the emphasis on his company a bit. I would still wholeheartedly recommend it to anyone seeking to understand how disinformation is so rapidly spread throughout our world and how social media has put fuel on the fire of this issue.
A very interesting read. It really is scary where the world is heading. Marketing and big corporations who just want you to buy, buy, buy. They control the internet. It seems to be impossible to avoid your info being sold to companies who just want you to buy stuff. Then you have lies, or what they now call alternative truths. What is real nowadays? I feel like moving off grid and avoiding technology altogether.
Before, during, and after reading this I am convinced that we cannot believe anything that we see, read, or hear! OMG, the things we read and are taught from our earliest age cannot be validated! The only solution might be to teach logic and how to develop rational thought processes. We must learn to develop our own rational, logical judgment and to doubt EVERYTHING. Test everything for yourself!!
I am sure writing around a conflict of interest has its challenges, but one chapter in particular felt like a mid-book ad. That said, this read was incredibly insightful. I am grateful to have read it all the way through, as anything less might have left me in a dire state of dread. Brill sticks the landing.
I highly recommend reading this soon if you are considering it. I fear the references are a bit topical and might lose their edge in a few years.
A book that you have to work your way through. BUT, well worth it for the recommendations provided by the author during this time of political upheaval.
Truth is not easy, but it is essential for any society to work for the people. To that end, this is a must read, because in the end it is important that all of us "start to believe in the truth again."