Ethan Zuckerman's Blog

October 8, 2025

amac at UMass: How do we regulate AI?

Alex MacGillivray – better known as “amac” – is a lawyer and technologist who was most recently deputy assistant to the president and principal deputy US chief technology officer (CTO) in the Biden-Harris administration. Previously, MacGillivray was deputy US CTO for the Obama-Biden administration, and served as Twitter’s general counsel and board secretary, and as the company’s head of corporate development, communications, and trust and safety. As someone who worked directly on the Biden administration’s Blueprint for an AI Bill of Rights and within Twitter in developing core approaches to trust and safety issues, it’s hard to think of anyone more knowledgeable about how technology regulation is built and how corporations respond to it.

Tech regulator amac – Alex MacGillivray – speaking to UMass Amherst via Zoom

amac is leading off a year-long series at UMass titled “AI and Us: Rethinking Research, Work, and Society” with a virtual talk titled “The Past and Future of AI and Regulation: A Practitioner’s View”. SBS Dean Karl Rethemeyer introduced the series with questions about how AI transforms research and teaching in the social sciences, how AI will change democracy and the world of work, all topics central to the social sciences.

amac explains that his current work focuses on coding with AI and, more broadly, trying to understand learning with AI – like many people fascinated with AI, he’s trying to figure out what the potentials and realities of these systems really are. Even before ChatGPT came out, amac tells us, the CTO’s office was studying AI, how it might be used in government and how it might be regulated. Executive orders and the bully pulpit of the White House give several ways of influencing the development of AI, and amac and others were trying to build a structure, which bifurcated around national security and non-national security questions. The National Security Council would likely take on one set of questions and leave the remaining questions to the CTO’s office. Similarly, amac was part of building an AI research resource, allowing people who were not part of massive corporations to investigate AI, given the massive hardware and data needs.

The Bureau of Industry and Security, under the Department of Commerce, issued a restriction on the best AI chips, which could be used to build powerful AI models. Eventually, the restrictions extended to the weights on large AI models, the information one would need to replicate a powerful model like ChatGPT. The Blueprint for the AI Bill of Rights, spearheaded by Alondra Nelson, was probably the key use of the “bully pulpit”, reframing AI within the lens of what rights users should have in using AI. The Blueprint came out in October 2022, as a non-binding white paper, rather than a formal set of regulations. amac tells us that non-binding approaches are often very useful in bringing together large groups of stakeholders and getting them to understand interests they have in common.

All this happened before the launch of ChatGPT, which suddenly put AI front and center on everyone’s policy agenda. In November 2022, when ChatGPT launched, the landscape changed radically. The rapid uptake of the product put a great deal of pressure on regulators to “do something” about this new technology. The Blueprint, amac argues, looked really good because it wasn’t about a specific AI, but about algorithmic decisionmaking more broadly. amac brought together a set of AI CEOs to talk with each other at the White House and got them to agree to a set of voluntary commitments. These voluntary commitments often end up being codified as a “floor” for state or federal regulation.

amac was out of government by the time the Biden administration executive order on AI tried to find a balance between the benefits and harms associated with AI. Are we talking about an existential-risk, Skynet scenario? Or real-world issues like AI discrimination in hiring and housing? The EO was really split between those two, amac tells us, and you can see the document trying to handle both questions. A later executive order focused on building more data centers, essentially an endorsement of investing in the field of AI.

What happened next was a transfer of power from Biden to Trump. The export restrictions were kept, but then relaxed after meetings with Nvidia. The new idea that other countries should have these chips and be able to train large models seems to have come after Saudi Arabia bought $2 billion in cryptocurrency from a Trump-linked firm – it seems like the government’s view of what chips could be purchased and where model training should happen has changed. NAIRR – the National AI Research Resource – is now permanent but underfunded. It’s being paid for out of NSF’s reduced budget, with no dedicated funding of its own. The Blueprint didn’t need to be rescinded, but the Trump focus appears to be very different: they want to ensure that AI is not woke, and that capitalism has no barriers to spending infinite sums on AI systems.

Voluntary commitments from AI companies are still around, but it’s unclear whether they’ll still matter – what is intriguing is that the California AI bill includes many of those commitments. The Trump AI bill is really about a single thing: increasing investment in power plants, data centers and other precursors to building larger systems. A separate EO seeks to reduce government use of “ideologically biased” AI, which has largely meant using Elon Musk’s Grok AI, which has explicit ideology coded into it.

The “sleeper” in all of this, amac tells us, is an Office of Management and Budget “m-memo” which tries to ensure transparency about what AIs are used in government and about training and talent around the use of AI – while that memo came under the Biden administration, it’s survived thus far in the Trump era.

Pivoting to the future, amac warns that people tend to predict a particular future and shape their policy accordingly. He cites the AI 2027 paper, which predicts superintelligence based on exponential growth in AI research – if you adopt that framework, you advocate for different policies than if you anticipate a different scenario. The first question you need to answer: how good will AI get? The 2027 paper postulates that AI will become a godlike, all-knowing intelligence, possibly in the next two years. This scenario also assumes that robots and other ways of interacting with the physical world make extremely rapid progress – while amac says he doesn’t believe in this future, there’s a whole world of AI safety people building policies around this.

There’s another camp (which I tend to side with) that believes that we’re reaching the end of how much better LLMs can get just from more data and more compute. amac also notes the AI as Normal Technology paper, which sees AI as slower and addressable, and manageable through existing policy mechanisms. amac says he doesn’t know which of these is realistic – the competence of models is increasing over time with an exponential increase of resources, and AI CEOs say “it’s worked for the last few doublings of resources” – “they don’t know, I don’t know, I don’t have a reason to say that this will stop here.” But continuing to double means that you will – eventually – use all the chips and all the power in the world. Where do we hit the limit of this as a strategy?

Another question is “how many frontier models will there be?” amac thinks we’re likely to see a world where a bunch of models are, over time, roughly the same as each other in terms of capacity. Open models are a confounding factor within this space – a number of firms are putting out their models with all the information you’d need to run those models. There’s nothing to say that these models will continue to be released, but right now there’s about a six month delay between frontier models and open models.

There’s also a set of questions around the power of small models – if small, frontier, open-source models end up being extremely capable, how does that change the landscape for regulation? Encryption software is massively distributed around the world and it’s hard to imagine a regulatory scheme that can meaningfully restrict it – are we headed somewhere similar with small AI models? How do we square questions of explainability and observability with a huge, diverse set of models?

While this isn’t a roadmap for regulation, it might offer some suggestions on what people will try to regulate. As someone not currently in the regulatory game, amac notes Lina Khan’s observation, “there is no AI exception” to existing laws on the books regarding hiring, housing or other forms of discrimination. amac suggests regulation should focus on real harms, not hypothetical or science fiction harms. Deepfakes are causing real, meaningful harm right now – that would be an excellent place to start.

We need to bring expertise around these technologies and bring these technologies into government so that people in government understand what’s really going on. Ideally, something like the AI Bill of Rights will eventually move into influencing law and policy, influenced by agencies like the Federal Trade Commission. Critically, legislation will need to think about all possible scenarios, and be continually aware of when we’re moving from one scenario to another.

Weightlifting versus forklift


July 30, 2025

#KeepGVStrong: Global Voices advocates for a connected world at a dark time

TL;DR: For two decades, Global Voices has done something uncommon and vitally important: it’s amplified the voices of people from all around the world, making it possible to hear perspectives usually left out of the news. We’ve done this work for over twenty years, powered by volunteer writers and translators, with a small team of professional editors and coordinators. Due to cuts to international aid, we’re up against the wall financially – we need anyone who’s been helped by or inspired by Global Voices to lend a hand. Many more details below, but if you can, please chip in: https://globalvoices.org/donate/

In 2004, I was a fellow at Harvard’s Berkman Center for Internet and Society, watching a new chapter in the history of media unfold. Thousands of people were starting to share their thinking and opinions online via weblogs, personal journals that mixed links to interesting sites discovered online, personal details, political opinions and observations about the world. Facebook was in its infancy, in a dorm room across the Harvard campus, so blogs were the online space in which many people first experienced individuals sharing unfiltered opinions. At the time – before the rise of influencers monetizing their online presence or algorithms filtering our posts for maximum engagement – it was a space for excitement and hope.

For some, the hope was that writing online would loosen the grip of the mainstream media. Bloggers could write about what they chose, when they chose, and might be able to report news directly, as eyewitnesses. For others, the promise of blogging was that their individual voice could be heard – a group of liberal bloggers reveled in the idea that Vermont governor Howard Dean was listening to their blogged suggestions for his platform as a candidate for president.

Rebecca MacKinnon and I were both interested in blogging for a different reason. We’d both found our way to Berkman after experiences in other parts of the world. Rebecca had been CNN’s Asia bureau chief, where her fluency in Chinese and deep experience in the region meant she saw stories invisible to most US reporters. I had just spent five years commuting between western MA and west Africa, building a technology training nonprofit and learning that Africa as covered in US news bore almost no resemblance to the continent I was regularly visiting.

For both Rebecca and me, the exciting thing about the internet in 2004 was the possibility that we could hear from the entire world. That meant not just Dean supporters and American “future of news” types, but Pakistani poets, Ghanaian entrepreneurs, Egyptian hackers and Bolivian linguists. We both began sharing links to blogs from a much broader world than was usually surfaced in US-centric tech spaces, and our growing list of international bloggers we admired turned into an invitation list for Bloggercon, a gathering of the digerati at Harvard that got significantly more global due to our intervention.

Global Voices was born out of that gathering, into a world that was largely optimistic and excited about the potentials of the internet.

We don’t live in that world anymore.

Blogging gave way to social media, becoming vastly more inclusive, but rewarding image, video, frequency and emotion more than the long-form personal writing that characterized the “golden age” of blogs. Some bloggers became journalists or op-ed writers, while others went quiet. Social media spawned a new economy of influencers, generated a wave of panic about mis/disinformation (some legitimate, some overblown) and another about child safety online. Now social media is feeding AI systems, which anticipate a future in which individual voices are subsumed into a generic voice of authority who knows everything, but fails to credit any of the individuals who’ve actually done the knowing.

Global Voices summit, Nairobi, Kenya, 2012

Throughout it all, Global Voices has been here, presenting a wider world for anyone who’s wanted to learn about it. There have been moments – the Arab Spring, for instance – where American and European audiences have leaned on our work to understand a transformative historical moment. (This 2011 piece in the New York Times by Jennifer Preston, about our work in the Middle East led by regional editor Amira al-Hussaini was one of those moments where broad audiences got a sense for what we were doing.)

But even when the stories we covered haven’t garnered international attention, we’ve served audiences that few others reach. A grassroots translation project – Lingua – blossomed into a massively multilingual community in which stories originate in dozens of languages and are translated into dozens of others. In some of the languages we cover, like Malagasy, our site is one of the few resources for international news in a local language. Eddie Avila, now our co-managing director, has led the Rising Voices program, which has helped build local language preservation communities in Mexico, Colombia and Guatemala.

The stories we cover are ones you often won’t see, unless you’re reading extremely widely. Our China team is helping explain “Sister Hong”, a scandal involving sex work, clandestine video recording, LGBTQ issues and China’s particular brand of sexual repression and male loneliness. There’s an amazing series of reflections from Ukrainians fleeing war and bringing a sense of home with them, through piles of books and new Ukrainian libraries in cities like Innsbruck, Austria. Meanwhile, as Russia cracks down on “extremist activity”, simply searching the internet has become dangerous.

Reading Global Voices is a reminder of how big and complex the world really is. Visiting a gathering of Global Voices authors and translators is a reminder of how small and connected we all are. I made it to our 20th anniversary summit in Kathmandu, Nepal this past December almost a week after the gathering had begun – I had to fit my trip between my final two classes of the semester. By the time I had arrived, the hundred or so participants from six continents had built lasting bonds, and it felt a little like joining a high school halfway through the year… until I took a brief pause to decide where to sit for lunch and got lovingly dragged to a table full of writers I’d never met, none of whom knew who I was. Through two decades of working really hard to listen and learn from one another, we’ve created a culture that’s remarkably welcoming, both to a jet-lagged co-founder, to our new executive director, Malka Older, and to the many Nepali authors, journalists and students who joined with us.

This work has never been easy to do. Global Voices is only possible because the vast majority of the work is done by volunteers. A small staff is supported partly by donations, but mostly by grant funding. Ivan Sigal and Georgia Popplewell, who took the reins from Rebecca and me, and ably marshaled the organization for fifteen years, were very good at helping foundations like MacArthur, Open Society Foundations, Omidyar, Ford, Knight, Kellogg and others understand the importance of our work, directly and indirectly. Those funders value the stories and podcasts we produce, but they have also seen a literal generation of writers, translators and editors trained within our community. (Many have gone on to be leading journalists in their home countries, or for international news organizations.)

At the 2024 Global Voices Summit in Kathmandu, Nepal

We’ve faced financial hard times before, but we’ve never seen anything like the environment we’re in now. The Trump administration’s cuts to international aid have hit us both directly and indirectly. Directly, some of the organizations we work with, like the Open Technology Fund, have seen their funding held up by the White House, and have had to go to court to continue operating. When they don’t get funded, neither do we. But the secondary effects have been profound as well. Cuts to international aid, public broadcasting and public health have left thousands of worthy organizations seeking the support of a small number of foundations who now see massively increased demand for their limited funds.

We have the blessing of being a genuinely international organization – we were founded by US citizens as a Netherlands nonprofit, and our board represents Egypt, Nigeria, the UK, Indonesia, India, Peru, the Netherlands and Hong Kong. Like a lot of international organizations, we’re turning to European funders… but we’re hearing from our European members that nationalism is making work like ours harder in their countries as well.

It’s a dark and difficult time in the world right now. The work we’ve done at Global Voices has long represented a vision of how the world could be different. We could listen carefully to one another, to understand our world from multiple points of view. We could work together on projects too big for one person – or one group of people from the same nation – to take on. We can fight for an internet that connects us and builds understanding, rather than separating us into easily marketed consumer categories.

We’re in real trouble here, and we could use your help. If you’re in a position to make a gift to Global Voices, it really would make a difference right now. I have enormous confidence in Malka Older, Eddie Avila and Krittika Vishwanath, our directors, who have taken the wheel of our ship in the stormiest seas I’ve ever seen. We need help getting through the next few months so we can figure out who’s able to support the hard work of international connection at a moment where the world is in danger of getting more fragmented and isolated.

If Global Voices is or has ever been an inspiration, please help us out. And check out the Everest Roam Nate Matias (dear friend and US Global Voices board member) is taking on this weekend, a bike ride that includes a vertical climb the height of Everest, raising money for GV.


July 23, 2025

Road trip roundup… preparing for Great Lakes 2025

It’s midsummer here in the Berkshires, one of the most beautiful times of year, with warm days, cool nights, fields filled with fresh vegetables and theaters and museums attracting visitors from around the country. It’s idyllic, lovely, and also a clear sign that I need to hit the road.

Picture of a farm in rural western MA

Caretaker Farm, Williamstown MA

For whatever reason, when midsummer comes around, I develop an itch for a road trip. Sometimes I’m able to find a good excuse to spend a week or so driving around this weird and wonderful country – a friend’s car needs moving from Texas to Maine – but usually I’m forced to confront the fact that road trips are just meeting my peculiar psychological and intellectual needs.

In 2022, Amy and I took a vacation together to the eastern Great Lakes, visiting “legacy cities”, the metropolises that boomed before WWII and have shrunk afterwards as American industry moved south, west and eventually out of the country. I was amazed at how much I enjoyed visiting those cities, and wrote a long blog post: Legacy Cities and the Changing Nature of the Good Life, which turned into a talk at the DC-based PopTech conference.

(It was PopTech’s first year in a new venue, and there were quite a few technical problems, but the talk is fun to watch. )

Reflecting on the deindustrialization of the Great Lakes and Americans’ migration south and west, I wrote an essay in 2023 that’s basically an outline for a book project: From Phoenix to Cleveland. I am not brave enough to start writing this book, but I’m deep into the process of researching it, which involves equal parts processing large data sets and taking road trips.

Last year’s drive was dictated by a straightforward but fascinating data set: the Harvard Joint Center for Housing Studies’ data on price-to-income ratios for residential properties. Basically, you can roughly characterize the affordability of a city by measuring how many years of local average income it takes to buy the local average house. This is a lot more informative than telling you that Boston is more expensive than Binghamton, NY – sure, but salaries are better in Boston. Still, it takes 6.6 years of Boston salary to buy the average house in the Massachusetts capital, while 2.6 years of Binghamton salary will get you comfortably settled in the Southern Tier.
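
(For the data-curious, the ratio itself is trivial to compute. Here’s a minimal Python sketch; the home prices and incomes are placeholders chosen to roughly match the ratios above, not the Joint Center’s actual figures.)

```python
# Minimal sketch of the price-to-income (P/I) affordability measure.
# Home prices and incomes below are illustrative placeholders, not
# the Joint Center for Housing Studies' real data.

def price_to_income(median_home_price: float, median_income: float) -> float:
    """Years of local median income needed to buy the local median house."""
    return median_home_price / median_income

cities = {
    "Boston, MA": (660_000, 100_000),     # hypothetical price, income
    "Binghamton, NY": (130_000, 50_000),  # hypothetical price, income
}

for city, (price, income) in cities.items():
    print(f"{city}: P/I = {price_to_income(price, income):.1f} years")
```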

Picture of men choosing fruit at a farmer's market

Last summer, I visited 15 of the 50 “highest value” cities in the US – cities with unusually low P/I ratios, like Decatur, IL, which leads the nation at a 2.2 ratio. I wrote a lot on that trip:

Driving by Data Set

By the numbers: what statistics can and can’t tell you about “undervalued” cities

A Square Deal in Binghamton

Buried, With Dignity, In Elmira

Side Quest in Northern Ohio

In Search of the Statistically Improbable Restaurant

Decatur: From Corn to Soy to Cricket City

From Peoria to Kankakee

Utica Starts With You

A Tale of Two Cities (Toledo, OH and Gary, IN)

The Company Town and the Corn Fields (Columbus, IN and Terre Haute, IN)

It was a delightful, if exhausting, trip and I’m proud of the writing I did along the way. (The chapter from Elmira may be one of the best bits of storytelling I did during the trip, if you’re looking for one essay to start with.)

I’ll be looking for undervalued cities on this trip as well – there are 11 more of the top 50 on the itinerary – and as I’ve been exploring in some of my other writing, I will be using big data sets to search for restaurants along my route.

But this year’s route is more about geography than driving to explore a single data set. I’m trying to understand the Great Lakes better than I currently do. My first stop is in Rome, NY, where the first segment of the Erie Canal connected Rome and Utica, before growing into the shallow, crowded, smooth highway that linked Lake Erie – and the upper four Great Lakes – to New York City and the Atlantic. I’ll check in on Lake Ontario in Oswego before cutting across the “Golden Horseshoe” in Southern Ontario to spend a few days in Michigan, staying in Hamtramck. I’m visiting friends in Traverse City on Lake Michigan before crossing the Upper Peninsula and northern Wisconsin and saying hi to Lake Superior in Duluth. After visiting friends in Minneapolis, I’m down the Wisconsin coast from Green Bay to Racine and back home through the corn and soy belt, with visits in Champaign, IL, Indianapolis, IN and Cleveland.

I’m still assembling my reading for the trip. I’ve been enjoying Dan Egan’s The Death and Life of the Great Lakes, which I’m starting to think of as a nonfiction ecothriller. Next in my queue is Paul Collier’s “Left Behind”, which offers some language – “neglected cities” – that I suspect will complement the best book I read on last year’s trip, Alec MacGillis’s “Fulfillment”, which helps me understand “winner take all” cities and their opposites. I’m open to suggestions as I prepare, as well as for must see roadside attractions and unmissable Bengali, Balkan or BBQ restaurants.

I also have high hopes of forcing myself to write less and make more videos. Inspired by my friend Casey Fiesler’s admirable practice of sharing insights on data ethics via TikTok, I am hoping to record a set of road stories from the communities I visit. This is going to force me well outside my comfort zone in more ways than one, so I’m writing this here in hopes of forcing myself to follow through on the project.

More to come as the journey gets underway. You seasoned road trippers know that planning a trip is part of the fun, and so organizing my past writing from and about the road is part of psyching myself up for many, many hours behind the wheel of an underpowered Prius C…


July 3, 2025

Stalking the Statistically Improbable Restaurant… With Data!

Last summer, I wrote about the statistically improbable restaurant, the restaurant you wouldn’t expect to find in a small American city: the excellent Nepali food in Erie, PA and Akron, OH; a gem of a Gambian restaurant in Springfield, IL. Statistically improbable restaurants often tell you something about the communities they are based in: Erie and Akron have large Lhotshampa refugee populations, Nepali-speaking people who lived in Bhutan for years before being expelled from their country; Springfield is home to the University of Illinois Springfield, which attracts lots of west African students, some of whom have settled in the area.

Fine food from The Gambia in Springfield, IL

The existence of the statistically improbable restaurant implies a statistically probable restaurant distribution: the mix of restaurants we’d expect to find in an “average” American city. Of course, once you dig into the idea of an “average” city, the absurdity of the concept becomes clear.
There are 343 cities in the US with populations of over 100,000 people, from 8.47 million in New York City to 100,128 in Sunrise, Florida (a small city in the Ft. Lauderdale, FL metro area). Within that set are global megacities like New York and LA, state capitals, college towns, towns growing explosively and those shrinking slowly.

I’ve retrieved data about the restaurants in 340 of these cities using the Google Places API. This is a giant database of geographic information from across the world – not only does it include information about restaurants, but also about parks, churches, museums and other points of interest. The API was designed to make it easy to search by proximity – “return all restaurants within 2km of this point” – but it’s recently gained an “aggregate” attribute, which allows you to ask questions like “How many Mexican restaurants are there in Wichita Falls, Texas?”
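
(A rough sketch of how those per-city counts turn into the percentages used throughout this post – the counts here are invented, standing in for whatever the Places API aggregation actually returns:)

```python
# Sketch: turn raw per-city category counts into shares of each city's restaurants.
# FAKE_COUNTS stands in for Places API aggregation results; the numbers are invented.

FAKE_COUNTS = {
    "Pittsfield, MA": {"pizza_restaurant": 18, "chinese_restaurant": 6, "mexican_restaurant": 5},
    "Quincy, MA": {"pizza_restaurant": 40, "chinese_restaurant": 31, "mexican_restaurant": 2},
}

def city_profile(counts: dict) -> dict:
    """Convert raw category counts into each category's share of the city's total."""
    total = sum(counts.values()) or 1  # guard against cities with no data
    return {category: n / total for category, n in counts.items()}

for city, counts in FAKE_COUNTS.items():
    shares = city_profile(counts)
    print(city, {category: round(share, 2) for category, share in shares.items()})
```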

The API is not perfect. I tested my queries on my hometown of Pittsfield, MA and while it got some questions (the number of Dunkin’ Donuts) completely correct, it missed others entirely, failing to identify our two excellent Brazilian restaurants when I searched for that category. We’re going to proceed with the assumption that the data is imperfect, and sanity-check when we get surprising results.

graph of the relationship between population and number of restaurants

For starters, we look to see whether there’s a relationship between the population of a city and the number of restaurants located within city limits. It seems obvious that New York City should have significantly more restaurants than Lincoln, Nebraska, and indeed, that’s true. When we look at the whole set of cities, from 100,000 people up past 8 million, there’s a straightforward linear relationship between population and restaurants, with a few interesting outliers: Houston has more restaurants than we might expect for its size, Phoenix fewer than we’d anticipate.

graph of the relationship between restaurants and population in large american cities

The data is messier as we look at smaller sets of cities. Looking at cities with populations over 250,000, lopping off the four largest US cities (New York, Los Angeles, Chicago, Houston), a linear regression no longer fits as well. Some of the cities that are celebrated for their “creative economies” – Austin, San Francisco, Portland, Seattle, Nashville, Boston – have more restaurants than we might expect, while some less celebrated cities of comparable size – Fort Worth, Jacksonville, Indianapolis, El Paso, Oklahoma City – have fewer than we might expect.

Graph of the relationship between population and number of restaurants in small American cities

Exploring the cities between 100,000 and 250,000 people, there’s still a clear relationship between population and the number of restaurants, but that relationship explains just over half the variance in the data (R² = 0.5333). Some of the cities that are especially restaurant-rich are relatively small capital cities – Little Rock, AR; Providence, RI; Baton Rouge, LA; Tallahassee, FL – and college towns – Knoxville, TN; Tempe, AZ. Some of the cities that have fewer restaurants than expected are close to larger cities – Cape Coral, FL is next to Fort Myers; Yonkers, NY is next to New York City; Moreno Valley, CA may be overshadowed by Riverside and San Bernardino.

(These are rough guesses based on staring at scatterplots. I’ll want to try some regressions before positing that capital cities have a higher than usual number of restaurants because lobbyists need to take legislators out to eat.)
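
(If you want to see the shape of that regression, here’s a sketch with scipy – the population figures and restaurant counts below are invented for illustration, not the real Places API numbers:)

```python
# Sketch: fit restaurants ~ population and flag cities far from the trend line.
# The data points are invented; real values would come from Places API counts
# and Census population figures.

import numpy as np
from scipy import stats

cities = ["A", "B", "C", "D", "E"]
population = np.array([150_000, 250_000, 600_000, 950_000, 2_300_000])
restaurants = np.array([380, 610, 1_500, 2_100, 6_200])

fit = stats.linregress(population, restaurants)
print(f"slope = {fit.slope:.4f}, R^2 = {fit.rvalue ** 2:.3f}")

# Residuals show which cities have more or fewer restaurants than their size predicts.
predicted = fit.intercept + fit.slope * population
for city, residual in zip(cities, restaurants - predicted):
    print(f"{city}: {residual:+.0f} restaurants vs. trend")
```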

With all this data, we can now imagine an “average” American city of 100,000 people. We’ll call our imagined city “New Springfield, California”. (California has 76 cities with 100,000 or more people, ahead of Texas with 42. There are three Springfields in our set of cities, and 5 cities that start with “New”.)

AI-generated image of an imaginary city

There are 305 restaurants in New Springfield.
61 (just over 20%) are fast food outlets, including:
9 Starbucks and 4 Dunkin’s
6 McDonalds, 3 Burger Kings and 3 Wendy’s
4 Taco Bells and 2 Chipotles
9 Subways
3 Dominos and 2.5 Chick-Fil-A’s

55 restaurants describe themselves as selling “American” food. Additionally, New Springfield boasts 5 BBQ joints, 5 diners, 12 bar and grills, 22 burger joints, 29.5 pizza parlors, 28 sandwich shops and 5 steakhouses.

122 restaurants offer some sort of “international” cuisine. Mexican is the most numerous with 38 eateries. There are 12.5 Chinese restaurants, 12 that identify as “Asian” (not clear how those categories overlap), 11 Japanese, 3.5 Korean restaurants, 1.5 ramen bars, 7 sushi restaurants, 4 Thai restaurants and 3.5 Vietnamese places. (Not clear if any have good banh mi, or just pho.) There are 4.5 “Mediterranean” restaurants, 10 Italian, 1.5 Greek, two Middle Eastern and half a Lebanese place. There are four Indian restaurants, two Brazilian restaurants, and my favorite place has a 47% chance of being African on any given night, a 20% chance of being Afghan and is otherwise likely to be Turkish.

These numbers won’t add up neatly, due to rounding, overlap between restaurants (the combination Pizza Hut and Taco Bell might well be coded as Mexican, Italian and fast food) and the fact that this is an imaginary city based on the distribution of messy and incomplete data. But it gives us a statistically probable city that we can now deviate from.

We can – and will, in just a moment – look at individual variables to discover that Newark, NJ has the highest percentage of African restaurants and that Quincy, MA has the lowest proportion of Mexican restaurants of any American city over 100,000. (If you’ve been to Quincy, that tracks – while the town is no longer as solely white and blue collar as it used to be, there’s been an influx of Asian immigrants and far fewer Latinx immigrants than in Boston’s western suburbs.) For now, we want to ask: which of our 340 cities is closest to New Springfield – which is the most “average”?

I represented the restaurant distribution for each city as a vector of 41 values, each a probability between 0 and 1 that a random restaurant fits within a specific category (Is it fast food? Chinese? A Dunkin’ Donuts? etc.) The cities closest to the centroid of that vector space are Lexington, KY; Colorado Springs, CO; North Charleston, SC; Indianapolis, IN and Columbus, OH. Three of those cities are relatively close to one another (Lexington, Columbus and Indianapolis), suggesting some sort of southern/midwestern conceptual center for the nation’s culinary tastes.
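
(Here’s roughly how that “closest to the centroid” ranking works, sketched with numpy. The vectors below have only three invented components; the real ones have 41.)

```python
# Sketch: rank cities by distance from the mean category-share vector.
# Three-component toy vectors stand in for the real 41-dimensional ones;
# the numbers are invented for illustration.

import numpy as np

cities = ["Lexington", "Quincy", "Garden Grove", "Columbus"]
vectors = np.array([
    [0.20, 0.04, 0.03],   # e.g. fast food, Chinese, Mexican shares (invented)
    [0.12, 0.13, 0.01],
    [0.15, 0.05, 0.10],
    [0.21, 0.04, 0.04],
])

centroid = vectors.mean(axis=0)                      # the "New Springfield" vector
distances = np.linalg.norm(vectors - centroid, axis=1)

# Cities printed first are the most "average"; those printed last are the most unusual.
for city, distance in sorted(zip(cities, distances), key=lambda pair: pair[1]):
    print(f"{city}: distance from centroid = {distance:.3f}")
```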

(I badly wanted Peoria, IL to be close to the centroid because of the old idea that Peoria was middle-American enough to be America’s test market. That old saw hasn’t been true for decades – Peoria hasn’t diversified as quickly as the rest of the US and is no longer demographically average. Interestingly enough, Columbus, OH is one of the cities most mentioned when people look for a demographically representative test city. And in an ironic twist, both Peoria, IL and Peoria, AZ sit right in the middle of my centroid ranking – halfway between being average and being unusual.)

The five cities furthest from the centroid – statistically the five most unusual cities – are a weird mix: South Fulton, GA; Garden Grove, CA; Menifee, CA; Jurupa Valley, CA; and Quincy, MA. All three California cities are in the south of the state, east of Los Angeles. Menifee and Jurupa Valley are part of the “Inland Empire”, while Garden Grove borders on Anaheim. Garden Grove, CA and Quincy, MA have larger Asian populations than many similarly-sized cities, and Jurupa Valley is majority Latinx, not unusual for California, but quite different from the rest of the nation. South Fulton, GA is a suburb of Atlanta. Like Jurupa Valley, CA, it was recently incorporated, which might explain why it’s got the fewest restaurants of any city in our set. (It might also be a data error.)

My centroid calculations currently weigh all vectors equally, despite the fact that some have very little variation and others have lots. Google’s API has a category for Indonesian restaurants, for example, despite the fact that the vast majority of US cities don’t have any – removing categories like that might give me more explicable results and help me find clusters of restaurants in the data.
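
(One simple version of that fix, sketched below: drop near-constant categories before measuring distance. The variance threshold here is an arbitrary illustration, not a tuned value.)

```python
# Sketch: filter out near-constant category columns before centroid/clustering work.
# The 1e-4 variance threshold and the toy vectors are placeholders, not tuned values.

import numpy as np

categories = ["fast_food", "mexican", "indonesian"]
vectors = np.array([
    [0.20, 0.04, 0.000],
    [0.12, 0.13, 0.001],
    [0.15, 0.05, 0.000],
])

variances = vectors.var(axis=0)
keep = variances > 1e-4
print("keeping:", [c for c, k in zip(categories, keep) if k])

filtered = vectors[:, keep]  # use these columns when computing distances or clusters
```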

But we don’t care about clusters of cities – we’re looking for statistically improbable food! I’m right there with you, friends.

Our method is quite good at showing us concentrations of restaurants in the categories that Google explicitly tracks. In the average American city, 0.07% of restaurants serve Afghan food. But in six California cities – Fremont, Elk Grove, El Cajon, Tracy, Hayward and Concord – at least 1% of restaurants are Afghan. Fremont, Concord and Hayward are all in the East Bay, inland from San Francisco, with Elk Grove and Tracy in the same general part of the state – suggesting that migrants often open their businesses in parts of the US where there’s already an established population of compatriots.

African restaurants are similarly rare – 0.15% of total restaurants in our set. But the cities with high concentrations of African food are more widely distributed. Newark, NJ is well within the orbit of New York City, and Inglewood, CA within LA’s penumbra, and both have attracted African immigrants for whom real estate within the megalopolis is too expensive. The other three cities with more than 1% African restaurants – Minneapolis, MN, St. Paul, MN and Fargo, ND – are well known destinations for migrants from East Africa, particularly Somalis. (Wonderfully, two cities in easy driving distance of me – Albany, NY and Worcester, MA – rank in the top 20 of African restaurant distribution.)

Sometimes what’s interesting is what’s NOT present in a city. The cities with high concentrations of Mexican restaurants are where you would expect them to be – southern California, with a few in the Central Valley; border areas of Texas and New Mexico; the Phoenix suburbs. Mexican food deserts include some very cold places (Rochester and Buffalo, NY), and some cities with large non-Mexican immigrant populations (Arabic speakers in Dearborn, MI, Asian Americans in Quincy, MA). Digging into demographic data, I discovered that two Mexican food deserts in Florida are demographically distinct from the Miami area, where both are located. Miami Gardens, FL is 62% African American, down from a decade ago – there’s significant Latinx immigration, but it’s very demographically distinct from Miami, which is 70% Latinx and 12% African American. Sunrise, FL, another Miami-area city, has large Jamaican and Haitian populations (as well as a surprising number of Yiddish speakers.)

I had a hypothesis that concentrations of fast food were correlated with poverty. As the data comes in, I think fast food may correlate more closely with rapidly growing suburbs – West Jordan, UT (Salt Lake City), North Las Vegas, NV (Las Vegas), and Ontario, Rialto and Menifee, CA (Inland Empire/Riverside/San Bernardino) all rank high on that score. Fast food prevalence may also correlate inversely with population. Of the twenty largest cities in the US, only four have fast food prevalence over the mean (20.15%): Phoenix (20.85%); Jacksonville (21.72%); Fort Worth (21.53%); Oklahoma City (21.77%).

There’s something of a snob factor going on as well. The nine cities I found with fewer than 10% fast food restaurants are:
San Francisco, CA; Seattle, WA; Portland, OR; Berkeley, CA; San Mateo, CA; Miami, FL; Oakland, CA; Pittsburgh, PA and Honolulu, Hawaii. Four of those cities are in the SF Bay Area, one of the wealthiest and most expensive parts of the country. Neither the Pacific Northwest nor Hawaii is cheap, either.

I’ve got tons more to do with this data. I’m fooling around with k-means clustering, trying to identify emerging patterns. I’ll make my code more efficient and expand this to the cities I am most in love with – the 50k to 100k cities – and see if the overall patterns change. Once I’ve fixed a few more data quality problems, I’ll release a spreadsheet or CSV of the data – if you’d like to play with it in the meantime, let me know.
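
(The k-means experiment looks something like this minimal scikit-learn sketch – the toy vectors and the choice of k=2 are placeholders for the real 41-dimensional data and a properly chosen number of clusters:)

```python
# Sketch: k-means over per-city category-share vectors.
# Toy 3-dimensional vectors and k=2 are placeholders; real inputs would be the
# 41-component probability vectors described above.

import numpy as np
from sklearn.cluster import KMeans

cities = ["Dearborn", "Quincy", "Sterling Heights", "Lowell"]
vectors = np.array([
    [0.05, 0.01, 0.08],   # e.g. Chinese, Brazilian, Middle Eastern shares (invented)
    [0.13, 0.02, 0.01],
    [0.03, 0.00, 0.08],
    [0.06, 0.04, 0.01],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for city, label in zip(cities, km.labels_):
    print(f"{city}: cluster {label}")
```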

For now, let me close with a set of top ten lists:



Highest prevalence of fast food:
Menifee, California 35.66%
West Jordan, Utah 35.07%
Rialto, California 32.77%
Ontario, California 32.72%
North Las Vegas, Nevada 32.55%
Fontana, California 32.35%
Independence, Missouri 31.85%
Victorville, California 31.72%
Wichita Falls, Texas 31.50%
Olathe, Kansas 31.44%

Highest prevalence of American restaurants:
Lincoln, Nebraska 28.29%
Surprise, Arizona 27.54%
Goodyear, Arizona 27.15%
Menifee, California 27.13%
Tuscaloosa, Alabama 26.09%
Lafayette, Louisiana 25.68%
Independence, Missouri 25.56%
Rio Rancho, New Mexico 25.53%
Billings, Montana 25.47%
Moreno Valley, California 25.43%

Highest prevalence of BBQ restaurants:
Shreveport, Louisiana 5.23%
Kansas City, Kansas 4.55%
Chattanooga, Tennessee 4.18%
Columbus, Georgia 4.05%
New Braunfels, Texas 4.01%
Huntsville, Alabama 3.94%
Honolulu, Hawaii 3.87%
Fayetteville, Arkansas 3.85%
Concord, North Carolina 3.84%
Beaumont, Texas 3.74%

Highest prevalence of Bar and Grill restaurants:
Davenport, Iowa 13.73%
Cedar Rapids, Iowa 9.90%
Sioux Falls, South Dakota 9.89%
Madison, Wisconsin 9.42%
Akron, Ohio 9.09%
Evansville, Indiana 8.96%
Manchester, New Hampshire 8.82%
Lee's Summit, Missouri 8.62%
Spokane Valley, Washington 8.60%
Omaha, Nebraska 8.59%

Highest prevalence of diners:
Yonkers, New York 4.30%
Lancaster, California 3.96%
Davenport, Iowa 3.87%
Hesperia, California 3.86%
Mobile, Alabama 3.74%
Rochester, New York 3.64%
Cape Coral, Florida 3.41%
Macon, Georgia 3.40%
Augusta, Georgia 3.37%
Jurupa Valley, California 3.37%

Highest prevalence of burger joints:
Rio Rancho, New Mexico 17.73%
Menifee, California 17.05%
Moreno Valley, California 14.74%
Jurupa Valley, California 13.94%
West Jordan, Utah 13.74%
Victorville, California 13.45%
Yuma, Arizona 13.36%
Fontana, California 12.75%
Nampa, Idaho 12.72%
Surprise, Arizona 12.68%

Highest prevalence of pizza parlors:
Hampton, Virginia 17.70%
Worcester, Massachusetts 17.31%
Lowell, Massachusetts 17.24%
Deltona, Florida 16.96%
Quincy, Massachusetts 16.80%
Newport News, Virginia 16.29%
Chesapeake, Virginia 15.95%
Lynn, Massachusetts 15.52%
Warren, Michigan 15.45%
Virginia Beach, Virginia 15.30%

Highest prevalence of steakhouses:
Billings, Montana 4.09%
Evansville, Indiana 3.54%
Tyler, Texas 3.34%
San Angelo, Texas 3.31%
McAllen, Texas 3.23%
Fort Wayne, Indiana 3.22%
Suffolk, Virginia 3.21%
Davenport, Iowa 3.17%
Shreveport, Louisiana 3.14%
Rockford, Illinois 3.09%

Highest prevalence of Afghan restaurants:
Concord, California 2.01%
Hayward, California 1.74%
Tracy, California 1.68%
El Cajon, California 1.26%
Elk Grove, California 1.18%
Fremont, California 1.01%
West Valley City, Utah 0.65%
Sacramento, California 0.55%
Kent, Washington 0.54%
Antioch, California 0.52%

Highest prevalence of African restaurants:
Newark, New Jersey 2.08%
Inglewood, California 1.85%
Minneapolis, Minnesota 1.62%
Fargo, North Dakota 1.58%
St. Paul, Minnesota 1.20%
Richmond, California 0.93%
Arlington, Texas 0.78%
Menifee, California 0.78%
Worcester, Massachusetts 0.77%
Providence, Rhode Island 0.72%

Highest prevalence of Brazilian restaurants:
Newark, New Jersey 3.96%
Lowell, Massachusetts 3.88%
Worcester, Massachusetts 3.27%
Richmond, California 2.80%
Carlsbad, California 2.33%
Coral Springs, Florida 2.26%
South Fulton, Georgia 2.22%
Huntington Beach, CA 2.18%
Brockton, Massachusetts 2.08%
Orlando, Florida 2.06%

Highest prevalence of Chinese restaurants:
Quincy, Massachusetts 12.70%
Bellevue, Washington 11.18%
Daly City, California 10.58%
Philadelphia, Pennsylvania 10.05%
San Mateo, California 9.92%
Fremont, California 9.92%
Sunnyvale, California 9.07%
San Francisco, California 9.05%
New York, New York 8.80%
El Monte, California 8.50%

Highest prevalence of Greek Restaurants:
Carmel, Indiana 2.33%
Boca Raton, Florida 1.82%
Alexandria, Virginia 1.75%
Tempe, Arizona 1.74%
Lee's Summit, Missouri 1.72%
Cincinnati, Ohio 1.71%
Salt Lake City, Utah 1.66%
Stamford, Connecticut 1.64%
High Point, North Carolina 1.64%
Manchester, New Hampshire 1.63%

Highest prevalence of Indian Restaurants:
Sunnyvale, California 16.95%
Fremont, California 13.77%
Irving, Texas 9.82%
Tracy, California 8.40%
Santa Clara, California 8.07%
Frisco, Texas 8.00%
Jersey City, New Jersey 7.36%
Cary, North Carolina 7.09%
Naperville, Illinois 6.34%
Bellevue, Washington 6.30%

Highest prevalence of Indonesian Restaurants:
West Covina, California 0.71%
Torrance, California 0.40%
El Monte, California 0.40%
Inglewood, California 0.37%
Albany, New York 0.30%
Sugar Land, Texas 0.27%
Round Rock, Texas 0.26%
Oceanside, California 0.23%
Rancho Cucamonga, CA 0.22%
Philadelphia, Pennsylvania 0.20%
(NB: Indonesian restaurants are quite uncommon in the US - that 0.3% in Albany, NY represents a single restaurant.)

Highest prevalence of Italian Restaurants:
Boca Raton, Florida 11.62%
Stamford, Connecticut 10.38%
Yonkers, New York 9.31%
Worcester, Massachusetts 9.04%
Scottsdale, Arizona 8.80%
Boston, Massachusetts 8.25%
Coral Springs, Florida 7.74%
New Haven, Connecticut 7.39%
Pompano Beach, Florida 7.38%
Palm Coast, Florida 6.76%

Highest prevalence of Japanese Restaurants:
Torrance, California 15.59%
Honolulu, Hawaii 15.32%
San Mateo, California 13.32%
Costa Mesa, California 12.85%
Berkeley, California 11.04%
Federal Way, Washington 9.34%
Bellevue, Washington 9.15%
Irvine, California 9.09%
Elk Grove, California 8.53%
San Francisco, California 8.34%

Highest prevalence of Korean Restaurants:
Carrollton, Texas 14.67%
Federal Way, Washington 12.45%
Santa Clara, California 8.74%
Garden Grove, California 8.20%
Irvine, California 7.75%
Fullerton, California 7.46%
Ann Arbor, Michigan 5.14%
Honolulu, Hawaii 5.13%
Killeen, Texas 4.40%
Torrance, California 4.25%

Highest prevalence of Lebanese Restaurants:
Dearborn, Michigan 4.73%
Sterling Heights, Michigan 2.08%
Miramar, Florida 1.44%
Toledo, Ohio 1.29%
Paterson, New Jersey 1.28%
Richardson, Texas 1.19%
Downey, California 1.07%
Anaheim, California 1.03%
Peoria, Illinois 0.95%
Lafayette, Louisiana 0.91%

Highest prevalence of Mediterranean Restaurants:
Glendale, California 8.08%
Burbank, California 5.97%
Richardson, Texas 5.93%
Sterling Heights, Michigan 5.88%
Dearborn, Michigan 5.68%
Plantation, Florida 4.12%
Irvine, California 3.43%
Pasadena, California 3.38%
Tempe, Arizona 3.36%
Warren, Michigan 3.33%

Highest prevalence of Mexican Restaurants:
Jurupa Valley, California 32.69%
Santa Ana, California 29.71%
Buckeye, Arizona 29.55%
Oxnard, California 28.91%
Pasadena, Texas 27.66%
Santa Maria, California 27.62%
El Monte, California 27.53%
Salinas, California 27.07%
Brownsville, Texas 26.51%
Laredo, Texas 26.46%

Highest prevalence of Middle Eastern Restaurants:
Sterling Heights, Michigan 7.96%
Dearborn, Michigan 7.57%
Glendale, California 5.27%
Paterson, New Jersey 4.04%
Anaheim, California 3.08%
Burbank, California 2.99%
El Cajon, California 2.93%
Warren, Michigan 2.73%
Richardson, Texas 2.57%
Plantation, Florida 2.47%

Highest prevalence of Ramen Bars:
Honolulu, Hawaii 2.45%
San Mateo, California 2.35%
Elk Grove, California 2.06%
Cambridge, Massachusetts 2.05%
Torrance, California 2.02%
Tempe, Arizona 2.01%
Fullerton, California 1.99%
Coral Springs, Florida 1.94%
Daly City, California 1.92%
Costa Mesa, California 1.89%

Highest prevalence of Spanish Restaurants:
Elizabeth, New Jersey 3.95%
Newark, New Jersey 2.40%
Lynn, Massachusetts 2.30%
Yonkers, New York 1.91%
Paterson, New Jersey 1.91%
Hialeah, Florida 1.86%
Rochester, New York 1.73%
Albany, New York 1.51%
Jersey City, New Jersey 1.45%
Worcester, Massachusetts 1.35%
NB: I strongly suspect that "Spanish" is shorthand for "Latin", and includes Puerto Rican, Dominican, Cuban etc., knowing some of these cities well.

Highest prevalence of Sushi Restaurants:
San Mateo, California 6.79%
Simi Valley, California 6.32%
Costa Mesa, California 5.67%
Roseville, California 5.58%
Coral Springs, Florida 5.48%
Boca Raton, Florida 5.24%
Honolulu, Hawaii 5.18%
Pembroke Pines, Florida 5.15%
Davie, Florida 5.06%
Berkeley, California 5.00%

Highest prevalence of Thai Restaurants:
St. Paul, Minnesota 5.67%
Portland, Oregon 4.97%
Amarillo, Texas 4.63%
Berkeley, California 4.58%
Vancouver, Washington 4.12%
Seattle, Washington 4.08%
Anchorage, Alaska 3.74%
Alexandria, Virginia 3.71%
Tacoma, Washington 3.57%
Lowell, Massachusetts 3.45%

Highest prevalence of Turkish Restaurants:
Paterson, New Jersey 2.77%
Plantation, Florida 1.65%
El Cajon, California 1.26%
Richardson, Texas 1.19%
Waterbury, Connecticut 0.99%
Daly City, California 0.96%
West Jordan, Utah 0.95%
Dearborn, Michigan 0.95%
Kent, Washington 0.82%
Bridgeport, Connecticut 0.76%

Highest prevalence of Vietnamese Restaurants:
Garden Grove, California 21.12%
San Jose, California 6.44%
Renton, Washington 5.71%
Federal Way, Washington 4.67%
Tacoma, Washington 4.46%
El Monte, California 4.45%
Kent, Washington 4.36%
Garland, Texas 4.18%
Everett, Washington 3.49%
Lowell, Massachusetts 3.45%

Finally, some grossly over simplified, aggregate statistics:

Highest prevalence of "domestic" cuisine, including "American", diners, pizza, burgers, bar and grills,steakhouses:

Davenport, Iowa 76.76%
Billings, Montana 75.16%
Goodyear, Arizona 74.66%
Evansville, Indiana 74.30%
Surprise, Arizona 73.92%
Springfield, Illinois 72.52%
Spokane Valley, WA 72.39%
Rio Rancho, New Mexico 71.64%
Suffolk, Virginia 71.55%
Lee's Summit, Missouri 71.11%

Highest prevalence of "international" cuisine, including all restaurants that mention a specific non-US nationality, plus sushi/ramen:

San Mateo, California 71.83%
Federal Way, Washington 68.86%
Sunnyvale, California 67.07%
Garden Grove, CA 67.00%
Santa Clara, California 62.77%
Berkeley, California 62.49%
Fremont, California 62.14%
San Francisco, CA 61.40%
Bellevue, Washington 61.39%
Costa Mesa, California 61.26%

So very many disclaimers apply:

– Some of this data is guaranteed to be wrong. Some will be wrong because Google’s knowledge of US restaurants is imperfect. Some will be wrong because my code got something wrong. I welcome your “that can’t possibly be right” comments, but won’t be fixing rankings or code in response to them.

– There’s a small set of cities with populations over 100,000 for which I had consistent problems getting accurate data. They’ve been removed from the data set. (They know who they are.) I think this is a Google Places API problem but remain open to the idea that it’s my particular stupidity.

– These categories don’t make sense! Aren’t all the pizza places Italian restaurants! What the hell’s the difference between a Lebanese, Middle Eastern and Mediterranean restaurant? Yep. This one’s on Google – those are the categories I have access to. I strongly suspect they overlap, and it’s not clear whether restaurants self-categorize or are somehow categorized into these buckets.

– Where are the Uighur, Burmese and Peruvian places? Again, blame Google and its categories. I, for one, would welcome an app that told me how many miles I am from Uighur food at all times, and how to alter my driving so I can detour to eat cumin lamb. But so far, this is what I have easy access to. My next version of the tool is going to search for specific terms – “Uighur”, “Xinjiang”, “Ughyur”, etc. – in hopes of identifying some of my favorite cuisines.

Lastly, the code that I used for this analysis was written almost solely by Google Gemini, which was an experience in and of itself. I’ll post about that at a later date.


April 24, 2025

AIxDemocracy: What are the politics of AI?

Some of my favorite UMass students have founded an organization called the Responsible Tech Coalition. It’s a group of graduate students who want to talk about “public interest technology”, the idea that technology can and should be designed with the public interest in mind. RTC organizes book clubs, brings speakers to campus and, yesterday, pulled off a terrific day-long conference called “AIxDemocracy”. I was honored to be the closing speaker, and took the occasion to try out some ideas I’m wrestling with around whether AI has an embedded politics within it, and what control we might have over these tendencies.

Sometimes I give talks because I know what I think and am working to convey it as well as I can – other times, I’m giving talks to figure out what I think. This is one of those second talks, and I am very much open to the ideas that a) I’m flat-out wrong about how I’m characterizing some tendencies of AI systems or b) that the ability to iterate and fix AI systems goes a very long way towards counterbalancing my concerns. So first, more or less what I said yesterday, followed by some reflections on feedback I got from friends during the conference and after my talk.

My coping strategy for digesting the waves of news about attacks on democracy by the Trump administration has been thinking about how I’m going to teach my fall class here at UMass. The class is called “Defending Democracy in a Digital World”, and I’d like to point out that I haven’t recently renamed it – democracy is inherently a fragile thing and perpetually needs defending.

My class is cross-listed in computer science, communication and public policy, but really it’s a history class. Specifically, it’s a history of how democracies have shaped and been shaped by waves of communication technologies. Democracy depends on a public sphere, the space in which people learn what’s going on in the world, debate what we as citizens and voters should do about it, and organize to take action. Over the course of millennia, we’ve moved the public sphere from a physical space – the Greek agora – to conceptual spaces of information. And it’s in this context that we need to think about democracy and AI.

When Athenians invented democracy, the communications technology was speech, and democratic debate included whoever made their way into the agora, a public space that was simultaneously holy ground, a commercial center, and a space to be entertained by performers or elevated by lectures from philosophers. (In our class, we use Astra Taylor’s Democracy May Not Exist, but We’ll Miss It When It’s Gone to talk about Athenian democracy.)

When colonial Americans sought to reinvent democracy for a new nation, they were confronted with a challenge – the American colonies spanned a thousand miles from Boston to Charleston – there was no physical space in which a citizenry could come together and collectively debate the matters of the day. They built an unprecedented infrastructure for democracy: the postal system. It was designed to be universal, serving the cities and the rural areas, and to be ideologically neutral, carrying newspapers and pamphlets from all political perspectives. The goal of the system was to make newspapers extremely cheap, allowing families in far-flung, lawless frontiers – you know, like Vermont – to know what was happening in the nation’s capital… and the scale of the system was massive. In 1830, 75% of US government jobs, outside of the military, were in the post office. The early US was basically a postal system with a small standing army attached to it. (Here we use Paul Starr’s Creation of the Media and Winifred Gallagher’s How the Post Office Created America)

That version of democracy was partisan and fractious – political parties descended from newspapers, not the other way around. It wasn’t until the next technological revolution – the penny press – that we got the idea of a neutral press. It wasn’t an ideological movement, but a commercial one. Newspaper moguls like Pulitzer and Hearst realized that they could make more money if they sold newspapers in the morning and the evening to both Democrats and Republicans. Cheap print made newspapers accessible to much broader audiences, and we saw the press serving immigrants with papers in their native languages, and content aimed at women and children, which brought remarkable women like Nellie Bly into the newsroom.

The penny press expanded the range of voices that could participate in democratic dialog. The broadcast age pushed in a different direction: the technical constraints of the medium meant that only a very few voices could be delivered to massive audiences. The intimacy of this medium – a stranger’s voice delivered into the sanctum of the home – changed Americans’ relationships to their leaders, particularly to FDR, who used his “fireside chats” to establish a personal, parasocial relationship with millions of citizens… giving him four terms in office. But the power of a single, personal voice was part of the formula for propaganda: persuasive if misleading speech used to shape public opinion. As Americans began to understand the power of radio and film in the rise of fascism in Europe, we saw efforts to ensure “fairness” in broadcasting, a rough balance between perspectives from the two political parties.

We are now roughly thirty years into the next act of technology and democracy, the age of the internet. It’s a vastly more open and participatory age than the broadcast age, for good and for ill. The idea that anyone can command an audience online has allowed social movements like Black Lives Matter and Me Too to gain currency and power… and it’s also allowed figures like Joe Rogan to become powerbrokers. The explosion of information has greatly democratized knowledge, but it’s also allowed people to select their own facts, leading to a very real concern that political divisions in the US are no longer about differing interpretations of a common reality, but between irreconcilable realities themselves.

Let’s posit for a moment that the next age is unfolding, the age of AI. What might we expect a public sphere transformed by AI to mean for democracy?

I'm going to constrain that question by embracing some language proposed by Arvind Narayanan and Sayash Kapoor at Princeton, the authors of an excellent book called AI Snake Oil. Their book is not nearly as hostile towards AI as the title might suggest – it's helpful in understanding why some areas of AI, like image generation, are developing so quickly, while others, like prediction of uncommon events, are making little if any progress. Arvind and Sayash released a paper last week called "AI as Normal Technology", which is simultaneously a description of AI as it exists now, a prediction of how it will evolve in the near future, and a proposal for how to regulate and live with it.

Their core idea is that while AI may be important and transformative – they offer comparisons to electricity and the Internet as similarly transformative general-purpose technologies – it's not magic. They dismiss both the scenario where artificial general intelligence makes most human jobs obsolete and necessitates universal basic income and the scenario where superintelligent AIs unleash killer robots to exterminate the planet's population as unlikely, and as worthy of less consideration than a scenario where AI is important, but ultimately just another technology.

What does the future of AI and democracy look like if you take scenarios that are fun to think about, but unlikely to happen, off the table? It might look a little like an experience I had last Thursday – I was moderating a panel at the Museum of Science called "Democracy is a Disability Issue". My panelists included Kim Charlson, the librarian at the Perkins School for the Blind, who is herself blind, and the CIO of the city of Boston, who has the challenge of designing information systems for the city that are accessible to residents with a wide range of disabilities. I expected to hear about 3D printing and braille everywhere, but we mostly talked about ordinary AI: Santi Garces, the CIO, uses generative AI to summarize the formal, legalistic minutes from City Council meetings into 20-word descriptions, which are designed to be easily delivered by screen readers. This is not just good for visually impaired people – the summaries are vastly more popular as a way of understanding what the council is doing, even for sighted readers. When I asked him about the challenge of making data visualizations into something blind citizens can experience, he explained that his team is now experimenting with chatbots that can help users ask questions about government data sets – and shared his prediction that these will be wildly popular outside the disability community as well. And as we were having this conversation, it was transcribed nearly flawlessly by Zoom's AI… which could also be redirected to a handheld braille device that Kim the librarian could read. That's ordinary AI, and it's already pretty damned impressive when we think about the simple but profound problem of ensuring that democracies include the voices of all citizens.

I think the technologies that Eric Gordon has been talking about also fall within the realm of AI as normal technology. One of the hardest problems of democracy since its inception is the problem of listening at scale. From listening to those voices in the agora, to thousands of citizens crashing the Congressional phone system, ensuring that every voice in a democracy is heard has been an unsolved problem. Many of the systems we associate with democracies – voting, polling, petitions, the structure of representation itself – are technologies designed to enable listening at scale. (NB: Eric spoke at the conference, and is finishing a new book for MIT Press on AI and civic listening. I had the honor of reading the proposal and offering some thoughts on it – it’s one of the books I’ve been most looking forward to seeing in print.)

AI promises that we might listen to everyone: the callers to congressional offices can have their opinions summarized into briefings for their representatives. We can listen to people speaking on social media and translate those disparate voices into a complex tapestry of frustrations and hopes. We can seek out the solutions people are proposing and bring the best of them to leaders who might implement them.

But there's at least as much reason for caution. If AI is good at digesting speech, it's at least as good at producing it. We are already drowning in a sea of voices, and we are still figuring out whether we should ignore the voices of machines as inauthentic speech, or accept the potential of these tools to allow those marginalized by language, disability or education to participate fully in our debates as citizens.

Assuming that we’re dealing with AI as an ordinary, normal technology like the ones that have previously transformed the public square, should we be optimists or pessimists about what AI will do to democracy? I am nervous, because I am starting to believe that AIs are inherently conservative technologies.

(And I've gotten a LOT of pushback on that term. To be clear, I mean it not in a Democrats/Republicans way, but in the way that conservatives warn that we lose essential aspects of our culture and character if we move too far from our own history. Unfortunately, our history often includes things we are, in retrospect, happy to have moved beyond. It's that attachment to the past, even when it's problematic, that I am trying to evoke with the term "conservative".)

AIs extrapolate from training data – this process is the same whether they are generating new sentences from millions of texts scraped from the internet, or predicting whether an individual granted bail will reoffend if released. In both cases, systems are likely to inherit the biases of the data they are trained upon. Florida's notorious COMPAS system identified proportionately more Black defendants as likely to be rearrested than their white counterparts, in large part because Florida's justice system was more likely to arrest Black people than white people – a system trained on racial injustice replicated those biases in code.

We're going through a period where we're starting to hear more voices of people of color, of queer people, indigenous people, people from the Global South as they gain the tools, skills and time to make their voices heard online. But we're training our AIs on the documents that have been put online so far… including the pirated books and scientific papers in LibGen that Facebook has gulped up to feed their "open" models. The corpus these AIs are trained on is disproportionately filled with documents written by people like me – aging white dudes who post too much content on the internet. It's likely that biases within the text we write get reflected in the text that LLMs generate. You've heard of the em dash – commonly used in academic literature – being called "the ChatGPT dash". Basically, a literary convention that was going out of style re-emerges because it's baked into the language model.

You probably shouldn't be too worried about the em dash. But the situation gets a bit more concerning when you consider pronouns. In an effort to include non-binary people, "they/them/their" is becoming a more common generic pronoun than "he or she", much like "he or she" replaced "he/him/his" as a pronoun for a person of unknown gender. But singular "they" is only just making it into the corpus – as we generate text, does our language get pulled back to "he or she"? What other linguistic regressions are we going to find if LLMs take our language back to the authorship of the texts they've been trained on? What other biases creep into the texts we're authoring, the images we're generating, the decisions we are making? (In the talk, I used some of Mark Graham's visualizations of contributions to Wikipedia, in which both contributors and content from the Global South are not well represented.)

Perhaps more worrisome is the concentration of power that contemporary AI systems seem to be bringing about. The AIs we currently know how to build require a lot of resources. You need vast sets of data and enormous sets of machines to process it. Many of the companies leading the field are huge, and a few are companies so large – Meta, Google – that the US government is investigating breaking them up.

We've seen that social media platforms and search engines, two of the dominant technologies of the internet age, give power to influence speech to whoever controls them. When Elon Musk decides to express an opinion – that we should all support Alternative für Deutschland – everyone who uses X as a medium for political discussion gets to hear that opinion. What happens when platforms that deliver information confidently, with only the flimsiest of disclaimers that we should be careful in assuming their information is true, start influencing our political discussions?

This is a good question, because this is the road we seem to be on, and it looks like a road that favors conservative plutocracy, a flavor of democracy we’ve become all too familiar with.

And so it's a good time to remind ourselves of the idea of "technodeterminism". This is the idea that technologies follow their own internal logics and bring about unavoidable changes in the world. The good news is that the history of technology and democracy suggests that we have a lot of choice. The world got radio in 1912, and different societies used it in radically different ways. In the Soviet Union and in Nazi Germany, it was a tool for propaganda. In the UK, a public broadcaster turned radio into a powerful tool for diversity, preserving local languages, providing news from around the world and anchoring political debates in a common set of facts. We get a choice as societies over how these technologies get used. But we have to choose quickly – it took about ten years before the basic models of how radio would unfold in the US, the UK and the Soviet Union were set in place, and the long tail of those decisions is with us still. (For lots more about this, my piece The Case for Digital Public Infrastructure is one place to start.)

Even harder, our intuitions about how technologies will shape societies are not always right. There are a lot of people – myself included – who believed that the internet would diffuse power and flatten hierarchies, giving us greater exposure to marginalized voices and making concentrated capital less influential. There were even good reasons to believe all those things, all of which turned out to be wrong. And so I don't want to confidently declare that AI is inherently conservative or plutocratic, because I can imagine scenarios in which the opposite proves to be the case.

What I do feel confident saying is that conversations like the one we're having today are worth our time. If AI shifts our technological landscape – and it appears that it will – it will likely shift our democratic landscape in ways that are significant, hard to predict, and yet still open to our influence. Articulating the relationship we want between AI and democracy doesn't guarantee that we will get it – failing to consider the relationship all but guarantees we will get a relationship we do not want.

My colleague Yuriy Brun spoke earlier in the conference and gave me a great deal to think about on the question of AI systems and bias. One of the many excellent points he made is that it's easier to iterate with AI systems than it is with human systems: i.e., if both human judges and algorithms like COMPAS are biased against Black defendants, at least with an AI system, you can run multiple versions of the algorithm, tweak them to address biases and see if you can get a more just system. I think that's both right and interesting: I am curious whether some systems are more easily de-biased than others. My fear is that systems where we simply do not have data – what would language models look like with full Global South participation? – may be very hard to debias.

So perhaps my fear is that AIs – unless we carefully and consciously address biases in the ways Yuriy suggests – tend to embed in code existing societal biases. Eric Gordon pushed back on that after I gave the talk, pointing out that it seems to contradict my point that hard technodeterminism is not true – radio has turned out very differently in the UK, US and in totalitarian countries. My intuition is that technologies do have political influences that are tied to their specific affordances, and that we have a (relatively brief) period of influence when technologies are introduced to steer them towards the social values we want them to embed.

Looking forward to other pushback or reflection on this set of ideas as well. And ever so grateful to RTC at UMass and everyone else who made this excellent gathering possible.


April 4, 2025

Science on Screen: Don’t Look Up

Science on Screen is a partnership between the Coolidge Corner Theatre in Boston and the Alfred P. Sloan Foundation to pair classic, cult and current films with talks by local academics to bring together science literacy and cinema. Amherst Cinema is a long-time participant in the series and, this year, they invited me to come give a talk about mis/disinformation. After some debate – they proposed talking about The Social Network, a film I don’t love – we settled on Don’t Look Up, a film that deserves more attention than it got, as it was released just as the world was reopening from the pandemic.

I thought I was going to talk about the climate science message of Don't Look Up and the challenges of scientific communication in our media moment. But the last 75 days or so happened, and other details of the film seemed worth emphasizing, including the ways in which this film, written late last decade, anticipated some of the stupidest moments of our current predicament.

I'm attaching my remarks because a) I probably didn't give as precise a version of this talk as I meant to, b) I worked hard on them, and c) they're in dialog with a post from last week, where I tried to take on the idea of "post-truth" politics. I suspect there's at least one more essay in this series, likely about authenticity, which I'm still sharpening my thinking about.

I had seen Don't Look Up twice when I wrote these notes – once shortly after its online premiere and a second time a few months ago, as I started drafting the talk. In those watchings, I found the film preachy and smug, self-satisfied with pointing out how dumb everyone's reaction to impending catastrophe was. It was a different experience watching it with a crowd in a theater. Not only is it a visually powerful film (and Amherst Cinema has terrific facilities – it looked and sounded great), but it's a particular type of funny that benefits from a room laughing along with it. Alone, it's a bit too sharp, a bit too real, a bit too depressing – in a group, you get the experience of sharing the absurdity, the feeling that all you can do is laugh to keep from crying. It's a film that deserved a theatrical run – I suspect it would be better remembered and appreciated if we'd seen it that way.

Hi everyone. I’m very pleased to be with you tonight. We scheduled films for this series back in the fall, so perhaps it was a lucky coincidence that we are able to present a highly political apocalypse comedy… or perhaps a dark foreshadowing. I will say that I’ve written this introductory talk and ripped it up at least five times this past month as current events have demanded.

Don't Look Up is the most recent film by Adam McKay, a comedy writer who got his start on Saturday Night Live and who wrote and directed some of Will Ferrell's best known work, including Anchorman and Talladega Nights. He's not necessarily who you'd expect to create an epic-length social satire about impending apocalypse, but McKay made a turn towards more serious fare, adapting and directing Michael Lewis's The Big Short about the financial crisis of 2008, and a cutting portrait of Dick Cheney, called Vice, released in 2018.

Don’t Look Up is not a film about the COVID pandemic, but it was profoundly shaped by the pandemic. It was announced in late 2019, but filming was disrupted by the pandemic, and ended up being completed between November 2020 and February 2021, a time during which vaccines were not widely available and most of the world was on lockdown. When it was released in December 2021, lots of us were wearing masks in public, avoiding restaurants and wondering whether the world would ever return to “normal”.

The film had originally been scheduled for distribution by Paramount – it was sold to Netflix, released on Christmas Eve 2021, and became the second-most watched film on Netflix, with 360 million watch hours. (This is not necessarily a great indicator of quality, as it's slightly outpaced by the Dwayne Johnson comedy Red Notice.) If you're wondering why you didn't see it in the theater, that's why – this was a film most viewers experienced in their home, and it's not hard to see the political divisiveness portrayed in the film as a commentary on the battles erupting around the US about mask wearing, vaccine mandates, school closures and everything else we argued about early in the Biden presidency.

In fact, one of the major challenges with Don't Look Up is that it reads as a little too on the nose. This is a story about the failure of institutions – all institutions, from the media to the government to the university. And spoiler alert – it's an apocalypse film without a happy ending. But when you discover that the main reason humanity fails to stop the apocalypse is a neurodivergent tech billionaire obsessed with humanity's colonization of space who's got a disturbing amount of influence on the US president, I just want you to remember that this was written in 2019, at a moment when people bought Teslas and weren't especially worried about the political message they were sending by doing so.

I got invited to speak about Don’t Look Up because I study the relationship between media and democracy. Specifically, I teach a course at UMass in the fall called “Defending Democracy in a Digital World” and one in the spring called “Fixing Social Media” – in other words, I believe pretty strongly that media as a whole, and social media in particular, have a lot to do with our current moment in politics and the dissatisfactions we’re experiencing with democracy.

And there’s a way of reading this film that aligns with a popular and widespread critique of American media in particular put forward by a media scholar named Neil Postman in a book called “Amusing Ourselves to Death”.

Postman wrote his book – subtitled "Public Discourse in the Age of Show Business" – as an extension of remarks he made about George Orwell's "1984" at a panel at the Frankfurt Book Fair in 1984. He warned his readers that the danger for society was less the totalitarian surveillance state Orwell warned of, and more the Aldous Huxley "Brave New World" scenario in which we self-medicate ourselves into passivity and bliss, forgetting our responsibilities as citizens. Postman's book argues that rational argument is possible in the medium of the printed word, but not on television, where time constraints and particularly the emotive power of the image make rational discourse impossible.

And that's certainly one of the threads in this film. Our protagonists have a message – rooted in scientific discovery – about an impending catastrophe, and they lose the attention war, again and again, to the stupidest conceivable celebrity news and blatant political maneuvering. It's a theme of the movie that scientists don't know how to communicate – there are multiple references to getting media training for the scientists – but when one of them does learn how to reach a broad audience, he loses track of the message he was trying to communicate in the first place.

The message is that our media is so dumb – and has made our citizenry so dumb – that we can’t possibly respond to an existential threat… which is a thesis Postman would likely have endorsed.

There are at least two problems with the Postman thesis.

Leo DiCaprio in a suit, holding a drink, smiling and looking smug

First, it encourages us to feel pretty smug. Because many of us read the New York Times, live in a college town and attend academic lectures, we can conclude that we're not part of the problem – we're not afraid to look up! Many critics complained that this was a smug film: it invites you to side with the rational scientists against an irrational world, and it may be satisfying to watch them fail because the world is impossibly stupid… but it's not very helpful, if our goal is to figure out how to motivate people to take action and change the world.

(Just as an aside, for those of you not neck deep in internet meme culture, this is the “smug Leo” meme. There are at least three images of Leo DiCaprio, star of Don’t Look Up, that are routinely circulated to illustrate smugness, though oddly none associated with his role in this film.)

Second, the Postman thesis is forty years old. He published the book during the Reagan administration, a moment when Reagan's telegeneity appeared to give him immunity to otherwise outrageous scandals like Iran-Contra. At that moment, the enemy of rational thought was the sound bite – the reduction of complex issues into short, pithy statements:
“We did not trade arms for hostages” followed by “My heart and my best intentions tell me that’s true, but the facts and evidence tell me it is not.”

By the 2010s, we had a different threat to the information landscape. Mis and disinformation: Britain’s exit from the EU and the election of Donald Trump, both in 2016, suggested that something deeply unpredictable was happening with politics around the world. Pundits and scholars looked for an explanation for why voters had behaved so irrationally and came up with a likely suspect: fake news.

That's the term that was initially chosen – it referred to news stories that were entirely fictitious, made up by college students in North Macedonia to gain clicks and earn ad revenue on Facebook. Craig Silverman, the reporter who began interviewing these young entrepreneurs, found that they'd tried other fictions, including fake sports news and fake left-leaning news. What worked best was news that slandered Hillary Clinton and supported Donald Trump. Within weeks, Trump adopted the term "fake news" and twisted it, in a way that would have made Orwell proud, to mean "news I don't like". So the academic community chose a new set of terms: mis and disinformation, or if you're really in the know, "information disorder".

The logic behind all these phrases is the same: voters have been misled by inaccurate information, either spread intentionally (disinformation) or inadvertently (misinformation). Because voters were misinformed, they made bad decisions. We should respond by funding fact checking services and requiring social media platforms to check the information they spread.

There’s something comforting about this view of the world: if we could prevent information disorder, by strengthening high quality journalism, requiring platforms to take down disordered information, we’d return to a world where we had a single set of facts in common and people made sensible political decisions. And again, this is a pretty comfortable worldview if you’re someone who, like me, is a member of the expert class, people who are well-informed for a living.

But correcting mis and disinfo hasn’t gone very far in improving America’s political culture or healing divisions. If anything, it has exacerbated divisions and led to accusations of political censorship on social media platforms. It’s not unreasonable to argue that democracy is difficult or impossible if we can’t all agree on a common set of facts. But it’s also possible to see that being told that your facts are wrong and can’t be spoken can feel Orwellian.

As the pandemic wore on, we saw parents who believed that closing schools was causing their children learning loss and emotional damage arguing with other parents who feared for the safety of those who were immunocompromised or otherwise vulnerable to a deadly disease. This is the sort of debate a healthy society should be able to have, and that this debate became so vituperative might reflect some of the deep dysfunction in American society we're experiencing now.

One aspect of the COVID debate I want to point out that's become central to this moment in media and politics is the rise of "authenticity". The huge difference between the media world Postman talked about and the world we live in today is the fact that billions of people aren't just consuming and interpreting media – we are producing it as well. During COVID we saw a surge of people producing media of all sorts, documenting their struggles to adapt to life in this new reality and to process their fears. These media, particularly short video like TikTok and long-form audio like podcasts, operate around intimacy, the idea that you're seeing an unvarnished, authentic self.

And like any change in a media landscape, it quickly gets harnessed towards political ends.

You might think of the 2024 presidential election as a choice between authentically stupid ideas and an inauthentic presentation of a mediocre status quo.

Authenticity won, and friends of mine who are Democratic party insiders tell me that they’re scared to talk to voters because they know that Democratic politicians don’t know how to connect to audiences that are turning to Barstool Sports and Joe Rogan as their guides to political and social issues.

Screengrab from Don't Look Up - Jennifer Lawrence's character has had her face photoshopped into a smear

Don't Look Up sees authenticity as weakness. You'll see moments where both Jennifer Lawrence and Leo DiCaprio's characters can't play the Reagan-era game of sound bites because their emotions are too strong, and the film punishes them for it, in one case leading to an abduction by government agents that takes on a whole new resonance in the last month or so, with ICE abducting international students to punish them for their speech.

McKay sees these moments of authenticity as a sign that these characters don’t know how to play a rigged game. But he’s wrong – one of the reasons authenticity is so powerful in media and politics is that we’re living in authentically terrifying times. Whether you’re terrified of climate change, of a housing crisis and the loss of a path to prosperity for most Americans, or the collapse of democracy, there’s ample reason right now to feel like institutions of all sorts are failing us and that an authentic cry of anguish or terror is a reasonable response.

McKay’s movie uses a 1980s theory of change to take on the problems of 2020. In Neil Postman’s world of Amusing Ourselves to Death, the most powerful figures are the global celebrities who dominate the market for global attention. In that world, a film starring the panoply of acting royalty this film features would have been guaranteed an audience and some award wins. Instead, it’s a 56% on Rotten Tomatoes with a few Academy Award nominations and no wins, and we’re watching it in a series that invites us to reconsider cult films.

But there’s something Don’t Look Up gets right. Many of our institutions are failing us. The experts don’t know what to do. The feelings of terror and helplessness lead people to behave strangely and unpredictably. McKay doesn’t offer us a solution to an ongoing apocalypse, but this film, at its best, offers us a moment of empathy. Look past the machinations of the politicians and the media stars, and look towards the people in the background trying to make sense of a reality that no longer makes sense, disrupted by COVID or climate change or by a slide into authoritarianism. If there’s a solution to this polycrisis, this interconnected mess of apocalyptic threats, it’s in the groups working to live through this scary moment, not through the politicians or celebrities who’ll get to escape to their bunkers or spaceships.

Thanks for listening, and enjoy the apocalypse.


March 27, 2025

“Post-post-truth”, or how the SignalGate story is forking reality

Three days ago, the editor of The Atlantic, a major American news magazine I occasionally write for, revealed that he was inadvertently added to a group chat on the encrypted messaging app Signal, in which plans to bomb Houthi rebels in Yemen were discussed. Two days ago, the government officials who participated in that chat lined up in front of Congress to say “nothing to see here”, arguing that these obviously sensitive conversations were not classified and that the Atlantic was just trying to stir up trouble.

So yesterday, the Atlantic released an even less redacted version of the conversations, still protecting the identity of a CIA officer in the group chat, and we are now seeing that Defense Secretary Pete Hegseth sent the timeline and targets for the attack to a group that included a journalist, and might as well have included any foreign intelligence service that has managed to hack any one of the civilian phones that received this data. (If you think that's unlikely, consider this piece of reporting from Der Spiegel, where they quickly retrieved usernames, phone numbers and likely passwords for senior American officials implicated in the leak.)

One of the problems of writing about the Trump administration is the "flood the zone" strategy. If you write about your despair at academic funding cuts, you don't take the time to write about the chilling abduction of an international student for writing an innocuous op-ed. The Signal scandal forces you to navigate "flood the zone" within a single incident – if you write about one absurdity, you run the risk of ignoring the others. So, to begin, let me note that:

1) Including a national security reporter in an exchange of pending war plans is an inconceivably huge fuck up.

2) Holding a meeting about war plans using Signal, rather than special purpose secure comms, is unwise, against all protocols and possibly a violation of the Espionage Act.

3) Using Signal disappearing messages to discuss government business – particularly presidential-level business – is a violation of government records law.

But what I’m focused on today is the ways in which this story may be an unfolding case study about the forking of reality.

Early polling indicates that many Americans, including a majority of Republicans polled, considered the Signal "leak" to be a serious or very serious problem. Those figures may be significant, because a larger share of Americans saw the Signal leak as serious than saw other information scandals as serious when those stories first broke: the Clinton email server, the Trump secret documents or the Biden secret documents.

The Trump administration is not backing down from its preferred strategy: deny everything and demand Republicans get in line. Rather than acknowledging the severity of the screw up, Republican officials, led by the President, are smearing Atlantic editor Jeffrey Goldberg, denying that the documents were a "war plan" and terming them an "attack plan", insisting that no classified information was shared, and speculating that Goldberg had somehow hacked his way into the group and that Signal's security was somehow broken.

On the surface, this seems absurd: acknowledging that this was a stupid mistake and demanding consequences for officials who breached security protocols seems easier than demanding that Americans accept this sloppy train of denials. But the practice of denying reality and demanding that we come along has worked well for Trump thus far, and SignalGate may offer a case study in how reality splits in two.

* * *

In 2004, Ron Suskind of the New York Times reported that a George W. Bush administration official accused him of being part of the "reality-based community", and that now that the US was an empire, "…when we act, we create our own reality." This was a deeply prescient statement, and arguably opened the era of "post-truth politics".

Politicians have always lied and exaggerated, and especially after Watergate, journalists have worked to confront them with the falsity of their statements. Projects like PolitiFact, launched in 2007 at the St. Petersburg Times (now the Tampa Bay Times), turned factchecking into an attention-grabbing feature, labeling egregious claims as "Pants on Fire" with accompanying graphics. PolitiFact won the Pulitzer for reporting in 2009, and has been praised by scholars for its accuracy, though journalism scholar Dan Kennedy points out that there are only a limited number of claims that can simply be judged true or false.

There are many complaints that PolitiFact cites Republicans for lying far more often than Democrats. Recent research suggests that populist right politicians in Europe are more likely to lie than left politicians, or non-populist right politicians – the authors suggest that lying may be a successful rhetorical strategy and needs to be examined as a form of political speech, not just as an aberration.

The rise of fact-checking in the late 2000s set the stage for the rise of misinformation research in 2016. In that remarkable year, political pundits were stunned by Britain’s exit from the European Union and the US election of Donald Trump, two developments that had seemed extremely unlikely up until the moment that they came to pass. Journalists and scholars took an interest in “fake news” – stories invented by North Macedonian teenagers in order to make money from ad views – pivoting to the terms “misinformation” and “disinformation” when President Trump began using “fake news” to mean “news he didn’t like”.

Smart and talented misinformation researchers began tracking rumors as they spread online, offering factchecks, documenting patterns of misinformation spread and providing social media platforms with tools they would need to combat the spread of information disorder. In return, they got hauled in front of Congress, and doxxed and harassed by Trump supporters and vaccine skeptics who believed their work was biased and political. Platforms took their recommendations to heart during the pandemic, then reversed course, apologizing for exerting control over content and turning to methods of content moderation that turn sensemaking over to community volunteers and bridging algorithms.

Work on mis/disinformation has helped expose the involvement of foreign adversaries in election interference, and may well have saved lives during the COVID-19 pandemic. (If someone has a good academic citation for a study that looked at a relationship between information controls and COVID-connected deaths, I'd love to see it – my cursory look found a lot of papers asserting the importance of fighting misinfo, but little evaluative data that looked at mortality.)

But it has emphatically not helped unite all Americans in a coherent, consistent information universe. Instead, PRRI’s post-2024 poll of election voters found that 63% of Republicans believed that the 2020 election had been stolen from Donald Trump, while only 4% of Democrats agreed. (31% of all voters and 26% of independent voters believe the election was stolen.)

Given waves of court cases, investigations and oceans of physical and digital ink written on the topic, we might have expected opinions about the 2020 election to have changed significantly. They haven't. If anything, they're moving in the opposite direction. A Washington Post/University of Maryland poll in 2021 found that 69% of voters thought the 2020 election was legitimate. In 2023, the same team used a similar method and found that 62% thought the election was legitimate – in other words, years of investigations and reporting didn't increase confidence in the election, but accompanied a seven-point decrease in confidence.

Results like this are not just discouraging for factcheckers and mis/disinfo researchers. They've led some commentators to talk about a "post-truth" moment in politics. A brilliant essay by the philosopher Michael Hannon – The Politics of Post-Truth – begins with an array of references to "an epistemological crisis" offered by thinkers from David Brooks to Barack Obama. For those of us unable to agree on what is true, philosopher Julian Baggini offers consolations and strategies for how we might navigate our "post-truth" world.

Hannon is rightly suspicious of the term “post-truth” – the term is invariably used as an insult. Someone who believes the earth is flat does not believe that there is no truth and that truth is irrelevant: they believe something different than I do because they choose to believe different sources of authority than I do. When we move beyond the insult of post-truth, we can get to the interesting questions of why someone chooses different sources of authority and, as a result, a different set of beliefs.

Some very clever friends have offered clues to solving this problem. Renee DiResta, veteran of the mis/disinfo mines, offers the idea of “bespoke realities”, composed from the floods of information available online, driven by algorithms designed to maximize attention and engagement towards commercial ends. Rather than “manufacturing consent”, as Lippmann both celebrated and warned about in 1922, through assembling an authoritative consensus reality, we end up in mutually incompatible realities that make it challenging to acknowledge the possibility or accuracy of other points of view and the actions they might dictate.

Political scientist Henry Farrell adds the key insight that information disorder is not an individual but a collective problem: "The fundamental problem, as I see it, is not that social media misinforms individuals about what is true or untrue but that it creates publics with malformed collective understandings." Rather than ending up in purely individual bubbles, our bubbles cluster into publics whose worldviews are compatible enough to interpret events and debate actions together. Belief in the worldviews held within these publics may not be as simple as the belief that water is wet, to use Henry's example – some of these beliefs are "reflective beliefs", things you're supposed to believe because you are a Republican or a Democrat.

The big problem for Farrell is not thinking as a collective – that’s inevitable in a field as complex and information-rich as politics – but the fact that the technologies we use to connect as publics have powerful systemic biases. Twitter/X doesn’t develop a consensus from pro-Trump conservatives so much as it amplifies the ideas and obsessions of Elon Musk. (In a long and brilliant analogy to how online pornography sites have preferences shaped by people who pay for online porn, not necessarily those who consume online porn, Farrell observes, “…X/Twitter is a Pornhub where everything is twisted around the particular kinks of a specific, and visibly disturbed individual.”)

A key piece of the puzzle fell into place for me when I blogged Jay Rosen’s conversation with Taylor Owen at the “Freedom, Interrupted” conference in Montreal two weeks back. Jay noted that the “Big Lie” – the belief that Biden somehow stole the 2020 election – has become a litmus test for service in the Trump administration and, arguably, for support of the contemporary Republican party. Jay suggests that this shared belief creates a sort of camaraderie between participants, much as those engaged in a criminal enterprise might feel linked together by their shared culpability and vulnerability to arrest.

Building on thoughts of these four thinkers, I think this “post-post-truth” moment comes about when two conditions are true:

1) We have a vast array of information offering a variety of different perspectives and interpretations and

2) We’ve lost confidence in some or all institutions and the information systems associated with them.

The rise of the consumer internet and participatory media has brought about the first condition. My 2021 book Mistrust argued that mistrust in institutions of all sorts has been on the rise in the US since the 1970s, and that this mistrust reflects the genuine failure of institutions in society. Mistrust of institutions now represents a default stance for many young people, who are more likely to trust individuals, particularly anti-institutionalist individuals, than institutions.

I think there may be a third part of the puzzle: a trigger. When an important part of your belief system comes into conflict with “consensus reality”, you’re likely to look for information that supports your beliefs. This might be a parent struggling to understand their child’s autism diagnosis and turning to “research” that leads them to vaccine skepticism, or a romantic rejection that sends someone towards the “men’s rights” movement.

As you explore beliefs that align with your experiences or your understanding, this new belief system tends to chafe against aspects of consensus reality. What results is cognitive dissonance: the mental discomfort that comes from holding conflicting beliefs. The belief that vaccination is a corrupt conspiracy to enrich the pharmaceutical industry starts rubbing up against an existing belief that government regulation is generally well intentioned and benign. That conflict is uncomfortable, and one way to alleviate it is to research and find information that claims government regulation is generally overbearing and designed to serve corporate interests and not the people. You'll find this information easily, and enough of that information will come from left-leaning points of view that if you found your way into anti-vaccine beliefs from the political left, it could be smooth sailing into an RFK Jr.-like belief system.

Not every belief that conflicts with consensus reality will cause reality to fork. I manage to read the New York Times, wincing only every fourth article, despite my haunting suspicion that Jeffrey Epstein’s death was not a suicide. Certain beliefs, however, seem designed to make reality fork and the Signal incident is likely to be one of these. Experts from previous governments, the defense establishment and the intelligence community are lining up to shout “this is not normal or safe behavior!” Accepting the Trump administration’s assurances that The Atlantic story is a hoax involves disregarding the perspectives of many people with solid conservative credentials, the sorts of military and intelligence backgrounds that generally command respect in US policy circles. But a quick read of right-wing media – I strongly recommend following The Righting, which rounds up right-wing media for left-leaning audiences every day – offers a view from the other side of this particular fork, where the so-called scandal is a damp squib.

Jay Rosen suggests that there's a solidarity among those who encounter the Big Lie, or another moment where reality forks, and choose to switch the path they're on. I'd note that creating your own parallel reality can be more fun and rewarding than participating in one where your ability to have input is tied closely to expertise, social position and existing influence. In 2019, I wrote about QAnon as a radically participatory conspiracy, one in which you are encouraged to do your own research, create your own theories and participate in the construction of the new collective narrative. Solving the mysteries of the deep state is both deadly serious and a fun, rewarding community effort for those who participate. Mis/disinformation researcher Kate Starbird makes a similar point, comparing patterns of right-wing information disorder to improv theater and comedy.

The sense of agency and the need for your participation for this reality to thrive are powerful incentives for people who feel ignored and disregarded. And the far-right under Trump not only appreciates participation in its bespoke reality: it rewards its best performers with recognition and real power. Dan Bongino went from an undistinguished career as a Secret Service agent to a local radio talk show host to an internet provocateur to deputy director of the FBI. Many of the figures in the Trump administration can trace their elevation to power to their efficacy in creating aspects of a reality that avoids cognitive dissonance with Trump's belief that he was divinely chosen to disassemble the administrative state.

Not only does consensus reality's reliance on certain forms of authority (professional or academic expertise, access to certain positions of power) make it hard to participate in the process of shaping reality, but the journalistic process of discerning truth creates its own forms of cognitive dissonance.

Consider the lab leak theory, the idea that COVID-19 emerged from a lapse in lab safety rather than from animal-to-human transmission at a wet market in Wuhan, China. As Sheryl Gay Stolberg and Benjamin Mueller explain in the New York Times in 2023, "Some Republicans grew fixated on the idea of a lab leak after former President Donald J. Trump raised it in the early months of the pandemic despite scant evidence supporting it. That turned the theory toxic for many Democrats, who viewed it as an effort by Mr. Trump to distract from his administration's failings in containing the spread of the virus." The lab leak theory sometimes served as a jumping off point for theories that blamed China for engineering a virus to destroy the world economy. Seeking to blunt the political impact of those theories, a group of scientists wrote in the Lancet in early 2020 that lab leak theories were conspiracy theories intended to demean and target Chinese scientists who had worked alongside counterparts around the world to identify and combat the spread of the disease.

Over time, evidence has swung from the wet market theory to the lab leak theory, with a new analysis from the CIA now favoring lab leaks as an explanation of existing data. I admit feeling a sense of cognitive dissonance reading an article by my friend Zeynep Tufekci. Tufekci has applied her formidable skills developed analyzing social media and politics to understanding the science around COVID, writing "We Were Badly Misled About the Event That Changed Our Lives". She argues that the Lancet article I cited in the previous paragraph was drafted by a close collaborator of the Wuhan lab, trying to shield the lab from blame and hide his tracks. Not only did we get the narrative wrong, Zeynep argues, but the media and the public were systematically misled about risky research conducted within the Wuhan lab.

Reading Zeynep’s recent piece I discovered that, without really thinking about it, I still had the wet-market explanation in my head as the most likely explanation for COVID-19’s origins. It felt uncomfortable to realize that something I had considered to be true might be untrue? Less true? Likely untrue?

It's possible to look at this sort of cognitive dissonance – "People in authority told me the lab leak was a conspiracy theory, and now they say it's the most likely explanation" – and see it as a trigger for forking off a bespoke universe in which the media is unreliable, following political power rather than a public service mission of discovering underlying truth. How else could we get something as important as the origin of COVID-19 wrong for so long?

But the process of uncovering truth is often a messy and long one. Reporters do their best to triangulate between rival accounts of reality put forth by eyewitnesses, government figures, academics, researchers and other actors. Narratives change over time, sometimes smoothly, sometimes in an abrupt leap, like the paradigm shifts described by Thomas Kuhn.

Searching for "wuhan lab leak" on the New York Times, I found articles that gave the lab leak theory a hearing as early as May 2021, when David Leonhardt's newsletter noted that the hypothesis may have been prematurely dismissed: "It appears to be a classic example of groupthink, exacerbated by partisan polarization. Global health officials seemed unwilling to confront Chinese officials, who insist the virus jumped from an animal to a person." Another 2021 story interviews a Chinese scientist, but is skeptical about claims about the safety of her lab experiments in Wuhan. In early 2023, the Times reports that the US Energy Department has determined that a lab leak was the most likely explanation for the pandemic. In June 2024, the Times ran another Leonhardt newsletter looking at evidence for the lab leak theory versus the wet market theory, and an op-ed advocating the lab leak theory.

In other words, I perceived a sharp shift in narrative about COVID-19’s origin because I wasn’t watching the slow accumulation of evidence and the ongoing journalistic process of analysis and repositioning. My initial reaction might have been to mistrust media for getting a story badly wrong and then abruptly changing stories – looking at the accumulation of stories over the last few years, it looks like the Times, at least, has been closely tracking scientific and intelligence community assessments that one narrative is more likely than another.

Whether you read the meta-story of the lab leak as a condemnation of journalism's errors, or a confirmation of the slow process of revealing complex truths, probably has to do with its consonance or dissonance with other core narratives in our own bespoke realities. Like many people on the left, I have reasonably strong confidence in the New York Times and processes behind "mainstream" media, so a reading in which journalism slowly finds truths is more comfortable than one in which journalism follows government diktats – I might read this story differently if my worldview centered on the unreliability of mainstream journalism.

What’s useful about this particular conception of “post-post-truth”, where we:

– understand the term "post-truth" as dismissive and recognize that truth systems emerge from the acceptance of one set of authorities over another

– understand that countering misinfo for one individual is unlikely to change political dialogs, as we live less in individual bespoke realities and more in separate publics, with different presumptions and information sources

– see the role of cognitive dissonance pushing people with one set of incompatible beliefs to a different, less dissonant belief system

– accept that the process of determining truth through journalistic means is slow, imperfect and can lead to shifts in narrative

– understand that processes of determining truth can feel exclusionary and that other processes can feel welcoming and participatory?

My main hope is that this lets us see problems of forking realities as more understandable, if not necessarily more solvable. (My friend Erin Kissane has a wonderful quote from Ursula Franklin on her website: “Not all problems can be solved, but all problems can be illuminated.”)

When we find ourselves in disagreement with a friend over apparently different views of the world, it may be helpful to look at the sources or processes we each consider to be valid and see if we can find mutually acceptable paths towards agreeing on a truth. We may think through the beliefs that have forced a friend's reality to diverge from our own, and may be able to muster some compassion in understanding that those triggers are often responses to traumas. We can remind ourselves that journalistic methods towards truth lead to conclusions that can change over time as information emerges, and question our own certainty about truths we hold dear.

For me, the challenge that’s hardest is the one Farrell puts forward: thinking of the dynamics of truth through the lens of publics, rather than the lens of people. The Trump administration is aggressively promoting interpretations of events that force a rejection of mainstream journalistic approaches to truth and invite participants into co-construction of a collective narrative. Events like SignalGate might point to truths that are simply so apparent that they cause cognitive dissonance for people who’ve accepted other parts of the Trump narrative – they may be an exit ramp, rather than a fork. Or they might be a compelling invitation to construct a narrative consonant with these uncomfortable facts, a narrative that pulls two rival publics even farther apart.

I have been working through issues about mis/disinformation in preparation for a talk I am giving at Amherst Cinema, the introduction to a screening of "Don't Look Up" on April 1 – if you're local to the area, please come by and support independent cinema. Many thanks to Erin Kissane, Nate Kurz and Jean-Philippe Cointet, all of whom pointed me to resources and helped me think through these issues.

Header photo by Lola Audu, CC BY-2.0


March 13, 2025

Jay Rosen and Taylor Owen: Can journalism survive Trump? Can democracy?

Jay Rosen and Taylor Owen close out the first day of "Attention: Freedom, Interrupted", in a live taping of a podcast (Machines Like Us) for the Globe and Mail. The topic is the evolution and collapse of journalism in democratic societies… and it would be hard to imagine a better pair of conversants for this topic. Jay teaches at NYU and writes a legendary blog called Press Think, and Taylor is a wonderfully talented media scholar who leads one of the best labs studying online spaces.

Taylor’s intro talks about the collapsing trust in the media in Canada and the US, the closure of many local news outlets, and the shift of trust from journalists to influencers, like Joe Rogan. He asks, “How do we make our way out of this strange post-truth moment?”

Asked to describe the problem journalism is facing, Jay quips, "The problem is there's so many problems." We've lost enormous headcount within journalism, and as much as 77% of the spending in US newsrooms. Journalism is much weaker than it was, and the business model problems of journalism have not been solved. While there are exceptions like the New York Times, journalism has been shrinking for thirty years.

Another problem has to do with the authority of the press. The press remains attached to "things that are basically… over." When you're producing the world every 24 hours, it's very hard to step back and reset the company, Jay explains. Journalists based their work on a model of politics in which the US had two major parties, which competed every four years, did business in similar ways, and looked and acted the same way but believed different things. That allowed the press to stand apart from both and provide a balance between the two.

Now that the Republican Party has gone "off the rails in a direction we can call anti-democratic", it has created an asymmetry. The Democratic Party is still a recognizable political party, but the Republicans are not. The press still has not adjusted to this shift. Owen suggests we need to start further back than 2016 to examine the collapse of journalism and the transformation of American politics – Trump is the middle of the story, not the beginning.

Jay got interested in "the savvy style of political journalism" some years ago – it's a focus on who's ahead, who the winners and the losers are. Who are the spin doctors and what tactics have they come up with? This journalism appealed to many people in Washington DC, NY and in many state capitols… but it's enormously alienating to folks who are not interested in politics as a game.

This style is easy, Jay says, and portable – you can take it with you from one election to another. It creates relationships between reporters, spin doctors and campaign staff. Every four years, everyone goes and lives in Iowa together for the primary. The savvy style works for those interested in inside views of politics, but it tears viewers away from their fellow citizens, who have much weaker connections to political processes. The political world covered by the savvy style is so much smaller than what journalists should actually be covering.

Taylor asks Jay about another term he often uses: the view from nowhere. This is a way journalists advertise that they have no alliances, no ideology, and that they are “politically innocent”. You should believe us because we are nothing more than observers of the world. Jay thinks there’s something missing in this view: the ability to connect with the audience. Journalists generate authority with the view from nowhere, but they end up above and disconnected from the needs of their readers. We can understand the principles that led to the view from nowhere, but we can also see how it makes a form of journalism that’s hard for readers to relate to or believe in.

Owen notes that there’s now a feeling that journalism is too subjective. Is part of the solution to move back to the view from nowhere? Jay notes that if there are problems with objectivity, they’re not going to be solved by an excess of subjectivity. Instead, journalists should tell us where they’re coming from: that they have values, interests, histories, commitments, like everyone else. This is, in part, what influencers do – they tell us how they look at the world. Journalists still need to commit to high standards of verification, something that influencers don’t need to do. But verification does not mandate the view from nowhere – we don’t need to be robots or gods, we can be embodied individuals with histories, and with high standards for transparency and truth-telling.

How does savvy journalism lead to the Trump moment? Jay suggests we consider “verification in reverse”. Verification gives us confidence in a journalist. But when you take something that’s been verified and raise doubt about it, you create confusion, argument and attention. That release of energy can power political movements… and that’s how Trump powered his rise in politics, through doubt over Obama’s birth certificate. It didn’t matter to Trump that journalists had verified the certificate by going to the government office in Hawaii – reversing the process of verification generated energy, doubt and attention, the forces that gave his campaign momentum. Owen wonders, “Is this the first time we saw reality breaking” the way that Trump so regularly breaks it?

Rosen explains that journalists have no good answer for verification in reverse, without abandoning their commitment to verification. Journalists should have been able to admit to themselves and the public that something new, something we haven’t seen before, was unfolding with Trump. But journalists and the newspapers they were embedded within had strong incentives to hold onto the view from nowhere and the savvy style.

It took the US press four or five years, Jay says, to say that Trump was lying. It shouldn’t have taken that long. In 1976, Gerald Ford was running against Jimmy Carter and said something odd: he said that Eastern Bloc countries were not captives of the Soviet Union. People reacted with disbelief – are you denying Soviet influence? Ford tried to explain that he was making a point about the resilience of people in those countries, but the controversy haunted his campaign and may have contributed to his loss. When Trump lost the first time, the fact checker at the Washington Post said that Trump had made 30,000 lies and misstatements during his presidency. We’re dealing with a different animal here – Ford made a strange comment, and the whole press system jumped on him for clarification and comment; Trump overwhelmed the system to the point where the regular rules could no longer apply.

Why wasn’t the press able to face Trump for the 2024 campaign, Taylor asks Jay. Reporters needed to become pro-democracy, Jay tells us. But when he told this to newsrooms, reporters said, “You’re asking us to be pro-Biden.” Jay explains that what he meant was that we need some sort of new rule book for this new moment in politics. The danger, if we don’t, is that we lose something more than an audience or an industry – we might lose democracy itself.

The Trump re-election is a repudiation of journalism… but it’s also a broader repudiation of expertise: the intelligence community, universities, experts of all sorts. “Trump ran against the authority of knowledge itself.” What Americans call “the big lie” – that Trump won the 2020 election – has created a litmus test in which anyone who works with him has to accept this false parallel reality. Those who decide to work with him come up with reasons why they believe the election was stolen. Like in a criminal enterprise where loyalties are reinforced by shared criminality, people connect because they’re implicated in believing a shared lie.

Jay’s dissertation, years back, was about the “public” and the idea of journalism that informs the public. Owen wonders if we’ve lost a public. Jay says, rather, it’s impossible to believe that there is a public.

No one worries that affluent and powerful people won’t have journalism that informs and empowers them – lobbyists who make the healthcare industry run will pay $1000 a month for sophisticated news reporting. The question is whether everyone else is going to be able to have meaningful, verified information – and the current answer is “definitely not”. This takes us from a public to a mass, a de-evolution in who we think the press is serving.

The disparity between information spheres feels like a trap we can’t escape, Taylor observes. When we try to talk about these problems, Jay notes, we end up with terrible abstract terms like “post-truth” and “parallel realities”. These are hopelessly elite terms that are also devoid of hope – if we are post-truth, what are we doing talking about a future of journalism or of citizenship?

After Trump’s re-election, Jay decided to take three months off because he realized he didn’t understand how to write and talk about the current situation. This interview is his return to the public sphere, and he admits he still doesn’t know what to do. He’s dropped Twitter, leaving behind 315k followers. He’s now on BlueSky, and trying to figure out a new way of writing. “I don’t want to keep saying what I’ve been saying for forty years.” Blogging was a different way of writing, Jay observes, and social media was a new form of writing as well. We’ve had to learn how to write for new media multiple times.

Taylor notes that Canada is arguing about whether to continue having a public broadcaster. Jay notes that there’s significant research that shows that countries with a public broadcaster do much better at preserving democracy because there’s a common set of facts for citizens to rely on. “Anybody with a public broadcaster that big and influential should feel very fortunate… I think you’re lucky and you would be crazy to ditch it.”

“Institutions like a public broadcaster are large and hard to turn around,” Taylor notes. “And they’re harder to revive,” notes Jay. One of the problems the Democratic Party has in the US is the need to defend institutions, many of which are broken… but you’re not going to fix flawed institutions by burning them down. Trump’s MAGA movement is powered by burning down institutions… “once you realize that, it seems more realistic to have pro-democracy journalism.”

Journalism is a social practice – reporting the news – and Jay believes it will never die, because it’s necessary to modern civilization. But the media and the news business may or may not continue to exist… which is a problem because the media controls flows of attention. The practice of journalism is fundamentally human and will persist, but we need to discover, again and again, how to support it.

Taylor asks whether we should see journalism as a solution to the problem of failing democracy. Jay suggests that we need much, much more for modern democracies to work. Lippmann, in Public Opinion, notes that we can’t expect the press to tell us what’s going on if we don’t have a government that tells us what’s going on. Without basic information, it’s impossible to tell whether the government is doing what it should be. In response to Lippmann, governments created institutions that document labor statistics and other key measures of governmental success… and those statistics are now being destroyed by Trump, bringing us back to a world where it’s impossible to verify what the government is doing. “In his dinosaur brain, he knows that anything that connects him to accountability has to go.”


Published on March 13, 2025 15:24

Focus on the player, not the puck: Finnish approaches to combatting electoral interference

(Ongoing coverage of Attention: Freedom, Interrupted at McGill University in Montreal. Liveblog – I will get things wrong – feel free to correct me if I have misrepresented you.)

How does a nation become resilient to foreign election interference?

Atte Jääskeläinen, director of the Finnish foundation Sitra, argues that it’s part of Finland’s national character. Finland gets a lot of credit for educating students to be critical consumers of media. But he notes that it’s a longer story “starting in the 19th century when Finland had ideas of being independent. It’s always been about Russia”. Even though Finland had a defense agreement with the Soviet Union, “everyone knew why Finland had a large army… it wasn’t for war with Sweden.”

The story of the nation creates the ground for being resilient to international interference. Jääskeläinen worked with Jessikka Aro, a Finnish reporter who was active in exposing the Russian troll factory in St Petersburg in her book, Putin’s Trolls. The pressure she received from Russian actors was so severe that she eventually had to leave Finland for Switzerland. But she was ultimately able to document a pattern of disinformation that’s been influential around the world, not just across Finland’s 1300km border with Russia.

For Robert Fife, Ottawa bureau chief for the Globe and Mail, disinformation operations have forced reporters to look around the world and deeply into local communities. Disinfo often targets Chinese speakers, Punjabi speakers and other groups who often aren’t reading the Globe and Mail, but WeChat, Indian-language media and other sources of information that routinely distribute state-sponsored disinformation. The most powerful weapon to counter disinformation is transparency, Fife argues, but to be transparent and effective, Canadian reporters have to better represent what Canada looks like – a country that speaks not just English and French, but Chinese and Punjabi as well.

Mark Scott of the Digital Forensic Research Lab of the Atlantic Council suggests that we may overfocus on sub rosa actions by state disinformation providers. His research on whether AfD received undue amplification by X during the recent elections suggested that the algorithm did not overweight AfD leaders (other research reaches different conclusions), but that it did give Musk enormous reach. That isn’t clandestine election influence – it’s the classic example of a media owner putting his finger on the scale. Phenomena like the Trucker Convoy, a 2022 protest movement in Canada over vaccine mandates for cross-border drivers, leveraged social media groups that emerged organically out of two years of lockdown protests – they were not created by troll factories; they were real, but invisible to journalists, Scott argues.

Jääskeläinen reminds the audience that the Nazi regime not only figured out how to ensure that Hitler’s speeches were always reproduced on radio, but circulated inexpensive radios – one third the cost of normal radios – that could only receive Nazi stations. Authoritarians have always been anxious to control the media, and open societies have worked to ensure the tremendous power of broadcast media is responsibly wielded. It’s clear that this responsibility hasn’t translated into online spaces.

Scott suggests that foreign interference is not taken seriously by media, which overfocuses on shiny things like bot farms and troll factories, but doesn’t pay attention to the networks that develop among AM radio, traditional media, and domestic and international extremism in the years between elections. Foreign interference is one part of an immense and complex ecosystem, and it’s challenging to understand and to report.

Jääskeläinen offers a hockey metaphor for understanding misinformation: the puck doesn’t score – it’s the player. So focus on the player, not the puck. We see huge volumes of disinformation, but often it’s being shared by a small set of actors, and understanding them and their motivations may be the effective way of understanding what unfolds in a complex ecosystem.


Published on March 13, 2025 13:41

Taylor Owen: Canadians now see the US as the most serious disinfo threat

Taylor Owen, one of our hosts at “Attention: Freedom, Interrupted”, offers a brief and powerful talk on the information ecosystem leading up to the 45th Canadian federal election. He references a commission led by Justice Hogue which investigated foreign interference – particularly from China, India and Russia – in Canadian elections. Justice Hogue came to the conclusion that the larger threat to Canadian electoral integrity was not foreign interference but disinformation.

Owen and his team studied the Canadian elections in 2019 closely, fearing that Canadian elections would be much like the disinfo-plagued US elections of 2016. They weren’t – Canada had less polarization, higher levels of trust in journalism and higher consumption of journalism.

But it’s all gotten worse since 2019, Owen warns. In 2019, broadcast media like CBC had a moderating effect on misinformation shared online. But that moderating effect requires high consumption of journalism and high trust in that media, and both those are falling.

Additionally, Facebook and Instagram both turned off news for Canadian users two years ago. Links to news organizations have decreased by 11 million a day, about half the traffic to those organizations. A quarter of local Canadian news outlets no longer share content on social media at all. Most disturbingly, most Canadians haven’t noticed – they still tell pollsters they get their news from Facebook and Instagram.

Owen warns that Silicon Valley companies have changed their posture. It’s not just performative alignment with Trump: major platforms are ending the ten-year era of “trust and safety”, turning moderation over to crowdsourcing. These platforms are moving from minimal transparency to complete opacity. The US government, as well as US platforms, is participating in the persecution of disinformation researchers. And we’re no longer worried about ideological segregation within platforms so much as we are worried about platforms becoming tightly aligned with political points of view.

Conservatives in Canada are seeing increased political engagement on X, while Liberals and other parties see their engagement flat or shrinking. Owen tells us this might reflect differing willingness to engage on the platforms, or an algorithmic boost. But there are reasons to worry that automation is playing a role. Owen shares the “Kirkland bots”, an apparent manipulation campaign in which thousands of accounts wrote positively about a Conservative political rally in the small Ontario town of Kirkland. Liberals accused Conservatives of running a bot campaign; Conservatives accused Liberals of running a false flag campaign.

Owen’s lab thinks that this was likely the action of a single person, trying out a new botnet. He points out that bots can be bought for as little as $0.20 and can be linked to powerful AIs to run disinformation campaigns at scale.

Influencers are worth significant attention as well. These individuals are accountable only to their audiences and their funders, and their reach can exceed that of other actors. Regulations to oversee digital advertising are being subverted when foreign governments can pay influencers in an entirely opaque way. Owen references money given to Tim Pool, who advanced the narrative that Canada was a failed state and needed to be taken over by the US.

Owen concludes with a dark warning: Canadians now see the US emerging as a more serious disinformation threat than other nations. We’ve seen the US forwarding Russian disinfo in the UK and Germany, and we know that the US and Canada have intertwined media environments. During the COVID pandemic, 50% of COVID disinfo in Canada came from the US. Canadians are now more concerned about covert influence from the US than from other countries.


Published on March 13, 2025 09:00
