The AI Slop Problem
Mark Zuckerberg said a few months ago that AI is ushering in a third phase of social media. First social media was used to connect with family and friends, then it became a platform for content creators, and now creativity is being further unleashed with new AI-powered tools. That’s a pretty rosy view, and unsurprising coming from the creator of Facebook. Many people, however, are becoming increasingly concerned about the net effect of AI-generated content, especially low-grade content (now colloquially referred to as AI slop).
One thing is clear – AI-generated content, because it is so easy and fast to produce, is increasingly flooding social media. AI’s influence takes two basic forms: AI-generated content, and recommendations driven by AI-powered algorithms. So an AI might be telling you to watch an AI-generated video. Recent studies show that about 70% of images on Facebook are now AI-generated, and about 80% of recommendations are driven by AI algorithms. This is a fast-moving target, but across social media as a whole, AI-generated content is estimated at somewhere between 20% and 40%. It is not evenly distributed, with some sites being overwhelmed. The arts and crafts site Etsy has been overrun by AI slop, causing some users to abandon the platform.
We are already seeing a backlash and crackdown, but this is sporadic and of questionable effectiveness. Etsy, for example, has tried to limit AI slop on its site, but with limited success. So where is all this headed?
We need to consider the different types of content separately. Much AI slop is obviously fake and for entertainment purposes only. It may be cartoony or obviously humorous, with no intent to pass as real or to deceive. Some content is meant to entertain (i.e., drive clicks and engagement) but is not obviously fake. Part of the appeal, in fact, may be the question of whether or not the content is real. Other content is meant to deceive – to influence public opinion or the behavior of the content consumer. This last type of content is obviously the most concerning.
There are also different types of concerns or potential negative outcomes. One of the biggest concerns is that AI-generated content can be used to spread misinformation. This has both direct and indirect negative effects – it can spread false information and influence public opinion, but it also degrades trust in accurate information and responsible sources. So true information can be dismissed as possibly fake. The combined effect is that we no longer know what is true and what is not. Without any way to objectively referee which facts are reliable and which are likely fake (and yes, it’s a continuum, not a dichotomy), people will tend to hunker down with their social tribe. Each group then has its own reality, with no shared reality to bridge the gap.
There is also the Etsy problem – low-quality content crowding out anything of value, with consumers buried in slop. I use Etsy, and so have encountered this myself. It takes a lot of cognitive work to separate real work, especially art, from the flood of AI content. Highly cognitively demanding work is unsustainable – most people will not do it for long and will look for a less work-intensive path. This may mean abandoning a platform, throwing up their hands and saying it’s hopeless to tell the difference, or just giving in and not worrying whether something is AI or not. This is a problem for non-AI content creators, and also a problem across the board. Mental AI fatigue will affect everything, not just low-grade AI artwork. Etsy fatigue can also sap the mental energy we have left for evaluating political AI content (studies do show that mental energy is fungible in this way).
There is also a middle ground – neither low-grade AI slop nor deliberate deception, but AI used as a legitimate tool to create high-quality art or other content. This is the use I think can be valuable, making content creation better or more efficient. The problem with this content is not really for the end user but rather the issues of ownership and the displacement of human artists. For me, this is where the real dilemma lies. I would love for the big video game companies to be able to double their output because of efficiencies gained through AI, and I also want to see how the latest AI can enhance certain game features (like interacting with AI-driven characters, or open-ended generative content). But these advances are being held back by the other concerns with AI, many of which are legitimate.
There are several approaches to the issue that I can see. One is to simply let the free market sort it all out. Users are already pushing back against AI slop, and companies are responding. We will see how well they can manage the issue, but if the last few decades are any guide, I don’t have a lot of hope that big tech companies will put what’s best for the end user ahead of their own bottom line. Likely some individual platforms will push back heavily against AI, perhaps even creating AI-free social media platforms or websites.
A second approach is to craft thoughtful legislation to try to wrangle this beast. The most important fix would simply be transparency – if AI-generated content had to be labeled as such, with heavy penalties for passing off AI content as real, that could help significantly. I would also like to see a conversation about how algorithms recommend content. It may also be feasible to make the use of AI-generated fakes for political persuasion illegal.
Both of these approaches, however, depend on a third – developing the technology to detect, label, and filter AI-generated content. A truly effective app to do this could be massively useful, and I think highly popular.
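To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical inputs) of the label-and-filter logic such an app would need: trust declared provenance metadata first (standards like C2PA content credentials exist for exactly this), and fall back to a detector score when nothing is declared. The Post fields, the detector score, and the threshold are all assumptions for illustration, not a description of any real app or detection method.

from dataclasses import dataclass

@dataclass
class Post:
    # A piece of content plus whatever signals we have about it.
    url: str
    declared_ai: bool      # provenance tag from the platform or creator (e.g., a C2PA credential)
    detector_score: float  # 0.0-1.0 from some AI-content detector (a stand-in here)

def label(post: Post, threshold: float = 0.9) -> str:
    # Trust an explicit declaration first; the detector is only a fallback,
    # since detectors are unreliable and provenance tags can be stripped.
    if post.declared_ai:
        return "AI-generated (declared)"
    if post.detector_score >= threshold:
        return "likely AI-generated"
    return "no AI signal"

def filter_feed(feed: list[Post], hide_ai: bool = True) -> list[Post]:
    # Optionally hide anything labeled as AI, leaving the rest of the feed intact.
    return [p for p in feed if not (hide_ai and label(p) != "no AI signal")]

feed = [
    Post("https://example.com/a", declared_ai=True, detector_score=0.1),
    Post("https://example.com/b", declared_ai=False, detector_score=0.97),
    Post("https://example.com/c", declared_ai=False, detector_score=0.05),
]
for post in feed:
    print(post.url, "->", label(post))
print("filtered feed:", [p.url for p in filter_feed(feed)])

The hard part, of course, is the detector itself. Current AI-content detectors have real error rates, which is why in this sketch a requirement to declare (the transparency fix discussed above) does most of the work.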
My biggest concern is that governments will use AI to enhance their ability to control their populations. This is part of the “information autocracy” problem. If you control what information your population sees, you can control what they think, and you can control what they do. This is already a problem, but AI-generated content and AI-driven algorithms can make it orders of magnitude more effective. Even without authoritarian governments, large corporations can use the same technology to influence their consumers. Or they can use it to promote their political views. A populace, both entertained and overwhelmed by AI slop, would be especially compliant.