
Language Machines: Cultural AI and the End of Remainder Humanism

How generative AI systems capture a core function of language

Looking at the emergence of generative AI, Language Machines presents a new theory of meaning in language and computation, arguing that humanistic scholarship misconstrues how large language models (LLMs) function. Seeing LLMs as a convergence of computation and language, Leif Weatherby contends that AI does not simulate cognition, as widely believed, but rather creates culture. This evolution in language, he finds, is one that we are ill-prepared to evaluate, as what he terms “remainder humanism” counterproductively divides the human from the machine without drawing on established theories of representation that include both.

 

To determine the consequences of using AI for language generation, Weatherby reads linguistic theory in conjunction with the algorithmic architecture of LLMs. He finds that generative AI captures the ways in which language is at first complex, cultural, and poetic, and only later referential, functional, and cognitive. This process is the semiotic hinge on which an emergent AI culture depends. Weatherby calls for a “general poetics” of computational cultural forms under the formal conditions of the algorithmic reproducibility of language.

 

Locating the output of LLMs on a spectrum from poetry to ideology, Language Machines concludes that literary theory must be the backbone of a new rhetorical training for our linguistic-computational culture.

281 pages, Kindle Edition

Published June 24, 2025



Community Reviews

5 stars
4 (40%)
4 stars
4 (40%)
3 stars
1 (10%)
2 stars
1 (10%)
1 star
0 (0%)
Displaying 1 - 2 of 2 reviews
Daniel
365 reviews, 28 followers
October 16, 2025
As you delve into this review, remember to take it with a grain of salt—much of the book went over my head.

Sometimes, concepts that seem essentially linked turn out to be together for merely historical or technological reasons, and can be separated, causing a certain amount of cognitive dissonance. This book argues that this is what has happened with the production of meaning and human cognition.

LLMs might not be intelligent in a human sense, but they can be used to probe the existing landscape of meaning, and to generate new meaning by moving across paths in that landscape. Denying their meaning-producing capabilities, and defending those capabilities as the sole province of human cognition, is an ultimately doomed rearguard action.

Viewing language primarily as a way to refer to external things and facts is too reductive. Instead, the essence of language is poetic: new meanings arise by juxtaposition and other non-referential ways of equating terms. Reference is an (admittedly useful) function built on top of that. Yeah, hallucinations are annoying all right, but they don't dent the power of LLMs as poetic machines.

Poetic machines, and ideology machines. Ideology in a wide sense, encompassing the worldview of the culture whose texts the LLM was trained on. Much has been written about how LLMs are poisoned by existing biases in society, but the flipside of that is that they are great computational tools to explore those biases.

The final part of the book is a grim prophecy. The deskilling of intellectual work that seemed unassailably "creative" is upon us. And while for now we can still see the awkward seams where language machines are being grafted onto culture, those seams will likely become much harder to perceive in the future. So better pay attention now.

Links:

We Need To Talk About Sloppers

Jeffrey
290 reviews, 58 followers
October 29, 2025
Weatherby’s Remainder Humanism introduces an ambitious framework for understanding how AI and digital culture reshape what it means to be human. His central claim, as I read him (that modern critiques of AI remain trapped within a residual humanism that privileges cognition over culture), could have real force.

Yet the construction of this argument depends on a fundamental misreading of the very thinkers he marshals as examples, particularly Timnit Gebru and Emily Bender. As he puts it, “Bender has given the stochastic parrot framework a theory of meaning by subtraction […] The very ‘critique’ here cedes enormous power to these systems, undermining the explicit denial of meaning” (32–33). This rhetorical move reframes their ontological claim, that meaning requires embodied, temporally situated interpretation, into a semantic paradox of his own making, allowing him to accuse them of inconsistency while sidestepping the philosophical depth of their position.

I find it difficult to understand why Dr. Weatherby would make such a backhanded move, except that his broader argument depends on creating a vector through which he can reassert a claim already made, or, more precisely, on positioning himself as transcending a debate he has first misrepresented.

Perhaps I’m being uncharitable, but the reframing of an ontological critique as a semantic one is the kind of category error one might expect from a first-year graduate student, not a seasoned theorist. In recasting Bender and Gebru’s ontological account of meaning as a naïve defense of human exceptionalism, Weatherby replaces a question about the grounds of meaning with one about its circulation, shifting the discussion to a semiotic register, as if all they were asserting is that only humans can use or produce language, thereby reinscribing the very anthropocentrism he claims to dismantle!

But that is not their critique at all.

When Weatherby writes, “The ‘stochastic parrot’ critique backhandedly confers enormous power on LLMs, theoretically depriving them of language but ceding almost mystical power to produce meaning—bad meaning, but meaning nonetheless,” he elides a crucial element of Bender and Gebru’s position: the role of the ontologically tethered human who provides the context in which meaning arises.

These so-called “stochastic parrots” do not generate meaning autonomously; meaning only emerges when a human interpreter, situated in time, culture, and embodiment, engages the text and reconstitutes its significance. In Peircean terms, the interpretant is not the machine but the human participant who grounds the sign within lived experience. Weatherby’s omission of this tether allows him to transform an ontological critique into a semantic paradox, creating the illusion of contradiction where none exists.
