A veteran writing teacher makes a “moving” (Rick Wormeli) argument that writing is a form of thinking and feeling and shows why it can’t be replaced by AI
In the age of artificial intelligence, drafting an essay is as simple as typing a prompt and pressing enter. What does this mean for the art of writing? According to longtime writing teacher John Warner, not very much.
More Than Words argues that generative AI programs like ChatGPT not only can kill the student essay but should, since these assignments don’t challenge students to do the real work of writing. To Warner, writing is thinking—discovering your ideas while trying to capture them on a page—and feeling—grappling with what it fundamentally means to be human. The fact that we ask students to complete so many assignments that a machine could do is a sign that something has gone very wrong with writing instruction. More Than Words calls for us to use AI as an opportunity to reckon with how we work with words—and how all of us should rethink our relationship with writing.
A great primer on what in hell "AI" is and, most importantly, isn't. With decades of experience as a college writing instructor, Warner's been there and back -- and now THIS. Any teacher of writing at any grade level can relate, as generative AI has become a major problem. Yet another mess, as if education wasn't dealing with enough brushfires.
First off, Warner sides with those who dislike the term "artificial intelligence" because, well, there's nothing intelligent about it. Warner and others prefer to call it what it is: automation. That's all, really. It spits out text based on data that's pumped in, and while Warner tries hard to find uses for it in education and even plays with the automation in many ways himself, he's hard pressed to give it much beyond minor tasks to make our lives easier.
Why? Because writing is thinking, and AI doesn't think. Because writing is feeling, and AI doesn't feel emotions and cannot reflect. Because writing is quirky at times and provides a snapshot of who we are through voice, and if there's one thing a machine based on algorithms cannot do, it's provide much in the way of voice.
It stands to reason, then, that if writers use AI to generate writing, they are denying themselves the process of writing, thinking, feeling. You know, all the stuff that makes us HUMAN. And if they sneak it by their instructors, they are winning battles in a war they are sure to lose because, sooner or later, what they never learned by doing will be exposed.
I like telling students that using generative AI to write papers for you is like watching workout videos on YouTube to get in shape. Good luck with that!
Diving even deeper, Warner warns us that it's the Musk-type 1%-ers who stand to make billions on this. The Zuckerberg types who ditch fact checks and create algorithms that reward extreme voices on both sides of the political divide (and the country be damned!) on Facebook or Meta or whatever it's called now. What's more, building and supporting bigger and bigger AI facilities in rural areas (they need space) is having a huge effect on the environment (if anyone cares), especially the water supply (and good luck to farmers in the same county).
Bleak, but revealing. Warner ends with advice under the categories of "Resist," "Renew," and "Explore." If everyone takes this as a fait accompli and sits on their hands, it will only grow into a problem too big to grapple with. Sound familiar? The same problem seems to exist politically in the sharply divided States of America (in fairness, I removed the "United"). It's act, or bear the consequences of doing nothing while everything goes to hell in a generative handbasket.
"Rather than seeing ChatGPT as a threat that will destroy things of value, we should be viewing it as an opportunity to reconsider exactly what we value and why we value those things."
More Than Words goes beyond the instrument of AI. It explores what makes human writing human and what problems AI-generated content has. It is critical yet not pessimistic. It focuses on writing as a thinking process, and on how valuable that process is.
As a teacher, this book helped me keep focus on my primary task: making students more critical, more knowledgeable, and more independent. I highly recommend it to teachers in any field, and to anyone else who wants a better understanding of the value of reading and writing unaided by AI.
An extremely persuasive critique of the use of generative AI to supplant the writing process, Warner's book details why we write and why writing is thinking, feeling, and living. LLMs can do none of these things; they simply process and output syntax. Writing at its highest levels will not be replaced by AI (and it may even benefit from it), but AI's negative impact on writing pedagogy has been immediate. Students and teachers alike should know when to reject LLM usage, when to renew more human forms of learning and teaching how to write, and when generative AI may be something worth exploring. Warner—a prolific editor and teacher of writing—makes confident arguments for all of these. This is a must-read for any writer or teacher.
I agreed with a lot of the content of this book, but I had hoped it would offer more specific and structured pedagogical advice. It is more of a general manifesto on AI than a how-to guide, with a sprawling range of topics: the ethics of the tech industry, the corporate interests that underpin these innovations, the environmental impacts of AI, the history of technology in education, the proliferation of AI forgeries, the manipulation of search engines with AI, the narrow market for struggling authors, and Warner's own theories of writing pedagogy. The book offers a number of useful admonitions on AI: it is not, first of all, a genuine form of "intelligence" but rather "syntactic automation," generating words and stringing together plausible sentences into a probabilistic response; it doesn't simply "hallucinate," as we often hear—in fact, all AI output is a hallucination, a mindless assemblage of words according to stochastic calculations of tokens rather than through any deep reasoning or reflective process; whether its claims happen to be true or false, it is a "bullshitter," unable to verify or validate itself; however intelligent its words seem, it is always us readers who make sense of its automated language and project intelligence onto it. Therefore, Warner argues, if AI is able to do a task, then maybe that is not a task that humans ought to be doing to begin with. This is the philosophy that animates his whole view of AI and teaching: new developments in AI present us with an opportunity to reassess what writing really is and what thinking is truly valuable. In this, John Warner returns to his earlier writing instruction books, arguing that writing is not strictly about grammatical fluency or formulaic structure but, most fundamentally, writing is thinking. It requires deep thought, experience and expertise, judgement and reflection. Writing is more than the template five-paragraph essay.
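To make the phrase "stochastic calculations of tokens" a little more concrete, here is a minimal, purely hypothetical sketch of next-token sampling. The vocabulary and probabilities below are invented for illustration and have nothing to do with how any real model is built, but the basic move (weighted dice rolls over whatever next word looks statistically plausible, with no reasoning or verification behind it) is the one Warner calls syntactic automation.

```python
import random

# A toy "language model": for each context word, a probability distribution
# over possible next tokens. Real LLMs learn billions of such statistics from
# training data; this table is invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "writing": {"is": 0.6, "matters": 0.3, "hurts": 0.1},
    "is": {"thinking": 0.5, "feeling": 0.3, "automation": 0.2},
    "thinking": {".": 0.7, "and": 0.3},
    "feeling": {".": 0.8, "too": 0.2},
    "and": {"feeling": 0.9, "thinking": 0.1},
    "automation": {".": 1.0},
    "too": {".": 1.0},
}

def generate(start: str, max_tokens: int = 10) -> str:
    """String tokens together by sampling from the distributions above.
    There is no meaning, intention, or fact-checking involved: just
    weighted random choices over statistically plausible continuations."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tokens[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights, k=1)[0]
        tokens.append(nxt)
        if nxt == ".":
            break
    return " ".join(tokens)

if __name__ == "__main__":
    print(generate("writing"))  # e.g. "writing is thinking and feeling ."
```

Run it a few times and you get different, equally plausible, equally thoughtless sentences; that is roughly the point about every output being a "hallucination" rather than an occasional glitch.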
Here and there I found some interesting and illustrative exercises. Discussing the outrage and fallout that ensued after Vanderbilt University issued an AI-generated condolence letter in the wake of a campus shooting, Warner picks apart AI's thoughtless boilerplate language. Asking ChatGPT to produce a condolence letter, he notes its platitudes about "incredible losses" (which are quite credible nowadays), and the "unimaginable grief and pain" (which, for the affected community, doesn't need to be imagined at all). I myself was startled by the absurdly, callously sing-song writing (the alliterative "heavy hearts" and the "terrible tragedy" that has "befallen our beloved campus"; the excessively ornamental tricolons in "we encourage you to lean on one another for strength, comfort, and solace" and "we understand the collective shock, sorrow, and anger"). The flourishes feel out of place for such a sombre situation. The letter is shockingly vacuous ("the loss is an irreplaceable voice in our academic family"—how can a void be replaceable or irreplaceable?). Warner shows the unfeeling way AI generates rhetorically artificial prose and simulates a banal form of hollow commiseration. The loss of life is a pathos beyond words, and it is a revelatory experiment to see how AI manages this particular kind of inexpressible horror—one that cows politicians into meager words of "thoughts and prayers". AI just produces rinky-dink phrases and school-boy oratory.
Another part I found interesting was his "right word, almost-right word" exercise: essentially, the teacher chooses a passage from a text (in his case, a paragraph from an essay by David Foster Wallace) and invites students to think about how a sentence would sound if a single word were replaced (imagine, for example, if he wrote the more humdrum "smell of skin" instead of the more grotesque "smell of flesh"). Asking ChatGPT to imitate the passage, Warner effectively shows the difference between virtuosic writing and soulless pastiche. David Foster Wallace's piece is evocative, even disgusting in parts, a text designed to elicit visceral reactions; the AI imitation is just an artless mimicry that leaves no impact, with similar syntax but without the strikingly idiosyncratic word choices. I learned a lot from this moment, and I would have liked to see Warner discuss more of his own classroom exercises and talk through the demonstrable weaknesses of AI.
When I first read the title, I had thought it would offer more writing assignments and dissect more shortcomings of AI prose (and students' perceptions of AI prose). Instead, Warner zooms out and advocates for a big-picture ethical posture to AI—to resist utopian thinking and tech determinism; to renew our sense of humanity and the value of our own thoughts, experiences and creativity; and, finally, to explore AI, researching it, seeking out different opinions, not for the sake of personal efficiency but as an act of public service, trying to understand a consequential technology and its impacts on society.
I would recommend this book to anyone who writes for fun or for work, or who is in academia. Or who has had existential dread about AI. I usually shy away from tech books since they become dated so quickly, but Warner does a good job addressing this qualm. His argument that ChatGPT cannot read or write (it merely processes text and fetches words, respectively) hit the mark for me, as did the idea that writing and reading are uniquely human activities (connected to thoughts, emotions, experiences) that AI just cannot replicate. He is not doomsday about AI; rather, he sees it as an opportunity for a reckoning about what makes us human and what types of writing are actually worth doing.
Not tips and tricks to use in class (though I dug out some of that, too), but rather a way to think and talk about AI and when to use it: AFTER the thinking and drafting process.
4.5: I wish this book could be required reading for all the administrators and faculty blindly falling for AI without stopping to consider the wider ramifications of widespread adoption in classrooms because “it’s here and it’s the future,” etc. Writing is thinking is being human, and what’s the point of a college education if we outsource human tasks to machines? Also love the idea of replacing the term artificial intelligence with “automation,” a more accurate descriptor of what ChatGPT is and does.
a true “preaching to the choir” read, but I appreciate Warner’s ability to give specific definitions to a general feeling of “AI bad” that I’ve been feeling the last few years as a teacher (and human).
I specifically think the point of centering the individual rather than “the average” is hugely important. a human’s job is not efficiency and optimization; our job is to fart around and find our community, to trust that anything worth doing is worth the time it takes to do.
“My students have been incentivized to not write but instead to produce writing-related simulations, formulaic responses for the purpose of passing standardized assessments. This happens not because teachers are bad or students lack the ability but because these simulations have been privileged in a system where ‘schooling’ is divorced from ‘learning.’” In today’s world, most American public school students have been denied the process of writing: the creativity, play, and wonder. In this book, Warner shows how A.I. threatens to destroy this, but also that there is a way to make sure that doesn’t happen. He states that “writing is an embodied act of thinking and feeling,” which A.I. cannot do. So, actually, ChatGPT and other chatbots are “bullshitters.” Writing is thinking, and A.I. cannot review, remember, or think at all. Warner reminds us that, if ChatGPT and other chatbots like it can generate the work students are asked to create, we teachers need to rethink how we teach writing. It should involve practice, yes, but also imagination, play, and fun.
So much to love about this book! As a teacher of writing, I found myself applauding so much of what Warner shares. If we teachers think about the way writing should be taught, A.I. becomes a little gnat. A slightly annoying issue that can easily be flicked away. A necessary read for ALL teachers of writing.
Excellent read for anyone who writes regularly or teaches writing. Warner’s theory prompts a close examination and potential overhaul of how we’ve been teaching and assessing writing: if ChatGPT can do it, it’s probably not worth doing.
Anyone who writes understands that the finished product defines a process that is unique, messy, individual, gritty, overwhelming…and ChatGPT’s ability to spit out a finished piece or perfect syntax is not writing. It is processing.
4/5 because some of the insight about algorithms seemed a little off track and off focus. Ultimately a great read that I’d recommend!
This tackles the issue of AI from a number of directions. As an instructor, I read it wanting to learn how I might prevent students from using ChatGPT to complete essays (or even how to convince them that using it for that purpose is a detriment to their own cognition). Ultimately there is no clear answer, but it is clear that instructors have to move past urging students to complete formulaic essays that ChatGPT can churn out in three seconds. I would have liked more suggestions but it is an excellent jumping off point.
I needed this book. After years now of think pieces and trainings and webinars and discussions about the inevitability of AI and what it means for reading and writing that left me questioning my field, its work, its future, Warner’s book offered me something I have been looking for. Yes, it helps that he and I are, if not on the same page, at least in the same page range on these topics, but even without that, the questions he raises are some that I think have gotten lost in all of the noise of the bigger public conversations about generative AI.
Preaching to the choir on this one. I’ve been saying that if your school courses can be undermined by AI then there is something wrong with your courses, which Warner agrees with and suggests is the result of years of standardization being the only metric used to judge students. Unfortunately, AI is MUCH better at standardization than humans. ‘My students had been incentivized not to write but instead to produce writing-related simulations, formulaic responses for the purpose of passing standardized assessments. This happens not because teachers are bad or students lack ability but because these simulations have been privileged in a system where “schooling” is divorced from “learning.”’
If you follow the ever-present news coverage of AI's march to dominance, nothing here might strike you as totally new or groundbreaking. But as a comprehensive examination through the lens of what it means to read, to write, and to think, it does provide a useful rubric for judging how, when and why we should (and mostly shouldn't) use AI to replace our own intellectual lives and learning practice.
Excellent book on AI and the truths we aren't told. I enjoyed the breakdown into four parts and felt Warner did a great job keeping it timely and precise.
The amount of water and electricity consumed is disheartening, to say the least. I hope we can share this knowledge within our English classes as well as with the general population -- to write is to think. AI should never replace this. We must remain diligent and fight to maintain our human rights in the classroom.
Witty and insightful, this book is one I absolutely loved. It helped crystallize some of the more nebulous ideas of my dissertation. I also laughed out loud several times, because Warner very much shares my sense of humor. Ordered more of his books and subscribed to his Substack!
This is an interesting book about what writing actually is, and why it can't be replaced by AI. The core argument is that writing is an embodied activity in which you express and understand your own thoughts and feelings as you do it: the text is just the medium. Therefore, text generated by ChatGPT cannot qualify as writing, since AI cannot have thoughts and feelings. Arranging text != writing, which is a process, not an output.
He poses some interesting questions about writing pedagogy, particularly around high school essays. If AI can generate a passable high school essay, is the assignment really worth doing? Does this process actually help students engage in embodied writing that brings out thoughts and feelings? Warner answers these questions with strong "nos", and I agree. The last few sections were about how to think critically about integrating AI into our lives, but I felt they were more gestures than clear claims and recommendations.
I really liked the first half of the book. It’s a thoughtful and clever explanation of AI from the perspective of a former writing instructor. I thought some points were oversimplified, but it’s written for a general audience. I felt, though, that the second half of the book wandered some and then ended with an apologetic whimper, wishing for a world without AI. The final framework—resist, renew, explore—had potential, but its treatment felt minimal and unenthusiastic. Still, I found this a quick and interesting read. The funny stories and charm alone make it accessible and memorable.
In this era of AI hype or panic, it was nice to read a take that was skeptical, but felt balanced. I’m fairly familiar with what’s going on in the world of AI, so I didn’t learn a ton about it from this book, but I enjoyed reading Warner’s thoughts about what makes writing valuable and deeply human. He put into words a lot of the concerns I have about generative AI as someone who has spent my whole career working with words. He also made a great case for the value of humans continuing to write, even if it’s a struggle. I’ve worked in the world of online content for a while and have lost the joy of writing. Reading this reminded me of why I used to love to write and made me want to carve out some time to do more creative writing.
I thought I was going to be reading a book about how to think about writing in the Age of AI, and I am a teacher and a fan of Ethan Mollick’s work. This book is mostly why-AI-is-bad.
Today’s focus: More Than Words: How to Think About Writing in the Age of AI by John Warner. I finished the book at least a week ago yet have not written anything about it.
John Warner is a fellow composition instructor at the College of Charleston in SC. He has published a handful of books (nonfiction and fiction) and writes a weekly book review column for The Chicago Tribune as “the Biblioracle.” All of this comes from the back jacket of his book, More Than Words: How to Think About Writing in the Age of AI.
It is not generally my modus operandi to provide long-winded block quotes from authors, but I’ll make an exception here so that readers know precisely what to expect before purchasing and reading More Than Words. Per the final two pages of his introduction, here is what Warner wishes to accomplish in his book, chapter by chapter:
“What ChatGPT and other large language models are doing is not writing and shouldn’t be considered as such. Writing is thinking. Writing involves both the expression and exploration of an idea, meaning that even as we’re trying to capture the idea on the page, the idea may change based on our attempts to capture it. Removing thinking from writing renders an act not writing. Writing is also feeling, a way for us to be invested and involved not only in our own lives but the lives of others and the world around us. Reading and writing are inextricable, and outsourcing our reading to AI is essentially a choice to give up on being human. If ChatGPT can produce an acceptable example of something, that thing is not worth doing by humans and quite probably isn’t worth doing at all. Deep down, I believe that ChatGPT by itself cannot kill anything worth preserving. My concern is that out of convenience, or expedience, or through carelessness, we may allow these meaningful things to be lost or reduced to the province of a select few rather than being accessible to all. What I’d like to do for the remainder of our time together [this book] is use the capabilities of ChatGPT (and its ilk) as a lens to examine how we work with words as a way to uncover those things that can and must be preserved” (11-12).
I am finding it difficult to begin my review of this book for a variety of reasons, not least of which is that I consciously recognize Warner’s perspective aligns with my own, and so in reading this book, and admitting publicly to reading it, I fear I have stepped into an echo chamber and am no longer entertaining a variety of stances on this topic. And truly, for every chapter of this book there was at least one moment when I laid the book on my lap, threw a fist in the air, and shouted, “Hell yeah! Preserve our humanity!” But perhaps awareness is enough. Perhaps, based on this awareness, I may yet convince myself to pick up some book that extolls the advantages of AI. Although…I doubt it. My heart rebels because I am angry, it seems.
Despite my bias, there were moments in the book where the topic at hand felt forced. Warner spends a chapter talking about the profitability and business of writing, writing as a profession/livelihood, and the world of publication, including the myriad avenues to publication. There is something in this chapter that grated on my sensibilities because it felt deeply self-serving. Lofty. Egoic. Maybe whiny? This felt tangential to LLMs, yet I was magnetically drawn to the chapter! It’s almost as though a writer will use any excuse (in this case, ChatGPT and LLMs) to opine over the difficulty of making a living from writing. And my frustration with this is that I agree. I want to whine too. I want something to blame.
Thus, I am conflicted. I don’t want to like the book too much, yet I’m frustrated that there are parts I disliked. Make this make sense.
The truth is Warner puts onto the page in very human, often clunky and flawed language, many of my own thoughts on this topic. I have a friend, for example…let’s call him Jim. Actually, his name is Jim. Jim’s work is exceptionally technical and involves, like, math or something. Jim uses LLMs and AI to develop algorithms for efficiency in transporting oil to gas stations nationwide, or something—this is my very laywoman’s representation of what he does. Jim is bright and he is wise: he perceives AI as a tool to improve not only his work outputs but also his daily life. And Jim seems to believe that everyone should have access to these tools so that they may use them as he does, as an aid to improve human efficiency and thus quality of life.
One day, Jim read one of my short stories, and afterwards said to me, “If I could teach an algorithm the patterns and style and voice of your writing, then tell it to write your plot/story for you, wouldn’t that be helpful? It’d save you so much time!”
At that moment, I was speechless. I think I fumble-mumbled something about how I “choose language very specifically to suit the needs of the narrative moment.” But I wish I could’ve opened my mouth and had Warner’s words fall out.
To let GenAI write for me would be to rob me of one of my greatest joys. While the finished product (story, book, essay) provides satisfaction—even more so if it becomes published—the process of organizing whatever muddled thing occupies my mind gives me purpose. Using an LLM would rob my writing of its feeling, its humanity, and its capacity to help me make sense of the most complex aspects of my life: relationships, traumas, shadow self, emotions—all the things that make me human and an active participant in human interactions.
I did not have those words available to me the day Jim crushed my fledgling writer’s soul with that suggestion. Thanks to Warner, these words/ideas are available to me now, for the future.
Another thing Warner addresses that Jim overlooks is that not everyone is educated enough to use GenAI, including LLMs like ChatGPT, effectively. While Jim may be using GenAI to accomplish tasks that would otherwise waste the valuable resource of his energy and time, my students are using it to complete their coursework—and when they do so, they short-circuit the learning process. Jim is over 40. Most of my students are GenZ-ers under 25. They hardly knew a world before LLMs; he did. He had the opportunity to develop his critical thinking skills without free, flashy, broadly available shortcuts to tempt him. In this sense, perhaps the largest contributors to the problem are corporations like Meta, Google, and Apple, who force-feed us more and more AI until it becomes ubiquitous. It is because of this that my students think there is an AI tool for everything and thus use it in my classroom.
There may well be benefits to using Generative AI and LLMs. I promise to keep my mind open for them and identify them when I see them. I promise.
In the meantime, it’s tempting to restructure my Comp I class around the theme of Generative AI in the writing classroom and require students to read this book as part of their literacy education. At the very least, it could encourage their skepticism of corporations selling them new AI products, which alone might serve as a foundation for critical thinking in the information age.
Fundamentally, however, as a writer I stand with Warner: “It is frankly bizarre to me that many people find the outsourcing of their own humanity to AI attractive” (7).
(all books get 5 stars) How do modern writers deal with the reality of AI? By understanding that what AI does is not writing. Writing is a uniquely human activity that has to do with thinking and feeling, two things AI can never do. If AI is threatening to our modern academic institutions with its ability to reproduce syntactically and grammatically correct language, that is because what we are teaching students to do in classrooms has little to do with writing.
I'm a big fan of simple, clear prose (especially when you are talking about complex subjects) and Warner utilizes that here. Although he is an expert on teaching writing, he does not need to use jargon to make his points. He also manages a nice balance between pragmatic acceptance of AI and caution. This makes the book accessible to all readers - including high school students - and I think that's a really worthy aim for this topic. Recommended.
One might draw parentheses around "in the Age of AI" when it comes to John Warner's excellent book. If there is one thing that is certain, generative AI has made it necessary to think about writing in general, as the assumed ubiquity of AI has implied definitions of writing that are certainly unsatisfactory from a pedagogical standpoint, and that stand as evidence of the marketplace's power to (try to) shape our destiny. But this isn't just another example of capitalism's dominion. Many uses of AI ask how much of our humanity we are willing to relinquish. The answer is demoralizing for many of us, yet Warner does provide a framework, which he details in the last section of the book: Resist. Renew. Explore.
Warner starts, however, at the "beginning". He eschews "intelligence" in favor of "automation" -- the real function of AI. I'll admit to a strong confirmation bias, but Warner puts AI through its paces to offer a well-considered and informative critique that I found incredibly helpful in quieting the bile that rises in my throat when it seems everyone has just obeyed our AI overlords in advance. He begins with an accessible explanation of what ChatGPT is and what it is doing when activated. While it is informative, it also serves to remind us that, at some level, we must understand how technology works rather than just allow ourselves to be uniformly awed (or galled) by its "magic." He is openly critical of the propaganda put forth by AI advocates who stand to gain financially (e.g., Sam Altman), but carefully debunks their claims rather than resorting to panicked invective.
Some of the chapter titles read like tongue-in-cheek clickbait, but they add to Warner's overall sense of humor, which pops up throughout the narrative. To be sure, we are reading a very human writer.
Chapters 3 to 9 offer a more personalized view--almost a mini-memoir of Warner's own life as a writer-- but peppered with rather significant points about semiotics and rhetoric that are a heckuva lot more reader-friendly than most of what is written about semiotics and rhetoric. On a personal level, Chapter 6, "Writing is Feeling" touched me the most, and I think mileage will vary on that depending on the life experiences of the reader. I wasn't quite prepared for tears in encountering one of the most perfect meditations on grief I've ever read. I won't quote it here, but it is on p. 84 (hardcover). It underscores that this is very much a book about being human.
Chapter 7, "Writing as a Practice" felt a bit less useful and more of a (gentle) mouthing-off against the "one key thing" mentality that prompts us to enthusiastically adopt the shiny thing du jour. His diplomatic takedown of Gladwell and Duckworth's themes felt more gratuitous than other parts of the book, but that may be because I needed no convincing at the outset.
Writing teachers (and teachers that use writing) will find chapters 11 to 14 particularly useful, especially if they are interested in having conversations with their students about AI--or rather, about writing. The title for Chapter 16 privileges an anecdote that Warner uses to address one of the most important points of all: writing as intention.
Importantly, Warner encourages constant education, but measured by our own specialities and areas of focus. We cannot possibly read all the things about AI (my Substack feed overwhelms me every day), but it is important to push back at our own confirmation bias as well. I appreciated that Warner notes that he is almost "more obligated to read [Ethan Mollick] because I disagree with him" (275). There's hope if we engage with thoughtful voices like Mollick, Marc Watkins, and others. Warner says we must foster community:
"Our communities inevitably must contain both those with whom we agree and those with whom we differ. As long as they are willing to see themselves as a member of the community with the well-being of the community in mind, they should be welcome." (275).
I'd like to print that out banner-size and hang it in a few places...
From the morally questionable founding of AI, to the degradation of labor (and human-ness), to the careless implementation of automated grading, Warner is clear that we are leaning toward a Faustian bargain when it comes to AI. As a teacher, I was particularly struck by this:
"Writing is meant to be read. Having something that cannot read [AI] generate responses to writing is wrong. It is a moral betrayal of our responsibilities to students." (240) Far too often in discussions of AI I have heard "efficiency" used as a synonym for "pedagogy" and they are certainly not the same thing.
But Warner is also pragmatic: "There is no wishing away AI at this point, meaning it must be grappled with and done so in a way that preserves our humanity." (128) He allows for the limited use of LLMs in processing text (not reading, not writing): "Only humans can read. Only humans can write. Don't let anyone tell you otherwise." (123)
AI has made it necessary (possible?) to critique our values when it comes to a lot of things, but especially writing. Most educational systems are founded on valuing product over process, so we can't be that surprised when we find that students are using ChatGPT to "cheat." Efficiency is key in the systems we uphold. If we want to truly have our students embrace the "messiness of learning", we have to stop honoring that which privileges standardization and the mechanization of education. The second part of Warner's framework is "renew" and he makes a more-than-convincing case that we can refuse to assimilate into some sort of algorithmic Borg, and instead embrace the human processes of reacting, observing, analyzing, and synthesizing as cause for celebration, rather than erasing them in the name of efficiency.
I'm surprised at the glowing reviews of this book. As a lifelong lover of human-generated textual content from novels to essays to marketing to technical content, I've been thinking about writing (and reading like a writer) for most of my life. I enjoy reading 19th century literature, I can keep up my side of the conversation with any English major, and I'm (just) old enough to remember the pre-digital age vividly. However, I've also been an enthusiastic user of personal computers for 35 years, and have been employed by several of the big tech companies that are now part of the AI revolution. Maybe my background makes me less impressed by Warner's jeremiad against the "automation" of writing. He's not wrong about the value of human thought and feeling and about the ability of written words to both communicate and generate thought and feeling. But his points, though important, are also well worn, and not fresher for being stated strongly. And he spends a great deal of time trying to support his points by making technical arguments he's simply not qualified to make.
To be more specific, Warner spends several chapters insisting on the "toaster" interpretation of artificial intelligence: that artificial intelligence is no more intelligent than any appliance, so that any interpretation of its output as smart, conscious, or perceptive in any way is flawed at its core. I concede the point — for now. But I believe this viewpoint will quickly become dated, and it's a dangerous base to rest an argument on. The fact is that the capabilities of AI are expanding so fast that the very scientists and engineers who are working on it are often surprised and often cannot explain its accomplishments. And although there are many qualified scientists who agree with Warner that AI merely simulates intelligence and cannot be thought of as the product of some sort of cybernetic mind, others disagree, and believe that AI will soon become truly intelligent and impossible for human beings to predict and control. For the moment, it's easy to take an AI-generated text and point out its shortcomings - the blandness, the clichés, the not-quite-rightness, and contrast that with the comparative richness of a human production. But regardless of whether AI ever becomes literally intelligent, it is improving so fast that the differences will no longer be apparent. Critics of AI need to take this into account and not assume, as Warner does, that the flaws of current AI are so inherent that they will always exist. I do not believe they will. Every example Warner provides of inadequate AI output will be corrected and improved in future systems. Warner will disagree, but he does so not because of an argument but because of his axioms: that artificial intelligence is not real intelligence, and that human intelligence can never be simulated. He cannot prove these things, and I am unwilling to consider them proven.
The result of Warner's approach is a book that is sometimes tedious to read, as the first 60 pages of the book are the equivalent of the author buttonholing you and insisting over and over that AI isn't what you supposedly think it is. Better are the sections that come from Warner's own experience as a teacher. But unfortunately, the book doesn't live up to its subtitle How to Think About Writing in the Age of AI. Warner simply argues that we need to think about writing the way we always have, and not believe that AI can replace the human obligation to be creative, as a student, a teacher, or any other writer. He's right, of course, but anyone except a tech bro would surely agree. And his arguments about the nature of AI and his complaints about ChatGPT? If recent history is any indication, they'll be obsolete in two to five years.
Like Amazon, avocados, and Colleen Hoover novels, my reflexive reaction to any conversation about generative AI is to wince and turn away. I don't teach writing myself, but I work for a nonprofit literary arts organization whose whole mission is to teach writing (StoryStudio, shout out!), and for that reason I'm terrified of generative AI. I'm also angry at tech bros like Sam Altman who used copyrighted material to train his large language model, ChatGPT. And I'm worried AI is a shortcut for so many youths these days who don't seem to need too much convincing to take shortcuts (old man yells at cloud!). Because it's so distasteful, I've largely avoided going much deeper than surface-level knowledge about generative AI. The extent of my experience with ChatGPT is the one time I asked it to give me a list of 1990s grunge band names. What it gave me was so hilariously bad (Mudstain! Soggy Flannel! Gravel Gaze!), I've never been back. AI may be stupid, but it's still ubiquitous, and so still very concerning.
So John Warner's new book More Than Words: How to Think About Writing in the Age of AI is a soothing balm; a book that will help demystify AI and gently talk you off the ledge. If you're a Chicago book person, you're probably familiar with Warner. He writes as the Biblioracle in the Sunday Chicago Tribune (as books coverage has dwindled, his column remains a stalwart). He also writes about books and writing in a terrific companion Substack titled The Biblioracle Recommends.
More Than Words truly meets the moment in terms of explaining what AI is, what it is not, and most importantly, how writing can and will still thrive in the age of AI.
Warner writes: "Writing is thinking. Writing involves both the expression and exploration of an idea, meaning that even as we're trying to capture the idea on the page, the idea may change based on our attempts to capture it. Removing thinking from writing renders an act not writing."
There is a lot to love in this book, but that quote to me is the central takeaway. Though what ChatGPT does *resembles* writing, of course, what ChatGPT does IS NOT writing. What ChatGPT does is place tokens in syntactically correct order. Writing requires thought. And more thought. And pain. And then some more thought. Despite its name, artificial intelligence does not think. So artificial intelligence does not write.
Further, what ChatGPT does is DEFINITELY not creating art. Art requires feeling. And obviously, AI has none. "What I want to say about writing is that it is a fully embodied experience," Warner writes. "When we do it, we are thinking and feeling. We are bringing our unique intelligences to the table and attempting to demonstrate them to the world, even when our intelligences don't seem too intelligent."
How to teach writing in the age of AI, how to push back (resist?!) against the most nefarious uses of AI, and maybe even some positive use cases for AI (if we're careful) related to writing are all discussed in this book, as well.
I needed this book badly and I can't recommend it more highly to you if you care about books and writing, as well.