AI: The Good, the Bad, and the Ugly
We’ve been hearing probably more than we can stomach about AI lately, so I figured it’s the perfect time for a pest like me to add my voice to the cacophony. And while my title leans on an overused cheeky riff on the classic 1966 Sergio Leone film, I feel it’s well warranted here. But before diving into my tripartite opinion, I’d like to start with some personal history.
AI & Me, a Like Story

I first became aware of work on Artificial Intelligence during my time in graduate school (1982-85). I was pursuing a degree in Cognitive Psychology with a focus on problem-solving. At the time, AI was being revived after a couple of decades of dormancy, primarily due to the growing ubiquity and computational power of computers. My goal was to study the cognitive underpinnings of problem-solving using learning by metaphor. For example, at the time it was common to teach that the atom was somewhat like a solar system, giving students a recognizable model to adapt from. My research led to some fascinating discussions with grad students in computer science, and I began considering a joint project across the two fields. That’s how I came up to speed on AI and was captivated. In the end, I tossed it all aside for a job paying good money (a story for another day).
However, that was not my last waltz with AI. I returned to graduate school while working full-time (another story for yet another day), this time in the cognitive field of speech production and speech recognition. My mentor had written a study showing that people can detect whether speakers are smiling (even with the emotional component removed) while listening to recorded speech samples. It boils down to facial expressions morphing our vocal apparatus, which results in audible changes to phonemes. I managed to mimic that effect in computer-generated speech. Fun stuff, right? Anyway, another graduate student in computer science was interested in my work and wanted to ‘teach’ it to his neural-net-based computer model. So my knowledge of AI broadened, and I was again fascinated with the AI field from this new angle.
All of this to say that, from the outset, I was very interested in and felt positive towards AI.
The Good

While AI has maintained a steady growth curve of advances, the last decade has seen it catapulted into the public consciousness as never before. From my point of view, this isn’t due to any earth-shattering advancement in technology so much as a shift in the goals of its use – and I’ll leave the discussion of use for the Bad and Ugly parts.
So let me make a brief list of some of the many areas AI has been used in, separate from this relatively recent shift. These applications went mostly unrecognized by the general population, occasionally popping up as small items on slow news days. AI has been leveraged in almost all branches of the sciences, often with highly positive impact. AI has been trained to recognize toxic chemicals at the molecular level, predict tumor-killing cells, design new drug candidates for treating cancer and Parkinson’s, improve crop rotation schedules for increased sustainability, aid in discovering new synthetic materials, reveal structures in astronomical phenomena such as black holes, assist in answering complex mathematical problems, predict geological fault slips, facilitate both motor movements and neural receptors for prosthetic devices, and – best of all – decode dog vocalizations. (Regarding this last item, there’s a Far Side cartoon depicting a scientist decoding dog barks – all the dogs in the neighborhood are shouting, “Hey!”)
Remember, the preceding list of work in the sciences is far from exhaustive, but I think you can readily see that in those fields AI is a formidable tool. And here’s where I’d like to point out a commonality in that list: AI is being used as a tool for the people in the field, not as a replacement for them. No one is being told to clear out their cryogenic chambers because their job can be done better – sorry, more cheaply – by an AI application. AI, here, is playing the same role as the microscopes, calculators, Bunsen burners, computers, and Igors that scientists have long relied on to further their understanding of the universe. So, in the halting voice of Frankenstein’s creation: AI good. Now if only that were true across the board.
The Bad

Using data scraped from the web, AI apps were created whose predominant purpose is to replace people in the creative fields. Look, it’s a good day when I can draw a reasonably decent stick figure. I have to rely on digital tools to help me create graphics and such. So I understand the appeal. Being able to say, “Make me a painting where pigs with wings are migrating north over Iowan cornfields in the style of Edward Hopper,” and nearly instantly having a professional-looking poster of said porcine tableau is undeniably cool, despite some of the pigs having more than four hooves. I can probably Photoshop that out. So it’s difficult for me to fault the users. The companies behind these apps, however, are well deserving of my scorn.
The first thing that riles me is the insulting disparity between the people these apps target and the prevalent business biases against those same people. The targets I’m specifically referring to are those working in the arts. And to make clear that I mean this across all of the arts, I’ll use the term creatives. I’m not fond of the term, but artists tends to conjure images of those working in a medium such as paint. So we’re talking, currently, about anyone working in a field whose output can be rendered digitally: writers, artists, musicians, singers, animators, actors, fashion designers, etc. I say currently because, if you consider 3-D printers, you can add modelers and sculptors to a degree; and I might also say architects are on the horizon as well.
The discrepancy I refer to is this: the artistic fields targeted by the creators of these apps are the same ones that are grossly underfunded and often ridiculed as career choices. The truth is, for rank-and-file creatives, these fields are most certainly not lucrative. That’s always been the case, but the combination of ridicule and educational defunding – while these are the first fields targeted to make millions through job replacement – is beyond insulting and demeaning. This process will make creatives a rarity. I’d like to point out that, historically, when dictatorial regimes took power, the first groups they targeted were the creatives and intellectuals – the ones who help people’s minds expand and grow in unexpected ways. I’m not insinuating by any means that this is the goal, but I am saying the end state – a population’s stunted mental growth – is the same.
Also Bad is the corporate attitude toward AI. I firmly believe, both from personal experience and from following major corporate decisions, that the people at the highest levels of any given company (say, at least 90% of them) lack vision, creativity, and above-average intelligence. Ironically, if anyone took the time to study exactly what they do, they’d be prime candidates to be replaced by AI – how’s that for cost savings? I’m sure you’ve noticed what happens in the movie industry when someone creates a unique breakout hit, often after struggling to find a single company willing to take a risk. Soon, other companies’ uninspired, dullard leadership pushes the button on the conveyor belt of clones and sequels and clone sequels. I see that happening now with the AI apps. It’s the big, hot corporate thing, so they’re going to do their best to cram it into everything they have. It doesn’t matter whether it’s useful, whether people want it, or whether it will lead to earnings growth – and the rest of us are left wondering, “Who asked for this?”
The last Bad thing I want to touch on is a late addition. I saw an ad for an app that basically dumbs down books for readers by simplifying the text. While I cannot stop cringing over this and revolt at the prospect of what it entails, I will avoid passing judgement. My experience is that when such things are introduced, should they flourish, they prove a mixed bag of positives and negatives when viewed in hindsight. I can’t help wondering, though, how untextured our speech would become if the conduit that transforms literary quotes into daily sayings and catchphrases were suddenly cut off. I try to avoid being the “in my day” guy, unless it’s funny.
The Ugly

There are two parts to The Ugly, the beginning and the end.
In the beginning, the apps I alluded to in The Bad were trained on data that was often unfiltered, stolen, and lifted without the creators’ permission, and that has led to output which has been plagiaristic, biased, and racist. (Seriously, if you scrape the internet, how could you not consider the sewer-level dregs you’d be swallowing?) On the plagiarism note, I’ve had arguments online with so-called tech bros. All of them spout the same twisted legalistic logic protecting the founders’ immoral choices of training materials. “If the source material was taken from a pirated site, well, company XYZ didn’t steal it – the information is now in the public view.” Apparently, the crime of receiving stolen goods is not a thing anymore. Or, “this is no different than a songwriter or a musician being inspired by another artist’s work.” Intentionally copying is not inspiration. And here’s the point I usually hit back with, the one that strips the argument to its root: why don’t these AI apps use source material from paid creatives? You know, hire people. Simple. The AI companies want fast and cheap (a.k.a. free) and don’t care how they get it. Make the millions first, then use some of that to pay for lawyers. After all, how would a creative earning a pittance (an indie author, for example) be able to fight back legally? It was never about the public domain or giving people tools to create; it was, and is, about stealing from people as an easy way to make millions.
Part two is the end, and that has me a bit unsettled. I’ve focused on the arts, but AI is making rapid inroads into many other fields, including programming, customer service, quality control, etc. The reason for this speed isn’t great leaps in technology; it’s that the output doesn’t have to be good, it just has to be good enough – and often the good-enough bar is pretty low. (Remember our overly hooved flying pigs.) Having a problematic AI program is usually far, far less expensive than paying for salaried employees along with their overhead.
So what concerns me is the potential for a tsunami of unemployment. In the past, when technological leaps targeted workers, it was often in a limited field. I don’t want to minimize the suffering those job losses caused, but considering society broadly, the hits were eventually absorbed as, theoretically, new fields opened up. Now we’re looking at broad job losses, and AI itself, in the corporate world, doesn’t generate new jobs. What concerns me isn’t whether this can be halted or slowed, but that no one – to the best of my knowledge – is talking about contingency plans. I’m a firm believer in disaster planning, and I’m not seeing it here.
Afterthought

You know which TV sci-fi series I enjoyed watching when it was first released? Star Trek: The Next Generation (TNG, as they call it). In its future, human society is no longer preoccupied with making money. Money no longer exists. Individuals focus on the betterment of themselves. I mean, that is the utopian dream, isn’t it? The underlying assumption is that technology – AI, robotics, advances in physics, biology, etc. – would bring this about. I can certainly buy into that. But now I keep puzzling over one thing: How do we get there? How is the transition made from people needing to work to support themselves to people no longer needing to work? If there is a future where technology can do everything we want it to do for us, how does a planet of 8 billion or more working people make that change in economic terms?
And that brings me to my fellow authors and readers. Do any of you know of anybody, in either fiction or nonfiction, who is getting the conversation started on the everyday details of going from where we are to a utopian future? All I’ve read to date is what technology will do, not what the people, governments, and other institutions involved in the transition need to do. And I feel the focus needs to be on the here and now. Start with the premise that AI in 20 years will have the capacity (used or not) to replace 85% of the non-manual workforce (potentially expanded to include a large percentage of the manual workforce if you add robotic AI). I sincerely believe we need to address this now, because even if laws are put in place, there is no guarantee those laws will hold – or what the impact of non-participating countries might be.
Okay, so now that I’ve ranted I feel better.
Or do I?