In the movies, robots can be terrifying. In reality, thinking machines are disrupting the world in ways that are even more disturbing than in Hollywood fantasies - but they also have the potential to change our lives for the better.
In this stirring, visionary work, acclaimed roboticist Dr. Ayanna Howard explores how the tech world’s racial and sexual biases are infecting the next generation of Artificial Intelligence, with profoundly negative effects for humans of all genders and races.
Drawing on cutting-edge research, and her own experience as one of the few Black women in the field of robotics, Dr. Howard shares how she navigated bias in her own coming-of-age as a roboticist. She also reveals how the world of computer programmers, which largely lacks women and Black people, is producing thinking machines that too often think like their flawed creators.
The danger of bias in our AI-powered machines has never been greater. Governments are using supercomputers to track COVID-19 patients. AI is being employed to monitor Black Lives Matter protests. Voice recognition systems have been rolled out that can’t hear female voices. Dr. Howard delivers a stirring warning about the risks of AI and robots - but also offers an uplifting message about empowerment and where we need to go next.
I was soooooooo on board with this book, until the author starts in with the whole Men’s Rights Activist claptrap about how “women experience things EMOTIONALLY whereas men experience things rAtiOnAllY,” as if trans people hadn’t been invented back when this was published in the ancient past of the year 2021. The author cites periods from her life to explain how she arrived at this conclusion, but all the evidence shows is that she was shamed into the gender binary. Her father ended every debate with her as a child whenever she “showed emotion” and followed that up by telling her she could talk about it with him again when she “calmed down.” A male classmate in college criticized her for not showing emotion like a “normal” woman, so she took acting classes to better emulate the model of womanhood that men expected. This is social abuse she is reacting to; this is not her reading research papers (that don’t exist) that tell her that binary gender socialization is an incontrovertible fact of nature. This is her being groomed by men from a young age to only react in ways men find acceptable. What men refer to as “rationality” is men DESCRIBING HAVING EMOTIONS. That’s all. Men don’t operate by default from a place of science, facts, or logic. Women aren’t born too emotional to handle their feelings. We are berated and hurt over the course of years until we make concessions to patriarchal standards of behavior. These are social biases and historical biases based on moronic hunter-vs.-gatherer prelapsarian notions of how one kind of genitalia body “is supposed to behave” compared to another kind of genitalia body.
If you have the patience to finish a book about biases that can’t confront or examine its own biases, godspeed then, I guess? There is still a shit ton of worthwhile stuff in here about racism, but there are other sources that cover biases in technology. I’ve even found better writing by the author about how we shouldn’t inflict the phenomenon of gender…on robots. As for people? Well, I gave her the rest of chapter three to correct this men-are-from-Mars, women-are-from-Venus nonsense, and she did not, and I am over it. Maybe she backtracks after that, I truly don’t know, but I won’t intentionally listen to more of the hurtful crap I’ve already had to hear my entire effing life.
This subject is not new to me so I guess that might have affected my rating.
This is a nice summation of how artificial intelligence can be racist or sexist or otherwise biased if the people programming it are, and since the vast majority of such programmers are white males… well, you get the picture. Such biases can be extremely harmful or even lethal. This is a take on the subject that needs consideration.
She also discusses the current state of AI and includes some insightful analysis of where she believes it’s all going. Good stuff.
It was a well-written story with beautiful language and without technical complication for someone who is totally not from the field. I believe too many personal experiences and feelings about racial biases could take away from some of the good ideas and information the author was so skilfully communicating. Saying that, it was interesting to realize how easily it could all be translated to a totally new field like AI. Easy reading, would recommend for a vacation :-).
I was granted complimentary access to Sex, Race, and Robots on Audible via Audiobookworm Promotions in exchange for an honest review as part of my participation in the blog tour for this title. Thank you to both Dr. Ayanna Howard and Audiobookworm Promotions for the opportunity! This has not swayed my opinion. My thoughts are my own and my review is honest.
Sex, Race, and Robots - How to Be Human in the Age of AI is a collection of thoughts on robots and AI in history, in our modern world, and in the future, and how the biases of those creating these machines and systems have become intertwined in how the robots perform and how the AIs make their decisions. It's thought-provoking, informative, and overall a very enjoyable read.
As a woman who has always been drawn to technology myself, I found it refreshing to hear a woman's perspective on this topic. I've been the girl in the high school electronics lab pulled aside and asked if I was there because I was interested in electronics or in boys (I wasn't the flirty type, and I mentored the younger classes). I felt pushed out and ultimately pursued other degrees both times I attempted a bachelor of engineering. Attempt #1, I was 1 of 16 girls in the first-year class, and 12 of the others were international students. Attempt #2, I was 1 of 6, and 3 of us switched to other degrees by the halfway point. During the one work-experience semester I did before transferring the second time, the only other woman in the office was the secretary. (And that's the story of how I spent 9 years in undergraduate studies and ended up with the vastly different majors of history and computer science.)
Dr. Howard touched on how AIs learning from records of interactions in a historically patriarchal society have unintentionally learned to attribute genders to professions, which becomes a problem when they're used to filter job applicants. How the sensors on self-driving cars, which rely on light reflection, don't register darker skin tones as well as paler ones, or how they fail to identify children simply because they're usually tested during school hours when most children are safely indoors. How speech recognition software responds best to male voices speaking in predominantly white dialects and accents because that's primarily who's designing and testing the programs. How search engines return images of startup CEOs when asked for rebellious white men, and mug shots when asked for rebellious black men, because those are the images and attributed descriptors they have been fed. The field of engineering in the western world is still very much a white man's playground, and because of that the robots and AIs being built are indoctrinated to be both sexist and racist, even if that's not the intent. This reminds me of a concept taught in sociology and psychology called hidden curriculum: the accidental lessons taught alongside the intentional ones. A child who grows up only seeing male doctors and female nurses, especially in an environment that also belittles nurses as less skilled or less intelligent, grows into an adult who doesn't trust female doctors. This is what we're doing to our robots and our AI systems.
In short, this is a book about ethics in robotics and AI and how we need diverse minds and voices in their creation in order to avoid passing on our own biases to the non-human workforce of our modern world. It's insightful and very well written, and I would absolutely be interested in reading (or listening to) more from Dr. Howard.
The audio performance by Amandla Stenberg was perfect, too. She was easy to listen to, had no trouble with technical terms or all of the various names of scientists and authors referenced. I usually listen at 1.5x speed and that felt like a normal conversation pace.
One of my favorite books on bias in AI. I love the objective view on bias — acknowledging that it’s a natural behavior that we cannot escape — while still trying to formulate constructive solutions.
Even more, because ... “No one will escape [untouched] by the issue of bias in AI. No matter how privileged you imagine you are, there is always someone with a bias against you. And when that bias comes from a robot, wrapped in silicon and steel, it could be an unstoppable force.” ... addressing bias in AI should be everybody’s problem!
This book is a great read if you are interested in learning about AI and its possible future. Read this, and then watch "The Social Dilemma" and you will want to go off grid ASAP.
The interesting thing is this book highlights the impossibility of not using technology now that it is here. The question becomes how to use it ethically.
It was highly entertaining and educational for a non-fiction book (high praise from a fiction-only connoisseur).
This book is full of insightful and thought-provoking ideas and information, though I have to admit it started stronger than it ended. She lost me by reiterating the doomsayers' mantra of "AI may take over and destroy humanity." The rest of the book was good.
i’d give 3 but the bonus one is for this book at least attempting to give us some tips on what we should, could?, do: take responsibility, dont be lazy, dont get comfortable and, more than anything, do not be stupid
Fascinating book. It is insightful but also an inspired biographical look at being both a woman and a woman of color in technology. Great read for anyone interested in AI and those students who are forging their path in computer sciences/robotics.
Found it a really interesting & easily digestible book on how our own biases are being fed into current AI & the ramifications of it. Also an eye-opening first-hand account of the challenges faced by a woman of colour in tech. I did however find her suggestions of how we improve AI a bit too ambitious & lofty, e.g. that people should stop working dead-end jobs & find something they are passionate about, which, while a nice sentiment, is not necessarily attainable or practical advice for the masses of people that will be displaced by AI/robots.
I enjoyed learning the author's personal story. However, this book pushes the big-business tech-progressivist party line at the heart of social and environmental stressors while wrapping the message in the humble brag of a successful engineer author who falls into several oppressed social categories.
It's a misleading tautology to argue that tech problems have social and/or ecological -- not technological -- foundations. Gun advocates argue that guns don't kill people; people kill people. Car companies make the same argument over car deaths. (Incidentally, this book puts the rate of car deaths at just under 1/100 in the US, but according to Wikipedia, it's 12.9/100,000, or 1/7752.) Dr. Howard argues that AI doesn't reinforce bias; people programming and using AI reinforce bias.
Borrowing from the playbook of centralized power institutions (industry, state, and patriarchy), the book recommends that the best way to address fears about AI is to individually be better people. Afraid of AI? Change the gender of your AI voice speaker to a male. Afraid of climate change? Turn the lights out and use CFLs. Afraid of police violence? Only walk in certain neighbourhoods and dress in a certain way. Afraid of being sexually assaulted? Wear less sexy clothing and carry an assault whistle. This reinforces and distracts from the systemic factors that perpetuate these problems while giving people a placebo fidget spinner to make them feel like they're doing something.
Moreover, this approach suggests the general public is at fault for not doing enough or not being good enough people. It shifts the blame away from those who develop and profit from the technology, allowing them to continue with business as usual while everyone else struggles to find fixes until they grow weary of the topic and feel defeated. This is a compound blow to the public, who not only suffer most from tech problems, and not only shoulder the blame for not doing enough to fix those problems, but who are often forced to pay industry through taxation and subsidies for the development of the tech and then again through their pocketbooks when society tells them they need to purchase or subscribe to the technologies.
I agree that the problems created by AI are probably not all that novel. I also know that hardware robots (as opposed to software bots) are just tools with no eyes, no brain, and no sinister secret plots against humanity. But technological tools continue to perpetuate power disequilibria. There are far more problems than this book mentions.
Yes, robots can pick coffee. But for logistical reasons this is only possible if we breed sub-species of coffee trees so that they may be picked by machine: full-sun, high-output shrubs monocropped in too-closely-planted neat rows on soil intended to be obsolete for planting within a decade; only if we remove all biodiversity from the coffee plantations with chemical pesticides; only if we replace ecosystem services with chemical, fossil-fuel-derived fertilizer input. For economic reasons, mechanized coffee picking is practical only if the required mechanical inputs, such as metals and energy, are subsidized by state military-backed "free trade" exploitative extraction; only if the same "free trade" regulations allow cheap exportation of coffee beans from equatorial countries, "adding value" to those beans with subsidized sugar and milk; only if we can ship the used waste materials to Malaysia and the Philippines. In other words, it's only practical to have robots do the agricultural work of humans if we accept the requisite environmental and social devastation for keeping the upper caste the upper caste and sweeping the problems under the rug.
Yes, AI could take over most executive-level work, but since executive workers are the ones with the funding and decision-making power, it's not likely. Instead, AI and robotics are exclusively replacing non-executive jobs. Moreover, AI algorithms are being trained for free by the public or by low-pay employees. We solve CAPTCHA image-recognition problems that could be used to train self-driving cars to recognize buses or bicycles. Translation AI uses the work of real human translators, without paying them for that added use of their work, while taking work away from them as AI takes over the field. Our personal information, online activities, and media uploads are fed in as big data. People who used to earn money with creative writing are now paid to churn out and mindlessly test AI prompts in the same way that artisanal creators in the 19th century became mindless industrial pin-makers on assembly lines. Just as with assembly-line work, these jobs can be shifted anywhere in the world with the lowest wages.
Yes, AI might take jobs away from immigrant gig economy Uber drivers. But there is no certainty that AI driving algorithms will be safer. (I have completely disabled the $10k driving AI software in my car because it's nearly crashed the car multiple times.) Also, people who drive for work are not just going to retool as robotics professors as the author recommends because these jobs are already held and guarded by people who have paid years of study and tens of thousands of dollars into the system that keeps them employed. Meanwhile, the rich get richer, and externalized costs continue to pile up on the environment, society, and future generations.
I wonder what Amazon's incentives were to produce this Audible original. It probably helped that the book didn't recommend unionization. Unionization could actually address some of the work problems caused by systemic bias. What does the author recommend instead? Employees should work harder to find missed opportunities and take the initiative more often. Create your own job title and you'll be happy. Go ahead; I'll wait. After all, there are examples of upper-caste people who have created their own job titles, and they're presumably happy.
There were many other conspicuously absent recommendations: More decentralized community economy and regulatory institutions (see Elinor Ostrom); Less consumerism; Fewer expectations that we should be able to get anything we want delivered any time of the day or year to our house within 24 hours; Fewer expectations that we shouldn't have to do certain jobs because the work is less prestigious; Less reliance on technology that's owned and controlled by centralized power bodies; More recognition of the social, economic and ecological value of technologies that are easy to access and cheap, such as ecosystem services; More self-reliance and community reliance; Less government subsidization of the rich; Fewer lobbyist-written regulations; More artisanal hand-crafting; Less mass production and planned obsolescence; Less colonial extraction of industrial input materials. You get the point.
As Charles C. Mann phrased it, the author is clearly on Team Wizard, not Team Prophet. I switched teams to the prophets after pushing through many of the above arguments and realizing that centralized power systems largely use technology to maintain those power systems. Dr. Howard sets the bar pretty low when it comes to addressing the use of technology to reinforce bias-based institutions that perpetuate social caste divisions and ecological harm: If things go well, we will be allowed to maintain our humanity. She doesn't really ask what that means, and although she calls out instances of power abuses, she doesn't critically challenge the systems of power that enable those abuses. Just as she argued in favor of Reagan's military Star Wars program because she liked its branding, she argues in favor of robots and AI because she likes the marketing hype, and she has become monetarily and personally invested in more of the same.
Sex, Race, & Robots is insightful, entertaining, and educational – a great combination for any non-fiction book. As a science fiction fan, I enjoy a good Artificial Intelligence (AI) story. And this book couples well with that because it explores the ethical issues surrounding real-life AI. In order to dive into that subject, the book starts by highlighting some of the ethical issues humans still struggle with in regards to other humans. The question that was most on my mind was, how can we as inherently flawed humans with inborn prejudices build truly impartial yet compassionate AIs?
Dr. Howard presented several areas where humans struggle to attain just and impartial assessments of other humans, sometimes drawing on her own life experiences to highlight these issues. They range from gender inequality to race and ethnicity prejudices to cultural ignorance. I especially liked the example of training an AI to recognize images of wedding dresses. The AI did well with white wedding dresses but, when presented with wedding dresses from India and other countries, marked them as costumes. This example really summed up a lot of things for me. The knowledge base, as well as any innate prejudices the AI programmer has, really feeds into how that AI perceives the world around it.
The book also lays out pros and cons of having AIs in our everyday lives. Is it worth it? Is it inevitable? Is it necessary? If we don’t build it, someone else will, so perhaps we need to do it first and do it well, right? Lots of great questions & discussion in this section. I especially liked the part on self-driving cars.
The social media section is especially relevant because AIs are there now, assessing what we enjoy and doing their best to feed us more of what we like. Or are they? Are they misinterpreting why we linger on this image or that image? There are a lot more thought-provoking bits covered here.
The pacing was great as Dr. Howard doesn’t make it all super serious and doom & gloom all the time. Fun little anecdotes were tossed in. Overall, the book is very approachable while also remaining a thing of substance. Oh, and thanks for tossing in that discussion of the movie Revenge of the Nerds. 5/5 stars.
The Narration: Amandla Stenberg sounded a little young for me; I kept picturing the author in her early 20s. But other than that she did a great job narrating this book. Her pacing was perfect and she captured the tone and feelings of each section. There were no tech issues with this recording. 4.5/5 stars.
I received this audiobook as part of my participation in a blog tour with Audiobookworm Promotions. The tour is being sponsored by Dr. Ayanna Howard. The gifting of this audiobook did not affect my opinion of it.
I like AI, and this is a book that I wanted to love. Here's what I experienced instead.
The text felt very repetitive. Maybe there were nuances in the repetition, but if so they didn't stick with me. In response to job loss, her answer was to suck it up and work harder (my words, not hers). I was hoping for a more innovative approach, because that kind of advice is available for free from pretty much anywhere. She also advocates for becoming better humans (examining our biases) as we adapt to AI. It's a wonderful idea, but an impractical solution, at least on a really large scale.
The primary message about bias being built into AI is important. We need to understand how this has happened so that we can do better. Or at least so that we can demand better, since the vast majority of us are not in a position to address it directly.
Overall this book just didn't hold my attention, and by the end I was kind of zoning out. The stories about existing AI systems that have gone wrong were the most interesting parts and are definitely worth hearing, so I would have liked more about that.
Interesting, but doesn't go as deep into issues of bias as I hoped. I also have to agree with some of the criticism of the book as skewing solutionist -- assuming a technological or technocratic solution exists to most problems, regardless of the nature of the problem. I was also disturbed by the disconnect between the argument being made in the book and some of the real-life interactions with humans described by the author -- like when she says she told a kid that robots would come to his house and seek revenge for his (very typical, childlike) behavior, i.e. poking her expensive robot. What?! That is not remotely the appropriate response to a child's misbehavior. Is that not an example of the kind of human bias that the author is arguing that we need to address critically? Why was this anecdote in the book? Mixed messages.
One of the longest six-hour-audios in my life. (Oxymoron, anyone?)
Ayanna Howard works in tech; she's a person of expertise and also knows what it's like for minorities in this age. Her perspective was very informative.
Do I know more about the topic? Yes.
Will I continue panicking? Also yes.
I don't think her solutions are very applicable, they sorta need people to be good and care and stuff. I don't feel safe thinking about it.
Though this book was quite insightful, it veered too much into autobiography territory and spiralled into unrelated topics - sometimes the author wrapped it up neatly, but sometimes it was plainly irrelevant.
I plan to read more about this, but I think this is quite comprehensive when you look at its size.
This book was doing a lot, which at times worked and at times meant that the transition between thoughts was a little rough. I loved the stories about the author’s journey to robotics research, and as a black woman who was in the field for a while, I found that so much of this resonated. In particular, I’d never thought about how the entire concept of qualification exams is fundamentally flawed and rife with bias. I also thought the messaging that bias is the real thing to be worried about with AI, and that the field needs more ethics and diversity, was particularly strong. Mostly I think it just needed a little more editing and a little more of a transition between many of the concepts included in a single chapter. Overall I definitely recommend this book!
This was a very interesting read. I don't know much about AI, so I was a little afraid that I wouldn't understand things in this book, but the tech aspects were described in a good way so even a noob like me would understand it. And it was very interesting to get a glimpse into the history of the internet and tech and how it's all changed in just a few years.
It was well written, if a bit repetitive at times, and meandering. I often had trouble connecting the conclusion to the opening question, for the road there was very long and winding, but also interesting.
I think I'll give this book a reread some time in the future, and see if I understand the things I missed this time around better then.
The author, perhaps rightly, assumes a lot about her audience. She spends a lot of time debunking ideas and approaches that I've never found particularly compelling. She moved slowly through material I found banal and familiar, then would quickly gloss over ideas that I'd never encountered before... As if those were the ideas that might bore her audience if explored too deeply.
She talks a lot about bias, but doesn't seem willing to admit how deep her own biases run. It's clear she believes deeply in solutionism and progress. She seems completely blind to these lenses.
Dr. Ayanna Howard brings up some real concerns, and I have the feeling that I would agree with her stance on many of them, but she spends most of this book undermining her own credibility with things like: anthropomorphization of autonomous vehicle algorithms using the Trolley Problem, and use of Isaac Asimov's Laws of Robotics as a starting point for AI ethics. These laws were introduced in a work of fiction in 1942 not as a serious suggestion for creating ethical robots, but as a literary device to create conflict and an interesting story!
So as a black man in this country, not only do I need to understand how black people were screwed over in the past and how we're being screwed over right now, I also need to know how we are going to be screwed over in the future. Ayanna Howard does a great job blending her own narrative with current issues. She also writes about issues in a way that shows how they affect everyone, no matter their sex or race, all the while keeping a hint of optimism.
I really enjoyed the content, though at times I found the narrator's voice and tone difficult to follow. If the goal is to influence readers to carefully monitor our tacit tendency to believe mechanization, robots and AI algorithms are purely objective, you succeeded with me.
Weaponization of existing and pre-existing human biases has become all too real and threatening as our legal and social infrastructures continue to increasingly lag behind technology.
Didn’t love it, I felt like this book needed an editor. Definitely a lot of interesting stuff in here but it felt like the author threw in any random topic of interest. Was expecting a book about bias that has seeped into AI/robotics/tech, and there’s a lot of that, but then there are chapters on things like the power of introverts and mansplaining which made everything feel disjointed to me.
Howard shares ethical considerations of AI through her audiobook. She tells how AI (and people involved in AI) can be sexist, racist, and show biases and discrimination. She shares personal stories and makes them relatable to us. The audiobook seems like a good introduction to the AI fairness topic.
Such an important aspect of AI to be aware of, before it's too late. A unique perspective, part biography, part AI overview, part social commentary. Dr. Howard’s story is truly unique and inspiring. She describes herself as an “anomaly” and indeed she is. But with her and others blazing a trail, we will have more diversity in AI / robotics, which will benefit all.
The author did an amazing job of analyzing the intersection of prejudice and AI in all its iterations and implications. I appreciate how she wrote about the technical aspects in a way that was accessible to a general audience. The interdisciplinarity baked into the analysis added depth and creativity to this truly unique work.
I listened to this book for Women's History Month and generally expected a discussion on how racism and sexism may have affected her career in robotics and engineering. It had that and more! Sex, Race and Robots is a refreshing take on human bias and how it may impact not only our own future but the future of AI.
Addresses how AI, touted by many as objective & non-racist, is in many cases more racist than current thinking. Also it details the obstacle course faced by Black women pursuing a career in AI or robotics. Additionally, she serves as a role model for any minority or female who wants to pursue a career in these areas.
I enjoyed this journey to better understanding of AI (Artificial Intelligence), ways to make better choices when interacting with AI, and what the future might bring. Important listening not just for Sci-Fi writers, but anyone who ever does a Google search.
This is an excellent introduction to some of the issues of AI and how we need to get ahead of them. There are a lot of personal experiences from the author, but she can "walk the walk", and the audio version is expertly narrated.