More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech

"Broussard argues that the structural inequalities reproduced in algorithmic systems are no glitch. They are part of the system design. This book shows how everyday technologies embody racist, sexist, and ableist ideas; how they produce discriminatory and harmful outcomes; and how this can be challenged and changed"--

228 pages, Kindle Edition

First published March 14, 2023


About the author

Meredith Broussard

4 books, 102 followers


Community Reviews

5 stars: 189 (40%)
4 stars: 183 (39%)
3 stars: 78 (16%)
2 stars: 11 (2%)
1 star: 6 (1%)
Displaying 1 - 30 of 80 reviews
Kara Babcock
2,106 reviews, 1,578 followers
April 2, 2023
Often people ask me what I would recommend if I am no longer recommending Invisible Women. Usually my response is the unhelpful, “Dunno, figure it out.” But really, with the number of books I read? There must be more books about technology and bias out there, especially in the four years since that one was published. So when I heard about More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, I was excited to receive an eARC from NetGalley and publisher MIT Press.

Meredith Broussard brings her decades of experience as a data scientist and a Black woman in America to discuss design and data bias in tech, not only along the axis of gender but also race and (dis)ability. As the title implies, the book’s thesis is that the bias we can detect and quantify in tech (and in the social systems, such as companies, that build and maintain our tech) is not present by accident. It’s not just a “glitch” or a bug that we can squash with some crunch and a new release. It’s baked into the system, and solving the problem of bias will require a new approach. Fortunately, in addition to pointing out the problems, Broussard points to the people (herself included) doing the work to build this new approach.

When it comes to the problems outlined in this book, a lot of this was already familiar territory to me from watching Coded Bias, reading Algorithms of Oppression and Weapons of Math Destruction, etc. Broussard cites many high-profile examples, and often her explanations of how these systems work start from a basic, first-principles approach. As a result, techies might feel like this book is a little slow. Yet this is exactly the pace needed to make these issues accessible to laypeople, which is what Broussard is doing here. With systems like OpenAI’s ChatGPT making a spectacle, it is imperative that we arm non-tech-savvy individuals with additional literacy, inoculating them against the mistaken argument that technology is or can ever be value-neutral. Broussard’s writing is clear, cogent, and careful. You don’t need any background in computing to understand the issues as she explains them here.

What was new to me in this book were the parts where Broussard goes beyond the problems to look instead at the solutions. In addition to her own work, she cites many names with which I’m familiar—Safiya Umoja Noble, Timnit Gebru, Cathy O’Neil—along with a few others whose work I have yet to read, such as Ruha Benjamin. In particular, Broussard enthusiastically endorses the practice of algorithmic auditing. This procedure essentially upends the assumption that machine learning algorithms must be black boxes whose decision-making processes we can never truly understand. Broussard, O’Neil, and others are working to create both manual and automated auditing procedures that companies and organizations can use to detect bias in algorithms. While this isn’t a panacea in and of itself, it is an important step forward into this new frontier of data science.
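To give a flavour of what one very simple audit check can look like, here is a minimal sketch of my own (not Broussard's or O'Neil's actual tooling; the group labels, data, and threshold are invented): it just compares a model's approval rates across groups and flags a large gap.

```python
# Minimal sketch of one kind of audit check: comparing approval rates across
# groups. Illustrative only; real audits, manual or automated, go far beyond
# a single ratio.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group_label, approved_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, "ratio:", round(ratio, 2))  # a common heuristic flags ratios below 0.8
```

Even a check this crude starts to chip away at the assumption that the algorithm has to stay a black box.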

I say this because it’s important for us to accept that we can’t put the genie back in the bottle. We are living in an algorithmic age. But much as with the fight against climate change, we cannot allow acceptance of reality to turn into doom and naysaying against any action. Broussard points out that we can still say no to certain deployments of technology that can be harmful. Facial recognition software is a great example of this, with many municipalities outlawing real-time facial recognition in city surveillance. There are actions we can take.

The overarching solution is thus one of thoughtfulness and harm reduction. Broussard directly challenges the Zuckerberg adage to “move fast and break things.” I suppose this means a good clickbait title for this review might be “Capitalists hate her”! But it’s true. The choice here isn’t between algorithms or no algorithms, AI or no AI. It’s between moving fast for the sake of convenience and profit or moving more slowly and thoughtfully for the sake of being more inclusive, equitable, and just.

I like to think of myself as “tech adjacent.” I don’t work in tech, but I code on an amateur level and keep a finger on the pulse of the tech sector. I think there is a tendency among people like me—tech-adjacent people invested in social justice—to write off the tech sector as a bunch of white dudebros who are out of touch. We see the Musks and Zuckerbergs at the top, and we see the Damores in the bottom and middle ranks ranting about women, and we roll our eyes and stereotype. When we do this, however, we forget that there are so many brilliant people like Broussard, Benjamin, Noble, O’Neil, Gebru, Buolamwini, and more—people of colour, women, people of marginalized genders, disabled people, etc., who care about and are part of the tech ecosystem and are actively working to make it better. They are out there, and they have solutions. We just need to listen.

Originally posted on Kara.Reviews, where you can easily browse all my reviews and subscribe to my newsletter.

Creative Commons BY-NC License
anarion
78 reviews, 11 followers
November 12, 2024
everybody EVERYBODY who has or will be dealing with AI in their line of work must 10000% read this book and I am not kidding
Steve
80 reviews, 1 follower
March 26, 2023
So when it was announced that I won this in the giveaway on here, I was intrigued but not super fazed, thinking I had no skin in the game on this topic. I manage a group of people who work on retreading tires, so tech is involved in my job but not to the extent that this book would elaborate on. I was pleasantly surprised to find I couldn't have been more wrong. I'll admit, a bunch of stuff went over my head, I'm not educated past a high school diploma, but the author wrote this in a very intelligent manner, but not arrogantly, and made it so the layperson could have a basic understanding of the point she was making in every chapter. So as I was reading, I found myself amazed that white supremacy and racism were present at the level of basic tech. I found myself wondering how, but we live in a world where it's so common that, sadly, my white, male perception sees it as normal. I'm sure I missed a great deal of what she was conveying, but what I took mostly from it is A: I have privilege and need to recognize it, and B: we as a world not only need to do better, but she pointed out solutions that make it possible. And she's not the only one looking at it; there are multiple organizations and other authors she gives credit to.
Rachel Hands
21 reviews, 1 follower
March 24, 2023
A concise, accessible, and sharp look at some of the ways machine learning replicates and amplifies harmful biases, and what we can do about it. Recommended for anyone who wants to learn more about the human impacts of AI tech.
Audrey
392 reviews
September 10, 2023
Meredith Broussard eloquently explains complicated ideas and is even adamant that these ideas aren’t actually difficult to grasp; rather, there’s a stigma that we shouldn’t know or care about the inner workings of tech (especially AI). This is also evident in the way that AI systems are a black box whose inner workings are concealed from the general public. Additionally, this means that many AI programs are rife with racist, sexist, and ableist equations. Our idea of “fair” is not the same as mathematical fairness, which does not take into account context or situational awareness. Broussard explains, with evidence and anecdotes, the bias behind AI (and the people who make it). We need to take a step back from our technochauvinist view and realize that we don’t need to rely on tech for every solution, and in fact, AI is causing more problems than providing solutions. Tech is not the magic answer. We need humans with digital literacy who are fighting to limit and control the power and influence of AI on people’s lives, especially those historically disadvantaged whose oppression continues under AI systems programmed on a harmful past.
Thom
1,805 reviews, 74 followers
November 30, 2023
Nabbed two books from the library on this topic; this turned out to be the better one for me. The author is a data journalist with a focus on AI ethics, which makes her perfectly placed for this subject. I enjoyed this book, a quick read when I had time to sit down with it.

Chapters focus on individual topics, with explanations that were sometimes a bit too much (binary, ASCII, and database coding for the chapter on binary gender options). The portions that focused on technochauvinism were awesome - I hadn't heard the term before, but it is perfectly apt. My other favorite portion was reading about accountability efforts, both during and after algorithmic testing.

The other book was audio, Ruha Benjamin's Race After Technology: Abolitionist Tools for the New Jim Code. I look forward to reading and comparing these books, but the reader of the audiobook (Mia Ellis) was too much after half an hour. Will have to be a physical or ebook version.
Renette Lee
12 reviews
February 15, 2025
In summary, More Than a Glitch explores the intersection of bias in data and technology, particularly how issues of race, gender, and ability are amplified by AI and ML. The central thesis of the book is that AI and ML are far from neutral or unbiased solutions; instead, they often reinforce existing societal biases and enable discriminatory practices. Broussard argues that these biases are not mere glitches in the system, but are systematically embedded into technological frameworks. She calls for a redesign of these systems to foster a more equitable world, emphasizing the urgent need for algorithmic accountability and a reconsideration of who holds responsibility for biased outcomes in tech.

One of the book’s key strengths is its accessible approach to what I think is a complex topic. I'm a newcomer to data ethics and governance, and the author effectively explains key terms such as "technochauvinism" and substantiates her arguments with well-researched examples, e.g. biased facial recognition software, predictive policing, flawed medical algorithms, etc. Broussard also offers a comprehensive look at the work of other influential thinkers in the field, such as Joy Buolamwini, providing broader context. Broussard's journalistic background helps her craft a compelling, engaging narrative, making technical topics easier for a general audience to digest. I have to add that this is a double-edged sword, as some chapters felt overly critical, were rather repetitive, and reiterated points from earlier chapters without adding new depth.

Overall, the book is engaging and informative, and provides a thorough introduction to the ways AI and ML replicate and amplify social biases. As a newcomer to the topic myself, I found the book insightful (although I cannot compare it extensively to other works in the field). Broussard’s accessible writing style makes this book an enjoyable read for anyone seeking to understand the ethical dilemmas posed by AI/ML in today’s world.
Gavin Volker
46 reviews
April 2, 2024
For me, books are either 5 stars because they cover one very specific topic really well, or many semi-specific topics really well. This book is definitely the latter, as it has so much varying content that all fits under the umbrella of machine bias. Meredith Broussard eloquently brings up examples relating to every chapter and flows naturally into supporting data to show that such cases are not flukes. Looking back, it was genuinely surprising to see that she had so many stories to bring up considering the breadth of topics. Broussard treats the reader as a layperson at the beginning of every chapter and then transitions into effortlessly explaining complex topics. This is best exemplified in chapter two, where she describes a simple linear relationship between two variables, and three pages later she has you understanding what a trivariate, nonmonotonic relationship looks like.
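For anyone curious, the simple linear starting point she describes looks roughly like this in code (a sketch of my own, not an excerpt from the book):

```python
# A sketch of a simple two-variable linear relationship, the kind of starting
# point the book builds up from before tackling messier, nonmonotonic ones.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0  # a perfectly linear relationship: y = 2x + 1

slope, intercept = np.polyfit(x, y, deg=1)  # fit a straight line to the points
print(f"estimated slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```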
Alexander Smith
257 reviews, 80 followers
May 21, 2023
I would first like to acknowledge this book was sent to me by Dr. Broussard's book publicist as I had reviewed her first book Artificial Unintelligence: How Computers Misunderstand the World. I am happy to report that I think this book significantly responds to a lot of the critiques I had previously.

In that review, I accused Broussard of overstating the faults of computers where the faults were really in the hands of developers and software engineers. Since the implication was right there in the title, this struck me as seriously misleading about what the book argues. Namely, it appeared that she was implying, inductively, that if some problematic developers exist, then computation as a whole contains the problems of those developers. This, of course, would be highly reductive. She makes clear throughout this book that her intent wasn't to conclude that all computational development is problematic because of a few short-sighted, cognitively dissonant developers. Rather, the problem is how much we systematically rely on overtly problematic interpretations of computation over human intuitions which can be verified, even with data.

Further, I suggested that in many cases she seemed rather unclear about what precisely was wrong in her cases, and how it was fundamentally technological. Specifically, she claimed that calling something a "bug" was dehumanizing of the ways in which that "bug" causes harm. My counterargument was that calling a sickness a "bug" does not dehumanize the person who is affected by it. In this book, I think she adequately addresses my critiques with clarity, without giving in, starting with reframing the "bug" (which is often a viral thing) as a "glitch" instead. Glitches are often taken to be one-time accidents, whereas "bugs" are often observably systematic kinds of computational errors. While minor, this is rhetorically more appropriate, I think. "Bugs" actually are a problem, which is why almost every open source project keeps track of them in documentation. Glitches, on the other hand, are one-time "oopsies." Broussard wants us to be clear that these are not one-time "oopsies" but systematic cultural issues in software.

In this book, she doubles down on her argument, but she does so with much greater nuance, which I appreciate. Here, she squarely articulates that the core takeaway of her last book was that "technochauvinism" is a state of mind of those who treat technology as the go-to solution for problems in society, but that technology itself nevertheless has the potential to solve serious issues. Core to this slight change in tone, however, is that she makes very clear that computation in fact does have limits in how it can interpret things. Computation, in this context, quite literally has binaries built in, leading to all sorts of difficulties that must be audited. She dedicates an entire chapter to explaining why automated auditing is necessary for the future of computational implementations.

This book is an impressive update from her last work. However, something else that is critical to this change is not just the nuance she gives to computers, but the fact that she documents case after case from a plethora of intersectional issues related to technology, and where and how they occur in relation to human decisions. Because Broussard is a Black woman writing from an intersectional perspective, some of these cases are less "academic" and more close and personal to her experience, and that in and of itself warrants giving some of these stories more credibility than, say, a peer-reviewed quantitative paper about the precision and recall of a particular machine learning algorithm.

On one hand, this is a particular perspective that should be heard. On another hand, many of these subjects have been discussed before, which Broussard recognizes. She makes a point throughout the book to give credit where credit is due, pointing to several critical communication books that influenced her arguments. Probably about half of this book has been said before. So if you are looking for particularly novel arguments about AI and injustice, this isn't the book. However, if you are looking for an intersectional voice on the subject and why intersectional voices provide a nuance that other authors seem to lack, this is the book to read. There is rhetorical value to hearing Broussard say these things over say, John Cheney-Lippold or Kate Crawford. The brutal pains and plain writing about them make Broussard's book more visceral and real than the highly academic rhetoric of others.

Another point, which some potential readers might be wondering about, I want to take a few sentences on. I went into this thinking, "More than a Glitch? But what about Glitch Feminism?" I assumed that to some degree this book was going to discuss the limitations of that strand of feminism (which some suggest should be done with more rigor). However, this book makes very clear it is not responding to glitch praxis. It's explicitly about the systematic marginalizations and harms of technology. While I think this book could have responded to glitch feminism, I'm glad it didn't. The "glitch" characterized here is not about technological exploitation, but rather a status quo of patterns of computational "glitches" which create systematic bias around race, sex, gender, identity, etc.

In conclusion, I would suggest this book to undergraduates over most other literature on the subject, including Artificial Unintelligence. I think there are a lot of problems with more recent critical AI studies, but I think this one gets to the point and offers genuine and fair responses to what computation does and does not do. It trusts its audience to understand deeply technological issues in plain language in a way that I think most critical AI literature does not.
Catalina Vieru
128 reviews, 3 followers
April 4, 2025
This book dives into how technology can be biased, especially against women, Black people, and those with disabilities. Tech isn't just neutral algorithms; it often reinforces existing inequalities.

From a feminist viewpoint, this book is a game-changer. Broussard breaks down how things like facial recognition, loan approval systems and medical tools can discriminate against marginalized groups. She uses an intersectional approach and doesn't just point out the problems; she calls for a complete overhaul of our tech systems. She believes that just making tech more inclusive isn't enough if the algorithms still treat certain groups unfairly.

In short, More Than a Glitch is a must-read for anyone interested in feminism and technology. Broussard's work is a strong example of the feminist commitment to challenging and changing oppressive systems from an intersectional perspective - especially important in the new era of AI.
hannah ⭐️
76 reviews, 2 followers
June 29, 2025
3.5 ⭐️

good entry book if you wanna get into the biases of AI and big tech, with lots of examples (mostly from the so-called US). if you’re already familiar with social justice issues and can spot and apply them to different areas, this might not seem as eye-opening.
would have loved more depth on technical terms.
buttt there’s lots of recommendations for further reading so maybe i’ll get more info there.
Simisola
6 reviews
February 26, 2024
I think that this book did a pretty good job at explaining the different biases that are rampant in computer technology. The anecdotal examples that she used were also quite powerful. However, I think that she could have used more relatable examples of basic issues that people deal with in computing, like general access in education.
Emma Engel
256 reviews, 5 followers
August 6, 2024
Feminist book club pick for August 2024! I think this book has a lot of interesting things to say but to be honest, most of it went over my head as I have very little knowledge in AI & tech in general. While what the author is talking about is obviously very important for the general public, I felt like this book was less for the general public & more for people in the tech world.
Deb
923 reviews
August 7, 2024
Really interesting and informative. A little more technical than I was expecting.
putri
70 reviews, 32 followers
December 5, 2024
thorough and clear-headed insights into the ethics of technology. i really love how the book went the extra mile to explain the painstaking process of understanding some of the technicalities behind computing. just a really great and information-dense book despite how relatively small it is
Olive Rose
12 reviews
September 23, 2025
Anyone who conducts research or works with AI should definitely read it. Anyone else should also read it.
Paz
64 reviews, 10 followers
April 21, 2024
This book doesn’t add any new take on an issue already intensively reviewed, and again, it is all about the US, so…
Cait Herdman
252 reviews, 6 followers
June 15, 2024
Technology scares the living daylights out of me. I don’t understand the advances in technology and I become quickly overwhelmed in conversations around it.

Meredith Broussard not only makes the concept of technology approachable, but also the concept of bias in the technological world - sexism, racism, ableism - you name it.

I, like many people in modern society, absolutely understand the ways in which policing and medical technologies unjustly discriminate against people of colour, however things as simple (and frequently encountered) as automatic soap dispensers never even crossed my mind before picking this book up.

This is an exceptional educational tool that delivers really complicated information in a digestible way. I cannot count the number of times I put this book down just to explain what mind-boggling information I was learning to whoever was close enough to listen.

Meredith is as well written as she is spoken.

HIGHLY recommended.
3 reviews
December 21, 2023
I recommend this book for anyone interested in the AI and ML field. The organization of the book's content is well done, providing a high-level overview of what Artificial Intelligence and Machine Learning are and then moving into how bias exists in these fields. Many examples were given, some of which were rather repetitive, though they open your eyes to the problems of computer science advancements and the difference between innovation and social progress. The repetition made it hard to stay focused in the latter parts of the book, but the majority of it was very interesting.
Brian Clegg
Author, 159 books, 3,159 followers
March 28, 2023
In some ways this is a less effective version of Cathy O'Neil's Weapons of Math Destruction with an overlay of identity politics.

Meredith Broussard usefully identifies the ways in which AI systems incorporate bias - sometimes directly in the systems, at other times in the unjustified ways that they are used. We see powerful examples, for example, of the hugely problematic crime prediction systems where it's entirely clear that these AI systems simply should not be used. A useful pointer is what a 'white collar' crime prediction system would do (and why it doesn't really exist). We get similar examples from education, ability issues, gender rights and medical applications.

What I'd hoped would set this apart from earlier books were the solutions, which come when Broussard brings in the concept of 'public interest technology' and outlines a 'potential reboot'. Again, there is some interesting material, though it can seem to be in conflict with other parts of the book. Earlier, Broussard argues powerfully that it's not enough to fix bias in AI systems, because the systems have no understanding of the circumstances - this will always need human input. But under public interest technology, we are told 'algorithmic auditing shows great promise for decreasing bias and fixing or preventing algorithmic harms'. Algorithmic auditing is doing exactly what was said earlier wasn't really possible - 'examining an algorithm for bias or unfairness, then evaluating and revising it to make it better.' In the end, this is the kind of problem where the devil is in the detail - and there is little evidence here of solutions that are aware of this, just as proved to be the case with the way that GDPR in the EU adds layers of bureaucracy without doing the job.

The book sometimes makes statements as fact that don't seem backed up. In part this is because it is so intensely US-focused; there's no attempt to look at different cultural settings. So, for example, early on Broussard makes the statement 'People also consistently overestimate how much of the world is made up of people like themselves.' Yet 'the world' is not the US. Data from the UK, for instance, shows consistently that white people significantly underestimate how much of the UK population is white.

The content sometimes sets things against each other that either don't ring true or don't really go together. For example, talking about a shoplifting incident involving the theft of watches worth $3,800, Broussard states 'It's not a good idea to prosecute shoplifting that is this low in dollar value.' Really? To a small business, losses like that can be catastrophic. We are then told that retailers are partly to blame for shoplifting by introducing self-checkouts to reduce costs. Really? But shops that sell $3,800 worth of watches rarely use self-checkouts - and for many customers, self-checkouts are very useful. Why should they be disadvantaged because it makes it a bit easier for criminals?

Without doubt an interesting book, but it doesn't add much that is useful to the discussion that hasn't already been said.
Margaret Schoen
397 reviews, 23 followers
January 17, 2023
This is a review of an ARC from Edelweiss.

3.5 stars - interesting and thought-provoking, but longer than it needs to be.

The learning software we use at my school allows you to search for students by last name, but requires that you type in at least three letters to start a search. But this doesn't work for all students - many languages and cultures have two-letter last names. Students with those names are effectively invisible to the email system. How could this come to be? A programmer designing the system was trying to set up an efficient way to search and ignored or simply missed the cultural difference that makes the three-letter minimum impractical and, in some ways, discriminatory.
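In code, the rule amounts to something like this (a hypothetical sketch; I have no idea how the vendor actually wrote it, and the names are invented):

```python
# Hypothetical sketch of the rule described above: a search that silently
# rejects queries shorter than three characters, which makes students with
# two-letter surnames (Ng, Vo, Xu, ...) unfindable.
STUDENTS = ["Ng", "Vo", "Smith", "Garcia"]

def search_buggy(query):
    if len(query) < 3:        # arbitrary "efficiency" guard
        return []             # two-letter surnames can never be matched
    return [s for s in STUDENTS if s.lower().startswith(query.lower())]

def search_fixed(query):
    query = query.strip()
    if not query:             # only reject genuinely empty queries
        return []
    return [s for s in STUDENTS if s.lower().startswith(query.lower())]

print(search_buggy("Ng"))   # [] -- the student is invisible
print(search_fixed("Ng"))   # ['Ng']
```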

Problems like these have become legendary in tech: automatic soap dispensers that don't respond to people with darker skin, or mortgage lending programs that discriminate against BIPOC applicants. A software algorithm that produces a racially biased result, whether intentional or not, may be viewed as a glitch in the system - a small problem that can be tracked down and fixed. Broussard argues here that the problem is larger - more than a glitch, in other words - and is intrinsic to what she calls technochauvinism, the belief that a computer-based solution is inherently better than a human-based one. As she states towards the end, "tech is racist and sexist and ableist because the world is so," and there is no level of bias that can be deemed acceptable in programming.

It's an interesting point, and one that could be spelled out in a good long essay. I'm not sure it needed this entire book, which gets repetitive in parts. She has many good examples of why her point is true, but each example didn't necessarily need its own chapter. And she occasionally gets sidetracked, as in the chapter about medical algorithms, where she gives an overly descriptive account of how she used software technology to replicate the AI analysis of her mammogram. Still, a thought-provoking and interesting read.
Jacqueline Nyathi
897 reviews
March 19, 2023
Broussard’s very smart and accessible book has given me new ways of thinking about technology, and new frames for understanding and expressing those thoughts. I was particularly interested in the chapters on disability and on medicine and AI, which are two areas that intersect with my professional training. Broussard covers a lot of ground in other areas too, including racism and other kinds of discrimination in tech, mathematical fairness vs. social fairness, technochauvinism (that computers will fix everything and create utopia), machine bias, AI ethics, cognition, statistics, the justice system, tech regulation, tech and algorithmic auditing, algorithmic justice and accountability, facial recognition and its application and misuse, and a great deal more. She outlines why she believes we should not rely so much on technology, and gives many excellent examples to show where, how and why tech has failed. In that same excellent chapter on medicine and AI, she shares how she used her own medical records to test diagnosis by AI, with intriguing results which have implications for us all.

I learnt a great deal from this book, and I’m so glad I decided to read it, in spite of initially being slightly intimidated by the subject. There are things that will probably be mostly of interest to those working in the fields Broussard covers; however, as tech, algorithms and AI are now part of our daily lives and will only become more so in the future, you will find this a very relevant and timely book.

Meredith Broussard is a data journalist and academic whose work focuses on AI investigative reporting and ethical AI. She is an associate professor at the Arthur L. Carter Journalism Institute of New York University and is research director at the NYU Alliance for Public Interest Technology.

Thank you to MIT Press and to NetGalley for this DRC.
Kelly
81 reviews, 4 followers
February 28, 2024
This book tackles a swath of important topics we should all be comfortable discussing at this turn of the AI decade.

This book explicitly assumes of its reader that we agree that racism, sexism, and ableism exist. Despite my alignment with this, the book at points tries to construct arguments as if to convince me that these things exist, rather than focusing solely on how these issues manifest in the technology we use. I would have liked to see more focus and depth on the topics at hand, as this book felt rather like a summary of our historical and current issues with technology perpetuating injustices. There were also some personal anecdotes that, while compelling, shifted the narration from a more research-y tone to a more memoir-y tone, which was a bit distracting.

I also thought that about the same amount of time was given to each chapter. Some topics, such as facial recognition and healthcare, feel more relevant than other topics and could have used longer chapters.

Overall, this was a timely read, especially the Databases chapter about how gender has been physically encoded as binary in many database systems. Recent news has suggested that the US might introduce a new category for Middle Eastern / North African descent in our forms, as we don't have one currently. This would involve database migrations for many government servers. It does feel that we expect data to fit the tools we've chosen instead of the other way around. I hope technologists get on board with the necessity of these changes, understand the importance of questioning the machine learning models erupting left and right, assume things don't work off the bat, and use auditing to build a case when they do.
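To make concrete what "encoded as binary" means in practice, here is a toy migration sketch of my own (the table and column names are invented, not from any real system or from the book): the old schema literally cannot store anything outside the two values it anticipated until the schema itself changes.

```python
import sqlite3

# Hypothetical sketch: moving a gender field stored as a 0/1 flag into
# free-text, nullable form. Table and column names are invented.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Legacy schema: gender squeezed into a single binary flag.
cur.execute("CREATE TABLE people_old (id INTEGER PRIMARY KEY, name TEXT, gender_flag INTEGER)")
cur.executemany("INSERT INTO people_old VALUES (?, ?, ?)",
                [(1, "Ana", 0), (2, "Ben", 1)])

# New schema: gender as self-described text.
cur.execute("CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT, gender TEXT)")
cur.execute("""
    INSERT INTO people (id, name, gender)
    SELECT id, name, CASE gender_flag WHEN 1 THEN 'male' ELSE 'female' END
    FROM people_old
""")
# Going forward, values outside the old binary can be stored.
cur.execute("INSERT INTO people VALUES (3, 'Sam', 'nonbinary')")
conn.commit()
print(cur.execute("SELECT name, gender FROM people").fetchall())
```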
Serenity
36 reviews
January 13, 2024
A wonderful addition to the tech justice canon.

The book starts with an introduction to machine learning and social issues, making it an accessible entry point to thinking about tech justice. However, the audience for this book encompasses both beginners and people who have a background in technology and AI ethics.

In the introduction, Broussard defines the concept of technochauvinism.
"Technochauvinism is a kind of bias that considers computational solutions to be superior to all other solutions."
For most of the middle of the book, she takes multiple deep dives into contemporary situations and examines the role that technochauvinism plays in creating and exacerbating the problem at hand. The book focuses on facial recognition, predictive policing, algorithmic grades, video remote interpreting, coding gender in databases, race "corrections" in medicine, and AI cancer detectors. The final section of the book explores some ways to combat technochauvinism, how to hold algorithms accountable, and when to put the algorithms aside and let humans make the decisions.

I really enjoyed the way that this book was structured. Every chapter focused on a different way that technology and algorithms interact with race, gender, and ability, through case studies of specific people's experiences. More Than A Glitch struck a perfect balance of being informative about the technical/algorithmic problems while understanding the real impacts those technologies/algorithms have on people. I didn't want to put the book down, as I was equally interested in learning about the topics and invested in the people and their outcomes.
Katie
844 reviews, 14 followers
October 19, 2024
Broussard’s areas of expertise (and bless her for pointing out that our expertise comes in areas, and that just because you know your shit in one area doesn't mean that you should be relied on in all areas, even ones that are nearby) are data journalism and public interest technology. From that background she unpacks how our societal biases are in the bedrock of the technology surrounding us, especially in algorithms that affect everything from whether you’re approved for a mortgage to police profiling, medical care, and educational technology.

I’m not rating this a five-star book only because there were certain sections where I felt Broussard covered the same ground multiple times, which made for a tougher read, even though I thought her authorial tone was an excellent blend of explaining and educating while also being warm and letting her humor in; I bet she’s a really good professor at NYU. But it left me with lots to think about, other areas I want to know more about, and it also helped me give good and constructive feedback on the project grants I was reviewing last month regarding ADA plans and compliance, specifically how few public history organizations are actively working towards the 2010 updates regarding universal design and digital presences.

Also, I hate educational surveillance technology now more than I did at the beginning of my reading and I really didn’t think that was possible.

full review: https://faintingviolet.wordpress.com/...
Emily
208 reviews
January 28, 2024
"More Than a Glitch" offers an insightful exploration into the intricate and often overlooked world of bias in artificial intelligence. Meredith Broussard, with her exceptional brilliance, dives deep into the concept of 'technochauvinism' - a term that may be new to many but represents a phenomenon palpably present in our tech-driven society. This book sheds light on the subtle yet significant ways in which AI, despite its neutrality claims, can and does perpetuate biases.

While the book's editing, at times, leaves something to be desired, Broussard's passion for the subject matter resonates throughout the text, making it a compelling read. Her arguments are not just theoretical; they are grounded in real-world implications, making the book not only an academic contribution but also a wake-up call to those involved in AI development and usage.

"More Than a Glitch" is more than an exposé; it's a call to action, urging readers and tech practitioners alike to recognize and address the inherent biases in AI systems. Broussard’s expertise shines through, making the book an invaluable resource for anyone looking to understand the intersection of technology and society. The book's powerful message and insightful content make it a must-read for those interested in the ethical dimensions of technology.
Ali
1,797 reviews, 157 followers
June 17, 2023
An accessible, general overview of the ways that technology entrenches bias, with a strong focus on the United States. This isn't looking at large language models or recent technology as much as well-entrenched technologies used in education, policing, and health. It is frankly depressing that much of this has been known for close to a decade now: that facial recognition is usually trained only on light-skinned faces; that using historical data to predict outcomes usually means leaning into racial stereotyping if past racism is not corrected for (and it never is, and possibly can't be). At times it felt like there was little new here if you have been reading on this topic, but the examples were very recent, even when the technology is not.
This is the first book I've read which also very much focused on accessibility, and the ways in which technology makes many things worse even as others are better. This was a welcome focus, and I was a little shocked by how bad many implementations are.
Alexandra
1,043 reviews, 41 followers
February 5, 2025
It's quite easy to forget that computed values are based on assumptions made by people. This book will help you remember.

“Technochauvinism is a kind of bias that considers computational solutions to be superior to all other solutions. Embedded in this bias is an a priori assumption that computers are better than humans. Which is actually a claim that the people who make and program computers are better than other humans.”

“Overfunctioning is my primary coping mechanism.”

“Every system is going to be wrong. The researcher has to decide how wrong and what kind of wrong is preferred.”

“Currently there are about 21 different mathematical definitions of fairness. Interestingly, these definitions are mutually exclusive. It is mathematically unlikely that any solution could satisfy one kind of fairness and also satisfy a second criteria for fairness. So in order to consider an algorithm fair a choice will have to be made as to which kind of fairness is the standard for this kind of algorithm.”
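As a toy sketch of how two of those definitions can pull apart (the data and the choice of criteria here are my own invention, not from the book):

```python
# Toy illustration: demographic parity (equal selection rates) versus equal
# true-positive rates, computed on a tiny invented dataset to show that
# satisfying one criterion does not imply satisfying the other.
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
actual    = [1,   1,   0,   0,   1,   0,   0,   0]   # ground truth
predicted = [1,   0,   1,   0,   1,   1,   0,   0]   # model decisions

def selection_rate(group):
    picks = [p for g, p in zip(groups, predicted) if g == group]
    return sum(picks) / len(picks)

def true_positive_rate(group):
    hits = [p for g, a, p in zip(groups, actual, predicted) if g == group and a == 1]
    return sum(hits) / len(hits)

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g),
          "true positive rate:", true_positive_rate(g))
# Both groups are selected at the same rate (parity holds), but their
# true-positive rates differ, so the second criterion fails.
```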