The promise of artificial intelligence is automated decision-making at scale, but that means it also automates risk at scale. Are you prepared for that risk?
Already, many companies have suffered real damage when their algorithms led to discriminatory, privacy-invading, and even deadly outcomes. Self-driving cars have hit pedestrians; HR algorithms have precluded women from job searches; mortgage systems have denied loans to qualified minorities. And often the companies who deployed the AI couldn't explain why the black box made the decision it did.
In this environment, AI ethics isn't merely an academic curiosity; it's a business necessity. In Ethical Machines, Reid Blackman gives you all you need to understand AI ethics as a risk management challenge, then to build, procure, and deploy AI in an ethically (and thus reputationally, regulatorily, and legally) safe way, and to do it at scale.
And don't worry, we're here to get work done, not to ponder deep and existential questions about ethics and technology. Blackman's clear and accessible writing helps make a complex and often misunderstood concept like ethics easy to grasp. You will understand ethical concepts while barely knowing you are taking them on. More importantly, Blackman makes ethics actionable. He tackles the big three ethical risks with AI—bias, explainability, and privacy—and tells you what to do (and what not to do) to mitigate ethical risks.
With practical approaches to everything from how to write a strong statement of AI ethics principles to how to create teams that effectively evaluate ethical risks, Ethical Machines is the one guide you need to ensure you're using utterly unbiased, totally transparent, and remarkably respectful artificial intelligence.
Ethical Machines is an at times tedious, but nonetheless rewarding read. Author Reid Blackman does his level best to make the topic upbeat and readable without bloating the text with unnecessary fluff. But the subject is, at base, difficult. And although Reid does a commendable job, he's only mortal. And (as previously mentioned) the book is a tad laborious.
Not that that’s a bad thing.
Just sayin’
Given all that.
This is a profoundly useful book.
With enough detail and depth to make the work totally worth it.
Reid begins the book by dispelling the notion that ethics are subjective. If that sounds easy/fun, give it a try. Reid makes it look easy, and (almost) fun. But in all honesty, it's neither.
Reid differentiates between ethical beliefs and ethical truths.
Reid posits that ethical beliefs are subjective.
But ethical truths are not.
Furthermore, Reid asserts that ethical truths are (a priori) discoverable. And although I cannot actually summarize his argument here (or at all), I was convinced. And before you go spouting off in the comment section, just read the book, please.
Reid then proceeds to ethical structure versus ethical content.
In brief:
Ethical structure pertains to the framework or system that guides ethical thinking and decision-making. It encompasses the principles, rules, and frameworks used to evaluate the morality or ethics of actions, behaviors, or choices. Examples include utilitarianism, deontology, virtue ethics, and ethical relativism.
Ethical content, on the other hand, refers to the specific moral values, principles, or judgments applied within a particular ethical framework.
Ethical content can vary based on cultural norms, personal beliefs, and the context of a specific ethical dilemma. It involves the concrete ethical judgments made within an ethical structure.
In sum, ethical structure provides the framework or methodology for making ethical decisions, while ethical content deals with the specific moral values and judgments applied within that framework.
Both aspects are essential in ethical reasoning and play a role in shaping our understanding of what is right or wrong in various situations.
Thank ChatGPT for most of that ☝️
Anyway.
Reid posits that the main areas (the big areas) of interest in AI and machine learning (ML) ethics are bias, transparency, and privacy.
Biased AI/ML systems (already in wide use) have been found to recommend that black people receive longer jail sentences than white people with equivalent records, and get rejected for bank loans more frequently than equivalently qualified white people. And there’s more. So much more.
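To make the loan-rejection example concrete: a first-pass bias audit often computes a "disparate impact" ratio (approval rate for the disadvantaged group divided by the rate for the advantaged group). This sketch is not from the book; the data is invented, and the 0.8 threshold is just the informal "four-fifths rule" borrowed from US employment-discrimination practice.

```python
# Toy bias audit: compare loan-approval rates between two groups.
# Data and threshold are illustrative assumptions, not from the book.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are booleans)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group A's approval rate to group B's.
    Values below ~0.8 are commonly flagged for review
    (the informal 'four-fifths rule')."""
    return approval_rate(group_a) / approval_rate(group_b)

# Invented decisions: True = loan approved
group_a = [True, False, False, True, False]   # 40% approved
group_b = [True, True, False, True, True]     # 80% approved

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50
print("flag for review" if ratio < 0.8 else "within threshold")
```

A ratio this far below 0.8 wouldn't prove discrimination on its own, but it is the kind of cheap, automatable check that turns "our model might be biased" into a number someone has to explain.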
Black-box (nontransparent) AI/ML does all of the above and more, without affording users (including the victims of the aforementioned bias) any explanation of WHY it made the recommendations it made. However biased, racist, sexist, or whatever.
And privacy-violating AI/ML collects data on you, me, and just about everyone else, to use for whatever, regardless of harm, without explanation.
Sound good?
Now let’s consider the fact that AI/ML are designed to do BIG THINGS, i.e., to scale up. And now all of the aforementioned AWFUL is amplified.
By a lot.
Statistical power amplifies small effects by comparing large data sets (in this case, ENORMOUS data sets).
Processing power amplifies small effects by running huge numbers of iterations (in this case, fuckin’ TERAFLOPS’ worth of iterations).
And just like that.
The SUPER banality of SUPER evil just got born.
Reid briefly cites Facebook’s famous and all-but-forgotten Cambridge Analytica scandal.
For those of you who need a refresher:
Cambridge Analytica was a political consulting firm that improperly harvested data from millions of Facebook users without their consent.
This data was then used for political advertising and targeting during the 2016 US presidential election and the Brexit referendum in the UK.
Now.
Sit back and ponder (if you will) just how fucking bad the 2016 US presidential election (TRUMP) and Brexit (???) turned out.
And while you can’t blame that ENTIRELY on Cambridge Analytica and Facebook, we can say that fucking around and finding out where that type of thing goes would (probably) not be fun.
Speaking of no fun.
This is about as much as I would care to go on about this (very, very good and useful, but kind of laborious) book. Definitely read this if you’re even A LITTLE interested in this topic.
Blackman is involved with one of the biggest tracking machines the world has ever seen. Bigger than the Stasi, it deals with every aspect of the careers of a large portion of the world's population. Apparently, now that he is rich, he has bought god-like characteristics: he can be totally unbiased, not just plain unbiased. And, as he is a true gentleman, he won't do the work himself; he will hire people to do the programming for him. And, as you have probably guessed, he will hire equally totally unbiased people. And while the reader believes the crap, the algorithm is tuning, day by day, what you see to fit the narrative pushed by "Totally Unbiased" people: people from Microsoft, Google, Facebook, Twitter, and the list keeps growing.
As a big advocate of humane technology, I found this book particularly useful thanks to the balance between theory ("Content": ethical implications of bias, explainability, and privacy) and practice ("Structure": implementing an AI ethical risk mitigation program in business). Specifically, I like the proposed way of crafting an organization’s AI ethics statement, which can be accomplished by (1) thinking about the worst-case scenarios that you should never tolerate ("ethical nightmares"), (2) connecting why these cases matter to your mission/vision/purpose, and (3) articulating how to achieve the goals and avoid the nightmares. Meanwhile, the need for ethicists in an organization to oversee the company’s decision-making process is a very powerful suggestion the author repeatedly emphasizes throughout the book. In fact, some entrepreneurs I know recently mentioned that "we need a philosopher in the company in addition to developers"; I believe the role of these professionals (ethicists, philosophers, or lawyers) will become more and more important in the coming years. I wish I could see more references to other materials to solidify the author’s messages, though; I felt like I was listening to a presentation for executives, which is clear and actionable for shareholders but lacking in evidence and not necessarily representative of stakeholders’ interests.
This is a wonderful book about AI Ethics. The author of the book has been discussing the philosophy of Ethics for more than 20 years at the writing of the book and it shows.
The book is a short treatise on how executives can add AI ethics to their corporate goals. There are many points to consider, and the dos and don’ts that adorn the book make it useful for the busy business person.
At the same time, the book is missing an important aspect: it does not sufficiently explore the motives that lead companies to ignore ethics, knowingly or unknowingly. For example:
- What would be the likely impact of adopting these ethical standards on profit margins and market size?
- Why should a company do anything that is not mandated by government or noticed by the public?
- Is it a level playing field if other players can win bigger profits and bigger market share by ignoring these concerns?
After all, if ethics were such a big benefit, Facebook and Google would have followed them.
What we need is a treatment of AI and ethics grounded in the reality of what can be achieved, rather than a utopian ideal.
The book’s title is misleading (and that CAN be a good thing). Don’t expect the book to present technical details on how to assess ethical issues related to algorithms, AI, or robotics. Instead, the book focuses on questions closer to epistemology and on practical applications of ethics, not only in products but ESPECIALLY in organizations. The writing reads like a consulting (coaching) session the author gives to a stakeholder who wants to implement an ethics program in their company. So if you are going to read the book expecting technical depth, forget it. Few examples are given, too. Still, it is a book worth reading for its subject matter and for surfacing the need to go beyond models and algorithms when discussing algorithmic racism or ethical AI.
Nothing striking or exciting. The first chapter, establishing that ethics should be considered concrete rather than flexible, was intriguing, but I was not fully sold on the viewpoint. I do understand why ethics should at least be treated as concrete for the purposes of this book, but the book by itself was not sufficient for me to resolve that question fully (it is not the book’s main point). I might look for a dedicated ethics book for that endeavor. Other than that, the following chapters were not personally new or mind-blowing. A potentially helpful read for those not working or researching closely in the field; if so, skimming the headings may be sufficient.
This book is written for a particular audience, of which I am not a member. The author consults for executives looking to implement AI or to create an ethical AI program within their organization. The book covers what would be the material in most empirical research ethics courses today. And there are a series of mental leaps rather than step-by-step logic, to get you where you need to go as fast as possible. Definitely a business book, and I may recommend it in the right environment, but it is deeply unsatisfying for a social scientist and AI practitioner.
I recently read a lot about ethics and AI. There's a lot to discuss, as it's a complex and highly conceptual field. This book helps to clarify some points, and it's easy to follow and full of food for thought. Recommended. Many thanks to the publisher and NetGalley for this ARC; all opinions are mine.
Speaks about different aspects of ethics in AI and how we can weave it into the daily operations of a business. I like how methodically the content has been laid out, starting from common misunderstandings about ethics. A must-read for senior executives and engineers who are venturing into the AI domain.
More for those who are starting a business, or are in a business, that does AI making decisions that closely affect human lives. It was nonetheless interesting and gives some context for people in the field.
Dry, but generally comprehensive for what it is. Could have gone into more detail on the statistical tools on offer, but he's more of an ethicist by trade, so that's excusable.
A helpful guide on building responsible AI from an ethicist’s/ philosopher’s perspective that is equally useful to technical and non-technical audiences.
This is an interesting book on the ethical implications of AI. Of course this question is of the utmost importance to society at this moment, when AI is the solution most often considered for augmenting jobs. Whether you know about them or not, AI implementations are already in the loop in everyday experiences. Blackman makes a couple of important points. First, “AI ethics can be split into two domains: AI for Good and AI for Not Bad. The former is an attempt to create positive social impact. The latter is about ethical risk mitigation.” His stipulation here is that, among those looking to implement AI, most are trying to solve a problem. The ethical implications must be considered in addition to the goal of the implementation. Second, “There must be an individual or, better, a set of individuals with relevant ethical, legal, and business expertise, who determine which, if any, of the mathematical metrics of fairness are appropriate for the particular use case.” Having a fully implemented governance strategy is crucial to ethical deployments going forward. No doubt, there will be much written on this topic in the near future. Blackman's effort is a good early step.
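Blackman's point that experts must choose among fairness metrics has teeth because the metrics can disagree on the same model. The sketch below is not from the book; the data is invented to show a case where two common metrics (demographic parity and equal opportunity) give opposite verdicts.

```python
# Toy illustration: two fairness metrics can disagree on the same
# decisions, which is why choosing a metric is a judgment call.
# All data below is invented for illustration.

def rate(xs):
    """Fraction of True values in a list of booleans."""
    return sum(xs) / len(xs)

def demographic_parity_gap(pred_a, pred_b):
    """Difference in positive-decision rates between two groups."""
    return abs(rate(pred_a) - rate(pred_b))

def true_positive_rate(pred, y):
    """Of the truly deserving cases (y True), how many were approved."""
    approved = [p for p, t in zip(pred, y) if t]
    return sum(approved) / len(approved)

def equal_opportunity_gap(pred_a, y_a, pred_b, y_b):
    """Difference in true positive rates between two groups."""
    return abs(true_positive_rate(pred_a, y_a) - true_positive_rate(pred_b, y_b))

# Invented decisions (pred) and true outcomes (y) for two groups:
pred_a, y_a = [True, True, False, False], [True, False, True, False]
pred_b, y_b = [True, True, False, False], [True, True, False, False]

print(demographic_parity_gap(pred_a, pred_b))            # prints 0.0: "fair"
print(equal_opportunity_gap(pred_a, y_a, pred_b, y_b))   # prints 0.5: "unfair"
```

Both groups are approved at the same rate, so demographic parity is satisfied; but deserving applicants in group A are approved only half as often as deserving applicants in group B, so equal opportunity is badly violated. No algorithm resolves that tension; someone with ethical, legal, and business context has to pick, which is exactly Blackman's point.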