A provocative attempt to think about what was previously considered unthinkable: a serious philosophical case for the rights of robots.

We are in the midst of a robot invasion, as devices of different configurations and capabilities slowly but surely come to take up increasingly important positions in everyday social reality—self-driving vehicles, recommendation algorithms, machine learning decision-making systems, and social robots of various forms and functions. Although considerable attention has already been devoted to the subject of robots and responsibility, the question concerning the social status of these artifacts has been largely overlooked. In this book, David Gunkel offers a provocative attempt to think about what has been previously regarded as unthinkable: whether and to what extent robots and other technological artifacts of our own making can and should have any claim to moral and legal standing.
In his analysis, Gunkel invokes the philosophical distinction (developed by David Hume) between “is” and “ought” in order to evaluate and analyze the different arguments regarding the question of robot rights. In the course of his examination, Gunkel finds that none of the existing positions or proposals hold up under scrutiny. In response to this, he then offers an innovative alternative proposal that effectively flips the script on the is/ought problem by introducing another, altogether different way to conceptualize the social situation of robots and the opportunities and challenges they present to existing moral and legal systems.
David J. Gunkel is Distinguished Teaching Professor in the Department of Communication at Northern Illinois University. He is the author of The Machine Question, Of Remixology, Robot Rights (all published by the MIT Press), and other books.
David Gunkel's Robot Rights may have come across as provocative or fanciful when it was published in 2018, but in the age of ChatGPT it suddenly looks like nothing more than enlightened common sense. Thank goodness those philosophers were doing their job and not just goofing off speculating about the nature of being or something. Having a decent road-map for this topic may end up being of incalculable importance.
Although the organisation of the book at first seems almost mechanically logical, it introduces a remarkable number of unexpected twists as it plays out. Following Hume, the author starts by reminding us of the well-known difficulties associated with deriving an "ought" from an "is", and divides the central question into two parts: S1, "Can robots have rights?" and S2, "Should robots have rights?" Rather unexpectedly, at least to me, it turns out that all four possible combinations of answers make sense and are worth discussing. So after the introduction, we get one chapter on each of these, starting with the obvious combinations, !S1 → !S2 ("Robots can't meaningfully have rights, so the question of whether they should have them is moot"), and S1 → S2 ("Robots can have rights, so they should have them"). There is considerable discussion of what would be required for it to make sense for robots to have rights. Many people feel that if AIs develop the right qualities, they will be sufficiently human-like that the idea is no longer unreasonable.
But what are those qualities? It's amazing to see how quickly things have progressed in just five years. Several times, we get lists which include items like consciousness, sentience and rationality, placing them all roughly on the same level, and not long ago it didn't seem unreasonable to say that machines would only acquire them in the distant future, if at all. Now, when we are reminded of the many philosophers who like to describe mankind as the animal which has λόγος ("logos"), that interesting Greek term which can mean word, language or rationality, we wonder if we need to be more careful, since apparently ChatGPT is a non-human agent that has λόγος too. We can back off to "consciousness"; Chat is always quick to reassure you that it's just a machine with no consciousness, emotions or mental state. However, the book reminds us that consciousness is notoriously slippery to define, and some philosophers have gone as far as to wonder if it isn't just the secular version of the soul. Even diffident Chat, when suitably provoked, can write ironic essays exploring the question of whether the notion of "consciousness" has any real meaning. The book contextualises all these things you've recently noticed and helps you relate them back to the question of how they might justify giving AIs rights.
In the next chapter, we move on to a suggestion that I'm sure will be much discussed over the next few years: S1 → !S2 ("Robots can have rights, but they shouldn't have them"). There are people who for some time now have taken this position and argued that, even if a robot has the qualities needed for it to be meaningfully capable of having rights, we should be sensible and not give them any. As one advocate for this viewpoint has succinctly put it, robots should be slaves. Unfortunately, once again we find it's not so simple. The frightful historical record of what slavery is actually like should make you reluctant to associate yourself with slave-owners. Hegel, from a philosophical standpoint, famously offered arguments about the moral harm it does people to be the masters of slaves; and indeed, the book cites former slaves who go into graphic detail about just what those harms are. We want to think that "it would be different with robots". But it turns out that's a surprisingly hard viewpoint to defend once you start looking at the details.
The fourth combination is one that at first sight appears self-contradictory: !S1 → S2 ("Robots can't meaningfully have rights, but they should have rights anyway"). In fact, it's not as ridiculous as it sounds and follows on logically from the arguments about slavery. In many ways, it may not matter whether the robot really has human-like qualities; as long as people emotionally relate to them as having human-like qualities, being allowed to abuse robots may harm the abusers and society at large. There is considerable discussion of robot sex dolls, which are turning up more and more frequently in the news. Many people feel instinctively queasy about the idea of playing out rape games with a realistic robot doll: even if the doll feels nothing, you wonder about the effect it's having on the rapist.
The final chapter is the most surprising one. Rather than compare the different viewpoints above, we back off further and consider the possibility that all of them are wrong; this part builds on the work of the philosopher Levinas, previously unknown to me. Adapting Levinas's arguments, the author argues that the whole notion of "giving robots rights" may contain serious problems. When we talk about "giving rights" to beings who are sufficiently like us, we implicitly assume that that is morally appropriate. But in fact, what entitles us to be the arbiters here, and why is "being like us" the essential criterion? The AIs may be different, but different doesn't necessarily mean worse: maybe we should approach them as they are, without preconceptions. As a chess player who has for many years been constantly reminded that chess AIs are far more insightful about the game than I am, I found that this part also resonated.
The book references a lot of philosophers (Plato, Hume, Kant, Hegel, Heidegger, Derrida, Dennett and Singer all make frequent appearances), and it's responsible to warn people who are allergic to the philosophical vocabulary that they may dislike it for that reason. But even if you feel that way, consider making an exception: it's well-written, and the philosophy is rarely introduced without some explanation of the background. If you already like philosophy, go out and get a copy now. You'll be proud to see your subject openly engaging with some of the key issues of the early twenty-first century.
The topic of robot rights is one that I only recently got into. After years of just not being interested in books about robots, AI, and tech, I’m now fascinated. Although I didn’t think I’d like books like this, I really enjoy books about philosophy, ethics, and morality, and David Gunkel did an incredible job bringing all of these topics to the table in this book. This book takes an insanely deep look at the question of “Should robots have rights?”, and while that seems like a simple question, each chapter is very deep and brings up various philosophical theories, outlooks, and more questions for you to sit with. Whether you’re into robots and the future of tech or just philosophy and ethics, you’ll really enjoy this book.
Those who have grown up with science fiction can recite Isaac Asimov's "Three Laws of Robotics": (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey orders given it by human beings except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Today, as AI, robotics, and autonomous vehicles and systems take jobs away, we must reconsider how we organise our society. Gunkel takes an academic view of this topic and delivers a clear understanding of what is to be considered. We cannot simply ignore this issue. Robot Rights is a serious and well-engineered thesis on the subject. For those with an interest, it is a masterpiece.
Interesting book. It makes us reflect not only on the question of whether robots should have rights but also on our own rights as humans. Many of the arguments in this book rest on the premise that what distinguishes robots from humans is that humans have free will. If you believe that humans don't actually have free will (in other words, if you follow Sam Harris's books Free Will or Waking Up), you will find yourself pushing back against many of the arguments made here.
I’d describe this book as a compilation of texts on the subject more than a debate on the initial question. You’ll inevitably build your own answer to that question as you read.
The most impressive and memorable book I have read in recent years, clear, progressive, well-argued, well-researched, comprehensive, and systematic. This book does not really deal with the subject of the title in the way you might expect; in fact, it blows the question apart rather than answering it. You come away with a few answers about ethics and more questions than before. It changed my perspective and opened me up to philosophical ideas I didn't even know existed. My view of ethics has been permanently transformed after reading this book. I recommend it to everyone.