Why robots defy our existing moral and legal categories and how to revolutionize the way we think about them.
Robots are a curious sort of thing. On the one hand, they are technological artifacts—and thus, things. On the other hand, they seem to have social presence, because they talk and interact with us, and simulate the capabilities commonly associated with personhood. In Person, Thing, Robot, David J. Gunkel sets out to answer the vexing question: What exactly is a robot? Rather than try to fit robots into the existing categories by way of arguing for either their reification or personification, however, Gunkel argues for a revolutionary reformulation of the entire system, developing a new moral and legal ontology for the twenty-first century and beyond.
In this book, Gunkel investigates how and why efforts to use existing categories to classify robots fail, argues that “robot” designates an irreducible anomaly in the existing ontology, and formulates an alternative that restructures the ontological order in both moral philosophy and law. Person, Thing, Robot not only addresses the issues that are relevant to students, teachers, and researchers working in the fields of moral philosophy, philosophy of technology, science and technology studies (STS), and AI/robot law and policy, but it also speaks to controversies that are important to AI researchers, robotics engineers, and computer scientists concerned with the social consequences of their work.
David J. Gunkel is Distinguished Teaching Professor in the Department of Communication at Northern Illinois University. He is the author of The Machine Question, Of Remixology, Robot Rights (all published by the MIT Press), and other books.
A quick summary of my take, after reading a couple of sections:
1. This raises some interesting ideas.
2. We have very obviously not existed in a purely binary "person"-"thing" ontology (e.g., animal rights now, the rights of slaves enumerated in the Torah, both of which make it obvious that there are some "things" that you can "own" but not do whatever you want with).
3. Damn, this guy is hella long-winded and repetitive.
4. I think this guy wrote something really revelatory when he published The Machine Question a decade ago, but this doesn't seem substantially different, which makes it seem like he's just trying to profit off the AI hype: https://en.wikipedia.org/wiki/The_Mac...
5. Maybe he's getting long in the tooth and has nothing new to say; it happens often with public intellectuals: https://scholars-stage.org/public-int...
I might revise this review later after more reflection, but I found this book somewhat disappointing.
Gunkel has a tendency to repeat himself while making his points, and this became frustrating as he cycled through the same arguments a few times. For a book asserting itself as a discussion of robots, there was very little discussion of the actual status of actual, existing robots, AI, etc., and very much discussion about discussing them. He cited A LOT of other material and philosophers, but also sort of dismissed most of their arguments. Some of these were ruled out by others who took those trains of thought further, some were countered excellently by other authors, and some were simply disregarded by this author. It's kind of an intensive literature review where the author tries to walk the audience through everything anyone has ever said about robot rights (or personhood, or thinghood, or some third category that can probably/maybe be equated with slave rights). He bangs on about the privilege of people to order the world and decide what matters in it based on what matters to them, and in comparison to previous definitions of "people" (or at least persons that "count"), this starts to sound like a valid and interesting perspective from which to evaluate the order of things...and yet, one can't help but wonder: if that's the route we're taking, why aren't we asking robots about robot rights? Why aren't we asking them how they classify things?
Very likely this is because (as Gunkel points out...repeatedly) much of what would make a "robot" (again, AI, etc.) capable of giving us feedback on these issues (their own genuine feedback and not programming that parrots someone else's thoughts) remains in the future, remains speculative, remains a great big "if." Some consider these developments inevitable, and they may well be, but that doesn't help us get non-human perspectives in the here and now. And if they're not inevitable? Should we be worrying about giving robots rights?
This book (novella? Very long essay?) provides some great food for thought, asks some great questions, and has some great insights about critical reflection. But it does tend to repeat itself a bit and I think it could have been substantially shorter and still gotten all the main points across.
Edit: The more I reflected on this book, the more frustrated I became with it. Gunkel makes a lot of circular arguments without reaching any substantial conclusions (even his own offering of a "third" option is not distinctive from the earlier ones he wrote off as different versions of slave rights...it's just better because it's his, I guess...). What this book did do for me is get me back into reading Asimov, whose sci-fi ideas about robots far outstrip Gunkel's avoidance of talking about actual robots while talking about robots (and Gunkel cites so much fiction that I questioned whether his fictional examples outnumbered the factual ones anyway...).