Technology permeates nearly every aspect of our daily lives. Cars enable us to travel long distances, mobile phones help us to communicate, and medical devices make it possible to detect and cure diseases. But these aids to existence are not simply neutral; they give shape to what we do and how we experience the world. And because technology plays such an active role in shaping our daily actions and decisions, it is crucial, Peter-Paul Verbeek argues, that we consider the moral dimension of technology. Moralizing Technology offers exactly that: an in-depth study of the ethical dilemmas and moral issues surrounding the interaction of humans and technology. Drawing from Heidegger and Foucault, as well as from philosophers of technology such as Don Ihde and Bruno Latour, Peter-Paul Verbeek locates morality not just in the human users of technology but in the interaction between us and our machines. Verbeek cites concrete examples, including some from his own life, and compellingly argues for the morality of things. Rich and multifaceted, and sure to be controversial, Moralizing Technology will force us all to consider the virtue of new inventions and to rethink the rightness of the products we use every day.
A very annoying book that keeps repeating itself. But it did discuss some interesting ideas on moralizing artificial intelligence in the later chapters.
This book argues that technologies are an intertwined part of our moral reasoning, and thus of our moral choices. Verbeek argues that we need a new view of the ethics of technology that takes this intertwining into account. He draws on Latour to articulate such a view and describes some ideas for designing moralizing technologies.
I found the book thought-provoking in that it raises a new perspective on the ethics of technology. Verbeek's work on mediation (and this concept in general) is important and interesting. However, beyond helping me think through my own position, this book did not work for me at all.
The first problem is that most of the time the book simply asserts what ought to be argued. For example: "If the ethics of technology is to take seriously the mediating role of technology in society and in people's everyday lives, it must move beyond the modernist subject-object dichotomy that forms its metaphysical roots". Why? The answer presumably lies in Latour, but the book never makes it clear.
Second, the ethical discussion of ultrasound is unconvincing. Claims such as "ultrasound is far from noninvasive in a moral sense" or "ultrasound isolates the unborn from her or his mother" are not discussed in depth and raise many questions for me. This case is used throughout the book, but never fully explained or argued so that a skeptic could follow the reasoning behind the claims.
The author argues that technology has challenged our perceptions of ethics and has begun to dictate our actions and beliefs, such that the moral separation between human and nonhuman objects is no longer a helpful artifact of Enlightenment thinking.
Technology, no doubt, has made ethical questions more pressing in modern life. The author asks us to consider how an ultrasound ethically impacts humanity. We can see a baby long before it is separated from the mother, and assess its value and identity. Previously, a fetus could not be screened for disease; now that this is possible, the ethical question is on the table in abortion debates. Thus technology, and objects in general, have impacted human ethics in such a way that humanism as it stands may have limitations that need to be addressed heading into the future.
The author addresses Sloterdijk's _Rules for the Human Zoo_, a highly controversial take on humanism. Writing in 1999, Sloterdijk was able to witness the burgeoning technology sector and propose an ethics for the future. Sloterdijk and the author argue that morality should be expanded beyond humanity alone and applied to nonhuman things, including animals and objects. The reason is that humanism and morals are ultimately rooted in linguistic conventions and can thus be expanded. In this way, the moral dilemmas surrounding technologies, some of which are as yet inconceivable, can be addressed in the future.
The author raises some interesting questions, and I appreciated the moral dilemmas presented. Ultimately, I feel as though an ethical framework surrounding objects is problematic because the moral framework would be coming from a human either way. It is already a matter of debate whether morals are innate or constructed (spoiler: I don't think morals can be logically defended as innate, given different cultures' perceptions of morality. Morality cannot be objective unless we ground those morals in some higher power above humanity.) This idea is therefore limited by its assumption that humanism brings good, _objective_ ethical ideas to the table.
reading notes:
* definition of the intentionality of technology in chapter 3 is questionable on pages 56-57: (1) the implication of intentionality is circularly defined in the interpretation of the phenomenological definition of intentionality; (2) the defense of the position against ethical theory introduces an external etymological definition of intentionality; (3) the argument against Searle's "derived intentionality" relies on totally isolating the intentionality from human agents, without considering the possibility that the users of the technology indirectly steer the outcome of its morally qualifiable action [in ways not intended by the designers].
* consider technology as object, and consider the intentionality of designers and the intentionality of users: both co-shape the technology itself, but also the way technology "mediates" the intentionality of human agents. People using mobile phones to schedule appointments more spontaneously didn't take on a life of its own; it arose through a shared understanding of the technology, coming foremost from the users, i.e. from human agents' intentions.
Only for geeks and wannabe philosophers, but a well-written and thought-provoking exploration of the moral status of technology in the modern world, seeking to answer the question: if technology structures our lives, what happens to moral responsibility? Verbeek defends well the position that technology reshapes our moral choices but widens rather than reduces them, with new responsibilities arising for both designers and users of technology.
Verbeek focuses on technologies that have many users, and thus his treatment of power owes more to philosophers who see it as a distributed and emergent force rather than a concentrated and political one. His conclusion that we should engage and experiment with technology therefore reads more easily for smartphones, medical imaging, and household robots than it does for chemical weapons, climate engineering, or perhaps even artificial intelligence. And in my opinion this is where he makes a rare misstep: in rebutting philosophical views that he portrays as rejecting technology wholesale as dehumanizing, he leaves no room for discriminating approaches that reject certain forms of technology as unacceptably risky in moral or political terms.
Nonetheless, a stimulating and enjoyable read from my perspective.
This is perfect supplementary material for a Computing for Society course. From the standpoint of an IT software developer, business systems analyst, or IT professional in general, we have to start thinking not only about translating business requirements into software requirements; we also need to take into account the implications of the systems and software we design and implement. Moralizing Technology, through Verbeek's mediation theory, allows us to think through possibilities (good or bad) and situations that can inform better software design.
An open question for future authors, researchers, and practitioners who want to pick up this mediation theory and develop it in a more applied direction: how can managers and related stakeholders act on the scenarios it generates?