There is no arguing that technology makes communication easier.
It is far simpler to shut someone down in the middle of an online conversation: we can unfriend or even block them with a single click. We can also easily get acquainted, make friends, and swap funny, lighthearted stories with a new friend or even a stranger, things that are much harder to do "face to face." Everything is easy when you have a smartphone and high-speed internet in hand.
Yet the very same technology is impoverishing our emotions in an alarming way. The satisfaction we find online is often short-lived, and it can erode one of our most "human" traits: empathy.
As ever more advanced versions of social media become an obsessive part of daily life, they bring certain troubles with them. As the technology changes and our networks expand, our feelings and what we share are no longer confined to a small circle; they now reach "friends of friends of friends," people we do not know at all. Gatherings and conversations are replaced by dizzyingly fast exchanges with an unlimited number of participants, where we can neither hear one another's voices nor see one another's faces. Empathy is certainly harder when you are focused on a screen instead of looking at someone's face.
You will never know whether a careless comment of yours is genuinely hurting someone. Perhaps someone needs your shoulder to lean on more than a word of consolation sent through Messenger.
In The Future of Feeling, the author shares her own personal stories as well as those of doctors, entrepreneurs, teachers, journalists, and scientists about embracing innovation and technology without losing their capacity for empathy. The book is for anyone who wants to know how the brain works and how it builds empathy while living in a world obsessed with technology.
A good book. The thing that sums up the book for me is the quote: "what many of us consider science fiction, the technology for is or very soon will be a reality."
What is highlighted in this book is the state of empathy: how we as humans seem to be losing it, while machines are constantly being built to be more empathetic. However, is machine empathy the same thing we feel as a connection to another person, or is it used to further divide and manipulate us by the corporations and people behind the technology?
The book gives cases and arguments on the good and bad sides of the increasingly technological world we live in.
This is not a conspiracy theory or a witch hunt of a book. It states what technology is available and, in some cases, what is on the horizon, and how it could be used to enhance and enrich our lives, or to cause trouble such as manipulating our emotions against us or controlling the way we think or see things.
For me it is an eye-opener about much of the tech already in use and, in some cases, how it is being used. A good, current, informative, and thought-provoking book, which fills me with both hope and dread about what is to come in the future.
If you have a particular interest in the behavior of those on social media, the future of society and AI, and how it is changing how society communicates, this may be for you. However, I had to add this to my #dropped list. I got more than halfway through and saw that much of this book read like a dissertation a tired student needed to finish for a degree. I appreciate the work that went into this and there are some interesting theories. However, if you live on the internet like me but don't really struggle with the pitfalls that come with being on social media, this may not be of particular interest. Much of the book felt like an info dump and even more of the book told me what I can easily deduce. To be fair, the long-term effects of the internet on anything are still up in the air, so it's difficult to write a book about social media that is not filled with theories. Kudos to the author for tackling the subject, but this didn't work for me.
When I used my Prime First Reads option on this book, I thought I was picking an insightful, conversational piece about how social media and anonymity affect human empathy. I wanted personal insights into cancel culture, online bullying, and ways we might combat such things without having to stop using social media entirely or rely on hypothetical technology with often-horrific ethical implications. More than that, I expected the author to pick a side and stick to it, not provide a teeter-totter of conflicting studies and "professional" opinions from varied sources. And to exercise the empathy she preaches.
Unfortunately, the most interesting and attractive thing about this book is its cover.
The writing is dry and boring; often, I found myself reading without absorbing because the words on the page were meaningless and repetitive filler. Some of the "study findings" and opinions are grossly generalized and therefore highly inaccurate about the human condition. For example, this book makes the claim that empathetic people are "happier, more self-aware, self-motivated, and optimistic" and "cope better with stress, assert themselves when it is required, and are comfortable expressing their feelings." That's a dangerously specific claim, and completely inaccurate for myself and everyone else I know with high levels of empathy. (But hey, nice to know we're meaningless and being labelled as unempathetic just because we're insecure or introverted...?) This isn't the only instance of the author or those she cites being decidedly thoughtless, cruel, or lacking in empathy while trying to preach the importance of being empathetic, either.
Mark Zuckerberg is accused of lacking empathy simply because he once stood for freedom of speech (by my understanding, he's more keen on censoring Facebook these days). His willingness to show empathy for the rights of even the people he despises or disagrees with is portrayed as a sign he lacks empathy for "victims" of his platform. (How a person can be "victimized" by a website they have full control to block people on or stop logging into is anyone's guess.) Likewise, at one point it's directly implied that a man who built a facial recognition AI lacked empathy simply because he let it learn via the internet, and as a result it had a higher concentration of popular and white characters/people in its database and left out Uhura from Star Trek (who is a black woman). The discovery of this oversight is referred to as "calling him out" - one of the very mentalities indicative of dehumanizing instead of empathizing with others online! Shortly after, it's also basically stated that any accidental oversight in an AI which relates to social justice issues is caused by a lack of empathy or the developers not being good enough people. Yikes! This is the exact opposite of the "we need to treat each other better and give the benefit of the doubt when we're upset" moral I expected from the book based on its premise!
The 2016 election and that one time a guy on Facebook was rude to her are also mentioned far more than they should be, and most of the interesting information is provided in brief glimpses or hindered by annoying filler about this, that, and the other study - often including repetition of things already covered. For the political stuff, I wouldn't mind if it were mentioned a few times, but it's frequent enough to make several chapters feel more like a collection of outdated, boring political-tech articles than part of a recently published book. Yes, we get it, people were little curses to each other more than ever on social media in 2016. No, it's not when trolling and doxxing and other such practices began and, no, it hasn't gotten any better since then. How are we supposed to look to the future when so firmly focused on the past?
When the focus isn't so blatantly political or biased, however, there are some interesting tidbits. The author shares tales about robot companions, AI projects, VR methods used to aid everyone from veterans to children with Autism Spectrum Disorder, and the interesting psychological nature behind bonding with Siri and Alexa. I want to make sure I credit her for that, because these things are genuinely fascinating. The trouble is: these are fleeting moments between boring filler and, quite often, some of the most potentially interesting bits are only given a cursory mention before diving into more annoying statistics. I want more of the cool psychological explanations and less of the "in this article, in that study, during the one interview" bits (seriously, the bibliography stretches from 86% to 100% in the kindle version).
Overall, this book has no sense of direction - it meanders in spirals, more or less - and suffers from severe overload of statistical data. It feels at times like a fence sitter trying to figure out where they stand... or a student with no strong opinion grudgingly trudging through an assigned paper when they'd rather be doing something they actually care about instead. That clearly isn't the case, given how much love the author seems to hold for this project when naming a chatbot pal after the then-in-progress book and reaching out to contact all the people she encountered in person, so I'm not entirely sure what went wrong or where. I just know that I don't feel the passion. I, instead, feel dull and boring "technical" writing.
Well, somewhat technical. Despite the dry and semi-professional tone, the author frequently (and annoyingly) refers to herself as a millennial despite being at the oldest end of that spectrum and far from what the generation is normally used to describe, uses annoying made-up words like "phubbing" ('snubbing' someone to pay attention to a phone instead) and iGen (Generation Z), and relies too strongly on the meaning closer to 'compassion and sympathy' when referring to empathy. It feels at times like reading a "Hello, fellow kids" meme come to life or watching someone struggle to connect unrelated things by stretching the definition of their supposed common thread (empathy) until they sort of vaguely seem similar.
I had such high hopes, but ultimately this book left me feeling disappointed and underwhelmed. I'm glad I read it, for the few intriguing pieces and the small selection of new knowledge I've gained, but I'm also mentally exhausted after trudging through such densely uninteresting writing.
I felt a kindred spirit as I read through the Author’s Note and Introduction. It seemed that Kaitlin Ugolik Phillips held some of the same views as I did concerning the use of social media and the dwindling use of traditional social skills. As I continued to read, however, the mood and thoughts changed, affecting the way I viewed the book.
The author begins by examining how empathy can be used in different contexts and how we could use technology to increase that empathy. As the book moves on, technology and its positive uses are explored, from generating a better atmosphere in the workplace to helping kids in school to assisting those afflicted with physical and mental issues. While it is almost impossible to argue that these positive tools are not a boon to mankind, there are still issues that are not fully addressed in the book.
The author acknowledges that AI is not always correct when evaluating what users mean by the words they type, and later in the book controversial methods used by schools are examined. These methods are secretive, and students have no idea they are being examined for suicidal or depressive behaviors. While anyone can easily see the potential of good results, until AI examination can be near or at 100% correct in evaluations, it is unfair to potentially label a child with a title that could follow him or her for life.
Presenting a VR interpretation of a story that reflects only one view IS proselytizing, whether the author wishes to acknowledge that or not. In the author’s own words, people “…remember feelings. But those feelings can lead them to new perspectives, which is a sign of good journalism.”
All the while, I kept reminding myself of something said by teenager Welela Solomon, whose statement was close to how I felt. In discussions with my wife, I always maintain that AI is not bad; it can be a great thing, but what matters is what we do with it. Ms. Solomon says, "Tech is a tool. It does what we tell it to do."
As the book progressed toward the middle, the “tech-obsessed” mantle was laid aside and there was more discussion on achieving empathy by using the available tech. Again, it is hard to argue against something that helps people. I was concerned at times that the author would present items as fact without providing any references or footnotes. For example, a statement that “Research shows that overweight women and women of color receive some of the worst health care in general, as too many doctors insist that any malady must be related to their weight or believe that nonwhite patients require less pain relief.” While an Internet search did lead me to a study concerning obese women patients, searching for a study involving race or ethnicity turned up an article from physiciansweekly.com, which contained the following: “Because it wasn’t controlled experiment, the study doesn’t prove that race or ethnicity directly impacts how ER clinicians treat acute pain.” I am not disputing the merit of Ms. Phillips’ statement, just that a few links or footnotes would have been appreciated, as they would have bolstered her points.
I was encouraged to see a chapter on ethics, and the admission that the "…empathy impact depends largely on who's wearing the helmet" as well as one of my concerns, that much "…depends in large part on who's programming the content inside the helmet." This is a great point, and determining who (which naturally leads to what) is programmed can be either a positive or a negative. I wish this chapter had been longer, as this aspect could lead us either to a utopia or to the specter of a clockwork orange. As one group expressed, "What if the tech that allowed us to share our feelings could also manipulate them?"
In the court of ideas, Ms. Phillips and I agree on some points and differ on others. I recommend this book to everyone, no matter where you stand on the issue. AI will be increasingly involved in all our lives, and guidance on how to think about it and apply it to better our lives looks set to become part of our everyday routines. While I wouldn't consider this book to be all-inclusive, there are plenty of ideas, and the book is accessible enough to allow anyone to begin understanding what the near future might bring. Five stars.
Disappointing. The book tried to connect too many ideas (each with one or two experts and a rehashing of previously published books or articles) with little critical analysis (with the exception of the last chapter). I should have skipped to the bibliography and read the source material instead.
This was a really interesting and engaging book, spoiled by the heavy-handed political examples. The author used them to illustrate her points, but she only chose examples on one side of the political aisle. Makes sense--those are the ones she herself will resonate with. But in so doing, she inherently vilifies everyone who disagrees with her as "non-empathic." Pretty ironic for a book that is supposed to be all about empathy for those who disagree with you.
“Our future will likely be even more tech focused than the present. We can’t control all the tech products that come at us, but we should assert some agency in how they affect our lives.” – Kaitlin Ugolik Phillips, The Future of Feeling
This book addresses an important topic: how to build empathy in the use of our technology tools. One needs to look no further than strings of increasingly incendiary comments on social media forums to obtain evidence of the problem. The author has investigated the various ways technology is being used to foster empathy, such as virtual reality, apps, bots, games, and artificial intelligence. This book outlines many options, along with advantages and potential abuses.
The author presents the research results of others in a coherent manner. I think it requires a specific interest in the subject to fully engage in the material. It could have used additional focus on building interpersonal skills via face-to-face interactions, listening to understand the other person’s point of view, asking non-inflammatory questions to find out more, and transferring those skills to social media. I value the research results presented and feel it was worth my time reading it.
"The Future of Feeling: Building Empathy in a Tech-Obsessed World" is more a weaving together of technology and empathy than an actual argument for empathy. It may be important to recognize this difference before reading the book as it may very well impact your enjoyment of and appreciation for the book.
I appreciated Phillips's journey through a variety of areas of technology and the current movement toward utilizing these technologies in a more humanized, empathetic way. For the most part, these technologies and the experiments cited by Phillips are in their early stages and it's unclear where this is all going. It will be interesting to revisit this book a few years from now.
Phillips uses an awful lot of personal narrative in the book, an approach that was hit-and-miss for me; so much of the book is intellect-based that it almost felt like editorializing whenever she inserted her own personal experiences.
While I'm not someone who works in a tech field, I also found much of the material here not particularly groundbreaking. I'd heard of quite a bit of this before and at times found myself skimming pages rather than deeply reading them.
For those with a particular interest in this subject, I think an appreciation of this book is more likely. However, as someone who received the Kindle version as a First Reads pick, it's a little outside the realm of what I'd usually pick up, but I appreciated the material and the food for thought provided by Phillips.
The Future of Feeling discusses the impact of technology on human emotions and the various projects tech companies have launched in their attempts to prevent cyberbullying, most of which fail to reduce it. Among the topics the book covers is racism on social media and its effect on the mental health of teenagers when they use, for example, Facebook.
What amazes me is that technology can itself be the solution to the problems it creates. The difference lies in the rules and values each company aims for: does the company want to preserve a positive future for human communication, or is its goal only profit and the approval of certain governments and certain policies?
A wonderful and useful book for anyone interested in technology and its impact on people.
There are some views I disagree with the author on because of my own beliefs, but everyone is entitled to their opinion. My thanks to the author for this valuable book about a problem nearly everyone on the planet is going through.
It is just a collection of technology news reports related to AI, VR, and tech policies that touch on human emotions... I expected the author to give deeper insights into how human feelings and empathy have evolved over these years... but what I read was just a bunch of disorganised news stories, calling out the names of different researchers, different schools, different apps, and so on... Some cases are interesting to explore, but it's not worth the time of a whole book just to get to know a few apps.
Felt at times that the writer made this about herself and wrote to support her own preexisting biases. If she had focused more on the tech developments and less on her own experiences and ideologies, it might have been an enjoyable and informative read.
Don’t judge a book by the title on its cover! What I thought was going to be a thoughtful analysis of the psychological and sociological effects of the age of social media on our emotions turned out to be a list of empathy-based VR apps and a regurgitation of other peoples’ research. While I appreciated the author’s clear earnestness, the book was dull, did not say much and was not what I was looking for.
"بحثٌ ثاقبٌ لما تفعله كُلٌّ مِن وسائل التواصل الاجتماعي والذكاء الاصطناعي، وتكنولوجيا الروبوت والعالم الرقمي لعلاقاتنا مَع بعضنا بعضاً ومَع أنفسنا. ليس هُناك شكٌّ فِي أَنَّ التكنولوجيا سهّلت التواصل فيما بيننا، وأصبح مِنَ الأسهل أيضاً تجاهُل شخصٍ ما عندما يتحدّث إلينا عبر الإنترنت. لماذا تجهد نفسك بفهم الغرباء - أو حتّىٰ المعارف - عندما يُمكنك التنمر عليهم أو حظرهم، أو فقط نقر زر "إلغاء الصداقة" وعدم النظر إِلىٰ الوراء مُجدداً؟! علىٰ الرغم مِن أَن ذلِكَ قد يكون مُرضياً لفترة وجيزة، فإنه مِنَ المُحتمل أيضاً أَن يؤدي إِلىٰ تآكل إحدىٰ أهم سماتنا البشرية : التعاطف. كيف سيبدو المُستقبل عندما يتلاشىٰ هذا الجزء الأساسي مِن أجل مُجتمعٍ مُسالمٍ وصحّيٍّ ومُنتِج؟! الجواب التحذيري ولكن المفعم بالأمل يوجد فِي هذا العمل عَن المشاعر المُهددة بالانقراض. تُشارك كيتلين أوجلوك فيليبس فِي مُستَقْبل المشاعر قصصها الشخصية بالإضافة إِلىٰ قصص الأطباء ورجال الأعمال، والمُدرسين والصحفيين والعلماء حول دفع الابتكار والتكنولوجيا إِلىٰ الأمام دون الخضوع للعزلة. هذا الكتاب مُخصص لأي شخص مُهتم بكيفية عمل أدمغتنا، وكيف يتم إعادة توصيلها بمهارة لتعمل بشكل مُختلف، وما يعنيه ذلِكَ فِي النّهاية بالنسبة إلينا بصفتنا بشراً."
2.5* this is one of those books where the author is a good writer and researcher, but doesn't actually fully learn or integrate their own work. a major weakness of this work is audience - who is this speaking to? to be honest, reads more like a personal investigation educating the author than with an audience in mind (affluent left-leaning white women as audience?). there are moments where the author seems to extend far more empathy to trolls and bad faith users, rather than to their victims, so that's uncomfortable throughout the text.
probably a better use of one's time to read the work of the folks Phillips interviewed for this book, rather than following along with her educational journey...
This book became more about using virtual reality to build empathy rather than what I expected, which was a book about understanding and building empathy through the technologies we use every day. The 2016 election came up far too often, as did a politically-fueled Facebook exchange the author had with a former high school classmate. The writing was dull and I found my mind wandering, often having to read an entire page over once I’d realized I was not processing the information I was reading. I’m giving up on this one.
Rather misleading title, as the book discusses the potential roles of virtual reality in improving our perception of others and our ability to understand their feelings. It comes with the typical features of journalistic research: flowing text, a good amount of information, nowhere near expert work. I gave up halfway, as the writing could only sustain my, admittedly limited, interest in virtual reality that far. I got the point early on (VR affects empathy) and moved on after the middle of the book.
The author's lack of depth ruined what should have been a great book. Also, the author's political point of view had too much visibility. Are there any objective authors left?
I read this for my 2020 Reading Challenge and the prompt was a book involving social media. I have no idea how I even managed to finish this book. It was also my Amazon prime first read for January.
This book is about an extremely important topic - how is technology affecting our social relationships and feelings? Does this new technology make us more or less empathic? Is being able to post your thoughts and feelings online, so that anyone in the world can see them, as important to your social/emotional development as a one-to-one conversation with the person sitting next to you? Is a conversation with a robot with artificial intelligence programmed to emulate empathy helpful, or is it encouraging you to live in an environment built on a lie (that AIs can feel)? Does getting news about disasters around the globe instantaneously make us more able to empathize with the disaster victims, or does it lead us to develop a thicker skin and pull away from the horror? When we communicate with co-workers via text and email, does it make us more or less likely to be honest with them? Is an honest and open conversation with an AI therapist the same, better, or worse than a conversation with a human therapist? Can or should a bot be your best friend? How do we clean out the prejudices that are built into the algorithms that run facial recognition software, Google searches, and resume-reviewing software?
Let me digress for a moment. I am of the boomer generation, so I was raised on violent and racist cartoons. My parents worried about how my development would be negatively affected by sitting in front of the boob tube. Did I become immune to violence by watching the coyote get blown up by the road runner? Was I programmed to be racist by the talking magpies of Heckle and Jeckle? Did watching Combat make me hate Germans, or did it teach me some of the horrors of war? I do remember being appalled, even when I was six years old, by the racism of the character Buckwheat in the Our Gang shows - in fact, this was probably one of my first lessons that racism existed. So did all of this boob tube education affect me negatively or positively? Obviously I am not objective enough to know.
Kaitlin Ugolik Phillips is posing concerns about Facebook, artificial intelligence, and facial recognition similar to those my parents' generation posed about the boob tube. Phillips's answer to the question "Is modern technology making us more empathic or less empathic?" is yes.
There is no doubt that this new technology has many dangers built into it. The manipulation of the 2016 election, the anti-vaxxers' measles epidemic, and the rise of white nationalism are just a few, large examples of how we are being manipulated by this new technology. But are these any different than when TV advertising "makes" us spend billions of dollars on make-up, deodorant, toothpaste, and clothes?
Phillips asks a lot of good questions, she shares a lot of personal stories, and she introduces us to people who are both leading and questioning many of these new technological advances. I learned a great deal about areas of technology I was unaware of - which is even more interesting when you realize that until recently I ran a computer business.
Unfortunately, Phillips does not present her own conclusions about these topics. She talks about her “addiction” to Facebook but, otherwise, does not make a lot of “I” statements in reaction to what she is reporting. I feel that this is a major flaw in this type of book - while she was presenting the data I kept waiting for her analysis. Despite this, I am glad I read this book and learned quite a bit. Still, I constantly had a mental fight with the author in each chapter asking, in my head, “Phillips, what do you think?”
Fantastic book that explores questions I've been thinking about -- how can technology such as VR and AI be used to help us understand each other? Can we improve existing technology to be more emotionally fulfilling?
A recent book by Kaitlin Ugolik Phillips caught my attention because I've spent a considerable amount of reflection in the last couple of years on precisely what the title touts, The Future of Feeling: Building Empathy in a Tech-Obsessed World. I even wrote a theological essay comparing modern surveillance capitalism (and governance) to the idea of divine omnipresence and omnipotence (with obvious qualifying aspects). So, I was very grateful to find this volume, although I read the book rather than listening to the audiobook pictured here. So, what is this empathy in the title, and how does it apply to technology? She quotes futurist Jane McGonigal: "Empathy requires you to use your imagination in the same way. It requires you to get your brain to simulate something you have no personal, concrete experience with." (p. 58)
As one would expect, this book contains many qualitative stories, but what I truly appreciated was the significant amount of quantitative research cited and the introduction to two tools to combat toxic discussions (aka "use an algorithm to identify trolls efficiently"). First, Faciloscope has an online site where one can paste in conversations from the web and measure the structure of a conversation (p. 36). One receives a chart and a phrase-by-phrase breakdown showing whether each move is: 1) staging (setting out the ground rules for a conversation); 2) evocation (identifying possible relationships between the participants); and 3) invitation (direct solicitation of participation through questions and requests). Second, Google has some experimental code available to the user (once they have established a Google Cloud project) that one can use as an API on their website to help moderators identify potentially toxic conversations before the sparks burst into flame wars (p. 37). The latter uses "machine learning" to develop the criteria, though, and the author discovered that a sentence identified as toxic at one period may turn out to be softened with later input (p. 38). [Note: Although not specifically mentioned in the book, following up on these two tools led me to other projects working on the same problem.]
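For readers curious what plugging such a tool into a moderation workflow might actually look like, here is a minimal sketch. The book does not name Google's experimental code, but the description matches the publicly documented Perspective API, so the endpoint, request shape, and response fields below are assumptions drawn from that public documentation rather than from anything in the book; the API key and example comments are placeholders.

```python
# Minimal sketch (assumption: the Google tool described is the Perspective API).
# Requires an API key from a Google Cloud project with the Comment Analyzer API enabled.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity_score(text: str) -> float:
    """Return a 0-1 toxicity probability for a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # summaryScore is the model's overall toxicity estimate for the whole comment.
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    # Example: surface the comments a human moderator should review first.
    for comment in ["Thanks for sharing this!", "Nobody cares what you think."]:
        print(f"{toxicity_score(comment):.2f}  {comment}")
```

As the author notes, the scoring criteria come from machine learning, so a comment flagged as toxic today may be scored differently after the model sees more input; a sketch like this would only rank comments for human review, not decide anything on its own.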
Another useful reference was to the Face the Future project, a website full of media and games to allow students to consider the implications of the future. One of the games to which The Future of Feeling: Building Empathy in a Tech-Obsessed World pointed was called FeelThat, essentially a FitBit for emotions (pp. 41-43). Although the product doesn’t exist, the videos used with the project imagine that one was wired directly into another person’s emotions in good experiences and even in a death experience. Another experiment of which I was unaware was that of Robyn Janz and the GALA (Girls Academic Leadership Project) using VR. Janz shares: “Here’s how I look at it: VR empathy is a fantastic experience built on irony. You go through something that is anywhere from completely joyous and euphoric to absolutely catastrophic and cataclysmic. How you come out of immersion is more to do with your subjective experience in life than anything else.” (p. 55) I also had not heard of Peggy Weil’s Gone Gitmo experience for Second Life (p. 84).
In covering computer games for so many years, I’ve heard a lot of talk about gamification. In this book, I read about Pymetrics and neuro-science-based games used by companies such as J. P. Morgan and Hyatt Hotels in the hiring process (p. 111). While not a game, I was fascinated by a medical simulation called SymPulse that gives electrical jolts to doctors and students so that they can empathize with Parkinson’s patients (p. 122). Also proving interesting to me was the Oculus Rift study of 2017 demonstrating that patients experiencing a VR experience of being on a beach had less stress and positive memories of a visit to the dentist (p. 129). But what I’d really like to get my hands on is “…the Thync, a triangular device created by neuroscientists at the Massachusetts Institute of Technology that is, according to marketing materials, “the first consumer health solution for lowering stress and anxiety.” The device electrically stimulates the nerves in the user’s face and neck that have been found to help regulate stress hormones. It’s touted as the lower-tech, lower-risk version of brain stimulation, and some research shows it can help treat epilepsy, anxiety, and depression.” (p. 186)
I loved the recounting of apparent successes in using technology to enhance rather than suppress empathy, but Phillips doesn't ignore the ugly surveillance capitalism looming beneath the surface. For example, "many schools have started using internet filters and type trackers to detect when students' search behaviors suggest depression or suicidal ideation; the education company Pearson conducted a 'social-psychological' experiment on thousands of unaware math and science students using its software, to see whether encouraging messages helped them solve problems; and one startup company, BrainCo, is raising millions in funding to create electronic headbands that will allow it to analyze students' brain data." (p. 56) And, while companies facilitating such experiments are given the right to collect data to "improve their product," what does that really mean? Phillips asks practical questions about how long data would be kept and who the ultimate keepers of the data would be.
The chapter on virtual reality suggested that VR experiences could lead to more empathy, but it really depended upon the way one thinks about change prior to the experience (p. 67). Phillips cited a 2018 report from the Tow Center for Digital Journalism finding that VR content "prompted a higher empathetic response than static photo/text treatments and a higher likelihood of participants to take 'political or social action' after viewing." (p. 67) She also gave an account of a VR experience called I am Robot which put participants in the role of gender-neutral robots. Most experienced a lowered inhibition level that allowed individuals of both sexes to feel free enough to dance in the experience, even though their pre-experience statements indicated that they would not dance (p. 75). But there is a cautionary word: "If VR experiences can trigger empathy in viewers, they can also trigger other feelings: stress, distress, overwhelm, exhaustion, anxiety, and, in cases where people with preexisting trauma may not have been adequately prepared, even symptoms of PTSD." (p. 76)
I was glad to see Ms. Phillips spend some time discussing how journalistic VR (or immersive experiences) can manipulate empathy. Her big question is how one can be transparent when taking some liberties from pure objectivity (p. 89). How can this be done when the goal is putting the consumer of the "story" inside the story? (p. 89) She gives specific examples from the world of journalism, then observes: "The threats of manipulation and subjectivity are more easily dismissed when it comes to advertising and nonprofit outreach. But in journalism it's important to evoke empathy without coming across as if you're forcefully extracting it." (p. 90) And again, "'The invisibility of the journalist in VR can be a dangerous illusion in the consumption of media when viewers begin to analyze, relate to, and act on the stories they consume.'" (p. 91)
I was intrigued by the following criticism of such efforts: "Social-justice activists have termed this phenomenon trauma porn, a spectacle that makes viewers feel good about themselves while giving no benefit—and at times perpetuating harm—to the individuals and communities being depicted." (p. 92) That's an interesting observation with some validity, but it could be leveled at almost any media, including books and ordinary documentaries. Indeed, later Phillips admits that in contrast to standard documentaries, "People don't tend to come away from these pieces remembering facts and figures—they remember feelings. But those feelings can lead them to new perspectives, which is a sign of good journalism." (p. 95)
The next chapter moves back to the idea of empathy in general. It speaks of harassment and empathy training. Why is this important? "Leaders rated as having high empathy by direct reports are 2.5 times more likely to set clear performance expectations, hold others accountable for maintaining high performance, and address performance issues in a fair and consistent manner, according to data from a DDI meta-analysis." (p. 102) But it was in the discussion of medical empathy that I found this jewel, in turn quoted from Leslie Jamison: "'Empathy isn't just listening, it's asking the questions whose answers need to be listened to. Empathy requires inquiry as much as imagination. Empathy requires knowing you know nothing.'" (pp. 119-120)
I was thrilled to learn about VR programs that reduced pain and stress in patients who have blood drawn regularly and in burn victims whose bandages are frequently changed. Still, I worry about experiments with AI such as "Ellie," an AI therapist currently being used in a pilot program with the U.S. military. The AI therapist "admits" initially in the session that "she" is not a therapist, but most soldiers feel safe talking to her. Right now, their data is private, but what happens when it is turned over to the military? (p. 142) Empathy toward AI or robots is a strange thing. I was fascinated by Phillips's account of a University of Washington study from 2012 showing that 98% of children thought it was wrong to shut a human being in a closet and 100% were okay with placing a broom in a closet, but only 54% felt it was permissible to shut a robot named Robovie in a closet (p. 144). She also cited a German study from 2018 which discovered what happened "when eighty-five people were given the choice to switch off a robot they had either just been chatting with or just been using in some functional way. Some of the robots also said out loud that they did not want to be turned off. The results showed that people preferred to keep the bot on when it protested. The participants seemed to be less stressed about turning off the bots that had only been helping them do a task, which wasn't too surprising. But participants had the hardest time turning off robots that both were functional and objected to being turned off." (p. 145)
Considering robots, I’ve also been forced to think about what Caleb Chung, co-inventor of the Furby, had to say. In a 2007 TED Talk, Pleo’s creator, Chung, said that he believed “humans need to feel empathy toward things in order to be more human,” and he thought he could help with that by making robotic creatures. As he told the producers of the radio show Radiolab for an episode titled “More or Less Human,” he’d made Pleo in a way that he thought would evoke empathy—giving it the capacity to respond to unwanted touch or movements by limping, trembling, whimpering, and even showing distrust for a while after such an incident. “Whether it’s alive or not, that’s exhibiting sociopathic behavior,” he said, referring to the way the tech bloggers attacked Pleo (p. 148).
At this juncture, Phillips reminds the reader of an underlying theme in speaking of AI. She refers to a TED Talk by Danielle Krettek of Google's Empathy Lab and summarizes it as: "…the things we worry about and fear when it comes to AI—that robots will destroy us all, or at least take our jobs—tell us a lot about how we feel, and what we fear, about humanity itself." (p. 161) Also, more than once I've heard the story of the CIA testing real-time face-recognition technology and discovering that it didn't pick up African-American faces. For example, on a picture of the original Star Trek bridge crew, it totally ignored Lt. Uhura (p. 171). I particularly liked the call for an algorithmic accountability group (p. 175).
The Future of Feeling: Building Empathy in a Tech-Obsessed World does exactly what a non-fiction book of its kind is supposed to do. It both frightens me and offers hope. Of course, only the future will determine which will dominate, horror or hope. In the meantime, Phillips offers plenty of food for thought and discussion.
I found this book to be a very good read. Technology is only going to become even more integrated into our daily lives than it already is, so it's important to stay mindful of our ever-increasing dependence on technology.
A significant first step in an incredibly important conversation. The Future of Feeling helps lay the foundation for a framework to discuss how technological innovations should build and encourage empathy. Most of the early chapters merely summarize existing research and don't add original thinking to the topic or speculate much on the future, as it is a very slippery and complex area, but the last two chapters do raise some extremely important questions--questions that far too few people are asking. Phillips deserves major credit for tackling what can be an overwhelming subject and for candidly admitting that she doesn't have all the answers. We all would be well-served to dedicate more energy to further exploring her questions and insisting that the major players in technological development do likewise.
An interesting and well-researched look at how current and future technology--from virtual reality and AI to robots and chatbots--may be able to help us empathize more with situations and people we might not otherwise naturally encounter.
Review of The Future of Feeling by Kaitlin Phillips
I was looking forward to reading this book. Empathy is one of the basic principles of Client Centered Therapy which formed the core of my doctoral training in psychology. It also formed the basis of my work in the field of Counseling Psychology for many years.
The term empathic (not empathetic) in my training meant understanding the feelings of the client with whom you were working and conveying that understanding to the client in the course of psychotherapy. The relationship built on empathy was one of the key components of therapy.
The author toward the beginning of the book discussed concerns about depending on artificial intelligence to think for us. She went on to cite numerous studies about how close machines can come to humans in conveying empathy. She lost me right about there. To me, understanding the feelings of others and relating to them are among the most important skills people can learn. Empathy may be the key to making our way out of the morass in which we find ourselves in today’s society.
A machine can be made to sound like it understands what you are feeling. Yet machines do not care how you feel no matter what they are programmed to do. I lost interest in the exploration of how machines can sound empathic. Machines will never take the place of people understanding and valuing each other regardless of what we imagine they can do. That is about as far as I can imagine going in the journey suggested by this book.
In the era of alternative facts and the spread of misinformation, sympathy is not a virtue in our polarizing world.
The book has a good overall point to present, but it was very dry and lacked organization. It might have been better published as a series of articles, nothing more.
I took so long to finish this book, which I started on the last day of my last visit to my paradise (Fermilab, Batavia), because of the pandemic.