Yossi Hoffman
http://www.yossihoffman.com
“(Hofstadter adds that a dearly loved person can still exist after bodily death. The self of the "lost" person, previously fully instantiated in their brain, is now instantiated at a less fine-grained level in the brains of the loving survivor/s. He insists that this isn't merely a matter of "living on" in someone's memory, or of the survivor's having adopted some of the other's characteristics, e.g. a passion for opera. Rather, the two pre-death selves had interpenetrated each other's mental lives and personal ideals so deeply that each can literally live on in the other. Through her widower, a dead mother can even consciously experience her children's growing up. This counter-intuitive claim posits something similar to personal immortality—although when all the survivors themselves have died, the lost self is no longer instantiated. Lasting personal immortality, in computers, is foreseen by the "transhumanist" philosophers: see Chapter 7.)”
― AI: Its Nature and Future
“In one extreme case, U.S. health insurance company UnitedHealth forced employees to agree with AI decisions even when the decisions were incorrect, under the threat of being fired if they disagreed with the AI too many times. It was later found that over 90 percent of the decisions made by AI were incorrect.
Even without such organizational failure, overreliance on automated decisions (also known as "automation bias") is pervasive. It affects people across industries, from airplane pilots to doctors. In a simulation, when airline pilots received an incorrect engine failure warning from an automated system, 75 percent of them followed the advice and shut down the wrong engine. In contrast, only 25 percent of pilots using a paper checklist made the same mistake. If pilots can do this when their own lives are at stake, so can bureaucrats.
No matter the cause, the end result is the same: consequential decisions about people's lives are made using AI, and there is little or no recourse for flawed decisions.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“Another myth is that tech regulation is hopeless because policymakers don't understand technology. In reality, policymakers aren't experts in any of the domains they legislate. They don't have degrees in civil engineering, yet we have construction codes that help ensure that our buildings are safe. The fact is that policymakers don't need domain expertise. They delegate all the details to experts who work at various levels of government and in various branches. The two of us have been fortunate enough to consult with many of these experts, and they tend to be extremely competent and dedicated. Unfortunately, there are too few of them, and the understaffing of tech experts in government is a real problem. But the idea that heads of state or legislators need to understand technology in order to do a good job is utterly without merit and reveals a basic misunderstanding of how governments work.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“The surprising thing is not that chatbots sometimes generate nonsense but that they answer correctly so often.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“While generative AI is modestly but meaningfully useful for a large number of people, it is more profoundly significant for some. An app called Be My Eyes connects blind people to volunteers who assist them in moments of need. The app records the user's surroundings through the phone camera, and the volunteer describes it to them. Be My Eyes has added a virtual assistant option that uses a version of ChatGPT that can describe images. Of course, ChatGPT isn't as helpful as a person, but it is always available, unlike human volunteers.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
Goodreads Librarians Group
— 311344 members
Goodreads Librarians are volunteers who help ensure the accuracy of information about books and authors in the Goodreads catalog.
NYCIST
— 38 members
A group of people that school children associate with technology who also know how to read.