Yossi Hoffman
http://www.yossihoffman.com
“(Hofstadter adds that a dearly loved person can still exist after bodily death. The self of the "lost" person, previously fully instantiated in their brain, is now instantiated at a less fine-grained level in the brains of the loving survivor/s. He insists that this isn't merely a matter of "living on" in someone's memory, or of the survivor's having adopted some of the other's characteristics, e.g. a passion for opera. Rather, the two pre-death selves had interpenetrated each other's mental lives and personal ideals so deeply that each can literally live on in the other. Through her widower, a dead mother can even consciously experience her children's growing up. This counter-intuitive claim posits something similar to personal immortality—although when all the survivors themselves have died, the lost self is no longer instantiated. Lasting personal immortality, in computers, is foreseen by the "transhumanist" philosophers: see Chapter 7.)”
― AI: Its Nature and Future
“The surprising thing is not that chatbots sometimes generate nonsense but that they answer correctly so often.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“In a paper published in the philosophy journal Mind, Alan Turing described what's called the Turing Test. This asks whether someone could distinguish, 30% of the time, whether they were interacting (for up to five minutes) with a computer or a person. If not, he implied, there'd be no reason to deny that a computer could really think.
That was tongue in cheek. Although it featured in the opening pages, the Turing Test was an adjunct within a paper primarily intended as a manifesto for a future AI. Indeed, Turing described it to his friend Robin Gandy as light-hearted "propaganda," inviting giggles rather than serious critique.”
― AI: Its Nature and Future
“... for the longest time, we have used the form of a piece of content to determine whether it is legitimate and credible, but that proxy is no longer available to us.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“In one extreme case, U.S. health insurance company UnitedHealth forced employees to agree with AI decisions even when the decisions were incorrect, under the threat of being fired if they disagreed with the AI too many times. It was later found that over 90 percent of the decisions made by AI were incorrect.
Even without such organizational failure, overreliance on automated decisions (also known as "automation bias") is pervasive. It affects people across industries, from airplane pilots to doctors. In a simulation, when airline pilots received an incorrect engine failure warning from an automated system, 75 percent of them followed the advice and shut down the wrong engine. In contrast, only 25 percent of pilots using a paper checklist made the same mistake. If pilots can do this when their own lives are at stake, so can bureaucrats.
No matter the cause, the end result is the same: consequential decisions about people's lives are made using AI, and there is little or no recourse for flawed decisions.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference