Quotes by Arvind Narayanan
Showing 1-10 of 10
“There is a striking parallel between the emergence of the modern state and the goals of the technology we have discussed in this chapter. In scaling society up from tribes and small groups, governments have had to confront precisely the problem of enabling secure commerce and other interactions among strangers. The methods may be very different, but the goal is a shared one. Although a maximalist vision for decentralization might involve dismantling the state, this is not really a viable vision, especially when others who share our democracy want a state. However, decentralization through technology is not necessarily in opposition to the state at all. In fact, they can be mutually beneficial. For example, assuming well-identified parties, transfers of smart property can use the block chain for efficient transfers and still use the court system if a dispute arises. We think the big opportunity for block chain technology is implementing decentralization in a way that complements the functions of the state, rather than seeking to replace them.”
― Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction
“Imagine an alternate universe in which people don’t have words for different forms of transportation—only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster—so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector.
Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in.
Artificial intelligence, AI for short, is an umbrella term for a set of loosely related technologies. ChatGPT has little in common with, say, software that banks use to evaluate loan applicants. Both are referred to as AI, but in all the ways that matter—how they work, what they’re used for and by whom, and how they fail—they couldn’t be more different.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“[All] modern chatbots are actually trained simply to predict the next word in a sequence of words. They generate text by repeatedly producing one word at a time. For technical reasons, they generate a “token” at a time, tokens being chunks of words that are shorter than words but longer than individual letters. They string these tokens together to generate text.
When a chatbot begins to respond to you, it has no coherent picture of the overall response it’s about to produce. It instead performs an absurdly large number of calculations to determine what the first word in the response should be. After it has output—say, a hundred words—it decides what word would make the most sense given your prompt together with the first hundred words that it has generated so far.
This is, of course, a way of producing text that’s utterly unlike human speech. Even when we understand perfectly well how and why a chatbot works, it can remain mind-boggling that it works at all.
Again, we cannot stress enough how computationally expensive all this is. To generate a single token—part of a word—ChatGPT has to perform roughly a trillion arithmetic operations. If you asked it to generate a poem that ended up having about a thousand tokens (i.e., a few hundred words), it would have required about a quadrillion calculations—a million billion.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
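The quote above walks through the generation loop step by step: one token at a time, with no plan for the rest of the response. As a rough illustration only (this is not any real model's code; `next_token` and its toy vocabulary are stand-ins for the neural network the authors describe):

```python
# A minimal sketch of autoregressive generation as the quote describes it.
# `next_token` is a trivial placeholder: in a real chatbot, this call is
# the neural network, costing roughly a trillion arithmetic operations.

def next_token(tokens):
    # Placeholder "model". A real LLM scores every token in its
    # vocabulary given the full context and samples one.
    vocabulary = ["the", "cat", "sat", "."]
    return vocabulary[len(tokens) % len(vocabulary)]

def generate(prompt_tokens, max_tokens):
    tokens = list(prompt_tokens)
    for _ in range(max_tokens):
        # One token at a time: the model sees the prompt plus everything
        # generated so far, with no plan for the rest of the response.
        tokens.append(next_token(tokens))
    return tokens

print(generate(["hello"], 4))

# The quote's arithmetic: ~10^12 operations per token times ~10^3 tokens
# is ~10^15 operations, i.e., a quadrillion.
assert 10**12 * 10**3 == 10**15
```

The sketch glosses over tokenization (tokens here are whole words for readability, where real tokenizers use sub-word chunks) and sampling, but the control flow matches the quote: each output token is chosen given only the prompt and the tokens produced so far.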
“The surprising thing is not that chatbots sometimes generate nonsense but that they answer correctly so often.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“Another myth is that tech regulation is hopeless because policymakers don't understand technology. In reality, policymakers aren't experts in any of the domains they legislate. They don't have degrees in civil engineering, yet we have construction codes that help ensure that our buildings are safe. The fact is that policymakers don't need domain expertise. They delegate all the details to experts who work at various levels of government and in various branches. The two of us have been fortunate enough to consult with many of these experts, and they tend to be extremely competent and dedicated. Unfortunately, there are too few of them, and the understaffing of tech experts in government is a real problem. But the idea that heads of state or legislators need to understand technology in order to do a good job is utterly without merit and reveals a basic misunderstanding of how governments work.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“While generative AI is modestly but meaningfully useful for a large number of people, it is more profoundly significant for some. An app called Be My Eyes connects blind people to volunteers who assist them in moments of need. The app records the user's surroundings through the phone camera, and the volunteer describes it to them. Be My Eyes has added a virtual assistant option that uses a version of ChatGPT that can describe images. Of course, ChatGPT isn't as helpful as a person, but it is always available, unlike human volunteers.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“It’s true that companies and governments have many misguided commercial or bureaucratic reasons for deploying faulty predictive AI. But part of the reason surely is that decision-makers are people—people who dread randomness like everyone else. This means they can’t stand the thought of the alternative to this way of decision-making—that is, acknowledging that the future cannot be predicted. They would have to accept that they have no control over, say, picking good job performers, and that it’s not possible to do better than a process that is mostly random.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“Will society be left perpetually reacting to new developments in generative AI? Or do we have the collective will to make structural changes that would allow us to spread out the highly uneven benefits and costs of new innovations, whatever they may be?”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“A good example is what’s happening in schools and colleges, given that AI can generate essays and pass college exams. Let’s be clear—AI is no threat to education, any more than the introduction of the calculator was. With the right oversight, it can be a valuable learning tool. But to get there, teachers will have to overhaul their curricula, their teaching strategies, and their exams.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
“In one extreme case, U.S. health insurance company UnitedHealth forced employees to agree with AI decisions even when the decisions were incorrect, under the threat of being fired if they disagreed with the AI too many times. It was later found that over 90 percent of the decisions made by AI were incorrect.
Even without such organizational failure, overreliance on automated decisions (also known as "automation bias") is pervasive. It affects people across industries, from airplane pilots to doctors. In a simulation, when airline pilots received an incorrect engine failure warning from an automated system, 75 percent of them followed the advice and shut down the wrong engine. In contrast, only 25 percent of pilots using a paper checklist made the same mistake. If pilots can do this when their own lives are at stake, so can bureaucrats.
No matter the cause, the end result is the same: consequential decisions about people's lives are made using AI, and there is little or no recourse for flawed decisions.”
― AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference