"Fascinating and insightful. . . . I cannot recall a book that has made me think more about the nature of thinking." -- Richard C. Lewontin
Harvard University
Everyone knows that optical illusions trick us because of the way we see. Now scientists have discovered that cognitive illusions, a set of biases deeply embedded in the human mind, can actually distort the way we think.
In Inevitable Illusions, distinguished cognitive researcher Massimo Piattelli-Palmarini takes us on a provocative, challenging, and thoroughly entertaining exploration of the games our minds play. He opens the doors onto the newly charted realm of the cognitive unconscious to reveal the full range of illusions, showing how they inhibit our ability to reason--no matter what our educational background or IQ. Inevitable Illusions is stimulating, eye-opening food for thought.
Applied-psychology manuals are usually fascinating while you read them and then, contradicting their whole intent, prove as difficult as they are sporadic to apply in real life. [Not by chance, I don't read them.] Not this one. MPP builds a series of practical, extremely powerful examples cataloguing the bugs identified so far (and fit for popular consumption) in a most interesting operating system: our own. At the root of all this is the environment in which we evolved, one where it was always far more important (and for a span immensely longer than the entire history of culture) to gauge by instinct the distance between us and our prey than to weigh the value of an insurance policy or a purchase. Anyone harboring prejudices against cognitivists - probably owing to some unfortunate encounter in the workplace - who would accuse them of mechanizing the individual can relax: MPP is a distinguished professor at MIT and a fine popularizer, and that he is no brainwasher will be obvious, given that the book's purpose is to keep minds from being brainwashed. Incidentally: the book is dedicated to Tversky and Kahneman, and it predicts a Nobel in their near future. Prediction: dead on. I made one myself (not documented in writing, of course): this book is so well made and serves its purposes so well that it will hardly be publicized at all. Prediction: dead on. Review available on L'Indice: http://www.internetbookshop.it/code/9...
I came to this book hoping to get a little more insight into the work of Tversky and Kahneman; instead I was treated to needlessly complex prose, poor layout, gee-whiz logic, and a final paragraph that was completely uncalled for and leaves me baffled as to the author's motives.
The book starts off well enough: Piattelli-Palmarini begins with some simple examples of how we, as humans, can fool ourselves. However, this is also where I began to see the problems that would come to plague the book: the writing style felt stilted and overly complex, and bits of logical, connective sentences seemed to be missing. With his second example, part of the reason became clear: the author put some of this information in an appendix at the back of the book. This forces you, the reader, to constantly flip to the back for every little detail, which quickly became annoying.
While the appendix provided the missing links, it did not help the writing style. Perhaps something was lost in the translation, but many times I had to go back over a sentence to be sure I had read the author correctly. And it was not just the language; pieces of the underlying logic were left out too. This became especially clear in the author's description of Bayes. So much was omitted that I suspect an unfamiliar reader will leave the book with an incomplete understanding. In fact, for almost all the cognitive illusions, I suspect the reader will miss some of his points or, worse, find his arguments so obtuse that she will discount the entire field. This is both dangerous and a disservice.
What's worse, the author spends nearly 7% of the book attacking one man, Gerd Gigerenzer. He also tends to be completely dismissive of anyone who doesn't entirely agree with him. When you are talking about cognitive issues, speaking in absolutes will come back to bite you. It also sets off all of my warning flags: this is someone with an ax to grind, not someone who is here to teach.
The final paragraph, though, is the one that sealed the deal for me. Out of nowhere, he attacks biological evolutionists! Granted, there are open questions in the field about adaptation, but to dismiss almost an entire discipline takes a lot. It also makes me suspect the author's ability to think critically.
Wow! Talk about mind-expanding. Yes, our brains harbor certain biases of which we are not only unaware, but which we stubbornly cling to and persist in obeying. Seven types are identified, and if you learn to recognize them, you'll end up a much wiser and savvier person.
I do agree, the writing style leaves something to be desired. There were several explanations of math or logical concepts that I simply could not get until I consulted the Googler. And the organization of the book -- sometimes I wondered if I was re-reading whole sections. But...translation problems? The content is still fascinating, especially if you are new to this subject.
Clearly not the best popular book in cognitive economics
JDN 2456462 EDT 14:41.
I must respectfully disagree with the reviewer at Nature; Massimo Piattelli-Palmarini's Inevitable Illusions is not "the best popular book in this field". That title belongs squarely to Thinking, Fast and Slow by Daniel Kahneman. (Piattelli-Palmarini's name takes a long time to write, so I shall abbreviate it MPP.) Inevitable Illusions is decent, satisfactory; and maybe when it was written in 1994 it really was the best popular book available. But some of MPP's explanations are awful, and a few of them are just outright wrong.
It's not an awful book; it's very easy to read, and someone who had no exposure to cognitive economics would indeed learn some things by reading it. I do like the way that MPP emphasizes repeatedly that cognitive illusions do not undermine rationality; they merely show that human beings are imperfect at being rational. It's odd that this is controversial (doesn't it seem obvious?), but it is; neoclassical economists to this day insist that human deviations from rationality are inconsequential.
MPP's explanations of the sure-thing principle and Bayes' Law are so singularly awful and incomprehensible that I feel I must reproduce them verbatim: "If, after considering all the arguments pro and con, we decide to do something and a certain condition arises in that something, and we decide to do that very thing, even if the condition does not arise, then, according to the sure-thing principle, we should act immediately, without waiting." It's much simpler than that. If you'd do B if A is true and also do B if A is false, then you should do B without needing to know whether A is true. To use MPP's own example, if you'll go to Hawaii whether or not you passed the test, then you don't need to know whether you passed before you buy your tickets to Hawaii.

And on Bayes: "The probability that a hypothesis (in particular, a diagnosis) is correct, given the test, is equal to: The probability of the outcome of the test (or verification), given the hypothesis (this is a sort of inverse calculation with respect to the end we are seeking), multiplied by the probability of the hypothesis in an absolute sense (that is, independent of this test or verification) and divided by the probability of the outcome of the test in an absolute sense (that is, independent of the hypothesis or diagnosis)." Once again, we don't need this mouthful. Bayes' Law is subtle, but it is not that complicated. The probability A is true knowing that B is true is equal to the probability B would be true if A were true, divided by the probability B would be true whether or not A were true, times the probability A is true. B provides evidence in proportion to how much more likely B would be if A were true; and then that evidence is applied to your prior knowledge about how likely A is in general. Most people ignore the prior knowledge, but that's a mistake; even strong evidence shouldn't convince you if the event you're looking for is extremely unlikely.

It's probably easiest to use extremes. If B is no more likely to be true when A is than when A isn't, it provides no evidence; P(B|A) = P(B) and thus P(A|B) = P(B)/P(B)*P(A) = P(A). If B is guaranteed to be true whenever A is true and guaranteed not to be true whenever A is false, then it provides perfect evidence: P(B|A) = 1, P(B) = P(A), and P(A|B) = 1*P(A)/P(A) = 1.
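To see the prior at work, here is a minimal numerical sketch of Bayes' Law in Python; the disease prevalence and test accuracies are invented for illustration:

    # Bayes' Law: P(A|B) = P(B|A) * P(A) / P(B)
    # A = "patient has the disease", B = "the test comes back positive".
    # All figures below are hypothetical.
    p_A = 0.01              # prior: 1% of people have the disease
    p_B_given_A = 0.95      # test is positive in 95% of true cases
    p_B_given_not_A = 0.05  # false positives: 5% of healthy people

    # Probability of a positive test overall (the denominator).
    p_B = p_B_given_A * p_A + p_B_given_not_A * (1 - p_A)

    # Posterior: probability of disease given a positive test.
    p_A_given_B = p_B_given_A * p_A / p_B
    print(round(p_A_given_B, 3))  # 0.161

    # Even a strong test leaves the diagnosis unlikely when the
    # disease itself is rare: the prior, P(A), matters.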
At the end of the book, MPP answers rebuttals from "cognitive ecologists" who (if his characterization is accurate) think that we suffer no significant cognitive illusions, and that's of course very silly. If this is not a strawman, it's a bananaman. A more charitable reading would be that we wouldn't suffer survival-relevant cognitive illusions in a savannah environment 100,000 years ago; but that's a far weaker claim, and proportionately less interesting. Life was simpler back then. Nasty, brutish, and short; but simple. We might have experienced illusions in the past (if the mutations to make us do better simply did not exist), but it's equally reasonable to say that we didn't. The point is that we live a much more complex life now, so heuristics that worked before don't anymore. MPP is of course right about that part.

But he also sees illusions that aren't really there (meta-illusion?). For instance, he seems deeply troubled by the fact that similarity judgments are intransitive, when in fact this makes perfect sense. Being "similar" isn't sharing a single property; it's sharing a fraction of a constellation of properties. Jamaica is like Cuba in that they are small island nations in the Caribbean; Cuba is like the Soviet Union in that they are Communist dictatorships. Jamaica is not like the Soviet Union, because they don't have much in common. There is no reason we would expect this judgment to be transitive, and anyone who does think so is simply using a bad definition of "similarity".

Similarity is more like probability; and from P(A&B) = 0.6 and P(B&C) = 0.5, you can't infer much at all about P(A&C). The probability axioms place certain limits on it, but not very strong ones. Suppose 60% of doctors are men with blue eyes, and 50% of doctors are Americans with blue eyes; how many of the doctors are American men? We could have 50% blue-eyed American men, 10% blue-eyed German men, and 40% brown-eyed American men. We could also have 10% blue-eyed American men, 50% blue-eyed German men, and 40% blue-eyed American women. So the number of American men could be anywhere from 10% to 90%. (A quick numerical check of these bounds appears a few paragraphs below.)

The fact that similarity judgments are not always symmetrical is more problematic, though even it can be explained without too much deviation from normative rationality. Why is North Korea more like China than China is like North Korea? Well, we know more about China; we have more features to compare. So while contemplating North Korea might just yield a few traits like "nation in Asia", "Communist dictatorship", "has nuclear weapons" (all of which are shared by China), thinking of China yields many more features we know about, like "invented paper money", "has used movable type for centuries", "has one of the world's largest economies" and "has over ten thousand written characters", which are not shared by North Korea. In our minds, North Korea is something like a proper subset of China; most things North Korea has are also had by China, but most things had by China are not had by North Korea. The only real deviation from normative rationality is the fact that we aren't comparing across a complete (or even consistent) set of features; if we were, we'd find that the results were symmetrical.

Another false illusion is MPP's worry that typicality judgments are somehow problematic, as though it's weird to say that a penguin is "less of a bird" than a sparrow or a chicken is "less of a dinosaur" than a tyrannosaurus.
No, of course that makes sense; indeed, the entire concept of evolution hinges upon the fact that one can be a bit more bird-like or a bit less saurian or a bit more mammalian. These categories are fuzzy, they do blend into one another, and if they did not, we could not explain how all life descends from a common ancestor. The mistake here is in thinking that concepts should have hard-edged definitions; the universe is not made of such things. It's a bit weirder that people say 4 is "more of an even number" than 2842, since even numbers do have a strict hard-edged definition; but obviously you're going to encounter 4 a good deal more often, so in that sense it's a better example.
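And here is the promised numerical check of the bounds claim, a minimal Python sketch; the two population breakdowns are exactly the ones from the doctors example above, with group labels invented for readability:

    # Two populations, each mapping (sex, nationality, eye color) to a
    # fraction of doctors. Both satisfy the stated constraints:
    #   60% of doctors are blue-eyed men, 50% are blue-eyed Americans.
    pop_max = {("man", "US", "blue"): 0.50,
               ("man", "DE", "blue"): 0.10,
               ("man", "US", "brown"): 0.40}
    pop_min = {("man", "US", "blue"): 0.10,
               ("man", "DE", "blue"): 0.50,
               ("woman", "US", "blue"): 0.40}

    def share(pop, pred):
        """Total fraction of doctors in groups matching the predicate."""
        return sum(f for g, f in pop.items() if pred(g))

    for name, pop in (("max", pop_max), ("min", pop_min)):
        blue_men = share(pop, lambda g: g[0] == "man" and g[2] == "blue")
        blue_us = share(pop, lambda g: g[1] == "US" and g[2] == "blue")
        us_men = share(pop, lambda g: g[0] == "man" and g[1] == "US")
        print(name, blue_men, blue_us, us_men)
    # max 0.6 0.5 0.9  -- American men can be 90% of doctors...
    # min 0.6 0.5 0.1  -- ...or just 10%, under the same constraints.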
Worst of all, MPP makes a couple of errors, one of which is offhand enough to be forgiven, but the other of which is absolutely egregious—to the point of itself being a cognitive illusion. The minor error is on page 130: "A sheet of tickets that give us a 99 percent chance of winning will be preferred to a more expensive sheet that offers a 999 out of 1000 chance." He implies that this is wrong; but in some cases it's actually completely sensible. Suppose the cheap ticket costs $1.00 and the expensive ticket costs $50.00; suppose the prize is $500. Then the expected earnings for the cheap ticket are 0.99*500 - 1 = $494, while the expected earnings for the expensive ticket are 0.999*500 - 50 = $449.50. It does depend on the exact prices and the size of the prize; if you are risk-neutral and the prize is $10,000, you should be willing to pay up to $90 for the extra 0.009 chance.

Then again, if you're poor enough that it makes sense to be risk-averse for $10,000 (hint: you probably are not this poor, actually! If you think you are, that may be a cognitive illusion), then you might still not want to take it. Suppose your total wealth is $1,000, so $10,000 is a huge increase in your wealth and $50 is a significant cost. Even then, you should probably buy the expensive ticket. If utility of wealth is logarithmic, these are your expected utilities. Keep the money: log(1000) = 3. Cheap ticket: 0.99*log(11000) + 0.01*log(999) = 4.03. Expensive ticket: 0.999*log(11000) + 0.001*log(950) = 4.04. (A quick check of these numbers appears below.) I actually think utility of wealth is less than logarithmic, so maybe you don't want to buy the expensive ticket; but it's at least not hard to contrive a scenario where you would. So maybe MPP really just meant to imply that people are risk-averse even when they shouldn't be, or something like that. Like I said, this error is minor.

There's another place where I would consider it an error, but some economists would agree with him. He says that it is irrational not to always defect in a Prisoner's Dilemma, because you'd defect if they defected and defect if they didn't defect. Then, he applies the sure-thing principle, and concludes you should defect. But that's not how I see it at all. Yes, if they defect, you should defect; protect yourself against being exploited. But if they cooperate... why not cooperate? You don't get as much gain for yourself, but you're also not exploiting the other player. How important is it to you to be a good person? To not hurt others? If these things matter to you at all, then it's not at all obvious that you should defect.
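Backing up to the lottery arithmetic for a moment, the reviewer's numbers check out. A minimal Python sketch (base-10 logs; the reviewer rounds the winning wealth to $11,000 while the sketch uses the exact figures, and the results agree to two decimals):

    from math import log10

    wealth, prize = 1000, 10000

    def expected_log_utility(p_win, ticket_cost):
        """Expected log10 utility of final wealth for a lottery ticket."""
        u_win = log10(wealth - ticket_cost + prize)
        u_lose = log10(wealth - ticket_cost)
        return p_win * u_win + (1 - p_win) * u_lose

    print(round(log10(wealth), 2))                    # 3.0   keep the money
    print(round(expected_log_utility(0.99, 1), 2))    # 4.03  cheap ticket
    print(round(expected_log_utility(0.999, 50), 2))  # 4.04  expensive ticket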
MPP makes another error, however, that is much larger and by no means controversial. On page 79, he writes: "If a mother's eyes are blue, what is the probability of her daughter having blue eyes? What is the probability of a mother having blue eyes, if her daughter has blue eyes? Repeated tests show that most of us assign a higher probability to the first than the second. But this is a mistake. A statistical correlation should be a two-way affair; it should be symmetrical." Now, as it turns out, these two probabilities in particular are equal, because the human population is large and well-mixed and as such the base rate of blue eyes doesn't vary much between generations. But as a general principle, such probabilities most certainly are not symmetrical, and indeed, the whole point of Bayes' Law is that they are not. (Thus, I must wonder if MPP's poor explanation of Bayes' Law isn't just a poor explanation, but actually reflects a poor understanding.)

Suppose I drive a Ford Focus (as I do). Now suppose that someone somewhere is run over by a car (as is surely happening somewhere today). The probability that the car that ran them over is a Ford Focus, given that I ran them over, is very high (virtually 100%); but the probability that I ran over them, given that the car that ran them over is a Ford Focus, is far, far smaller (perhaps 0.1%). The mere fact that it was a Ford Focus that caused the injury is nowhere near sufficient evidence to conclude that I did it, for there are thousands of other Ford Focus cars on the road. But if you knew that I had done it, you'd be wise to bet that I did it in a Ford Focus, because that is what I drive.

So MPP is simply wrong about this, and his error is fundamental. It's actually called the Prosecutor's Fallacy or the Fallacy of the Converse. It's one of the most common and most important cognitive illusions, in fact. Now, correlations are actually symmetrical, but this didn't ask for a correlation, it asked for a conditional probability. If MPP doesn't understand the difference, that's even more worrisome. You can't even compute a correlation on this data, because it's categorical; your eyes are either blue or they aren't, they can't be 42% blue. Correlations are for ratio data; you could ask what the correlation is between a mother's height and her daughter's height, and that would be symmetrical. That isn't what we asked here though.
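To put numbers on the Ford Focus example: a minimal sketch, with invented figures (say 10,000 Ford Focuses among a million cars on the road):

    # Prosecutor's Fallacy: P(evidence | suspect) != P(suspect | evidence).
    n_cars = 1_000_000   # cars on the road (invented figure)
    n_focus = 10_000     # Ford Focuses among them (invented figure)

    p_focus_given_me = 1.0        # if I ran them over, it was my Focus
    p_me = 1 / n_cars             # prior: each car equally likely a priori
    p_focus = n_focus / n_cars    # base rate of Ford Focuses

    # Bayes' Law: P(me | Focus) = P(Focus | me) * P(me) / P(Focus)
    p_me_given_focus = p_focus_given_me * p_me / p_focus
    print(round(p_me_given_focus, 6))  # 0.0001 -- one chance in ten thousand,
                                       # nowhere near the converse's "virtually 100%"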
In all, Inevitable Illusions isn't too bad. It may be worth reading simply as a quick introduction. But if you really want to understand cognitive economics, read Kahneman instead.
Most people are familiar with the term "optical illusion". One well-known example is the picture of two equally long lines, one with arrowheads at the ends turned inward and the other with arrowheads turned outward. The arrowheads make the lines appear to be of different lengths. They look something like this:
<-------> >-------<
However, most people are NOT aware that there are similar mental illusions that affect how we make decisions. This book describes what researchers have found in this field over the last few decades, and it is a very interesting read.
For example, there is an effect called framing, which means that the way a question or a problem is phrased has a large impact on how we answer it. In an experiment, doctors were told that when using a certain medical procedure, the probability that the patient is alive two years later is 93%. Another group of doctors were told that with another procedure there was a 7% chance of the patient dying within two years. Both groups of doctors were asked whether they would recommend the procedure or not. Significantly more doctors would recommend the procedure as stated in the first case than in the second, even though the two cases are identical! This shows how powerful the framing effect is.
Another example: A wheel is spun, giving a number from 0 to 100. After seeing the number, people are asked to estimate the percentage of African nations that are part of the UN. If the number on the wheel was high, people give a high estimate of the percentage; if it was low, they give a low estimate, even though people know that the number on the wheel has nothing to do with the actual percentage. This mental illusion is known as anchoring.
There are many more mental illusions discussed in the book, and there are lots of entertaining (and revealing) examples. I found the book very interesting and informative, and it has made me look out for mental illusions in my own decision making. It is also interesting to note that it doesn't always help to be aware of a certain illusion - you can still be fooled by it. This is analogous to how the lines above still seem to be of different lengths even though we know that they are not.
My one criticism of the book is that the language is a little bit difficult and sometimes it doesn't flow as well as it could. But this is a minor problem. Also, there is a similar book that concentrates on mental illusions when it comes to money. It is called "Why Smart People Make Big Money Mistakes" by Belsky and Gilovich, and is also highly recommended, even though a lot of the material they cover is the same as in this book.
L'illusione di sapere (the book's Italian title) is a disconcerting survey of the colossal errors we make when taking decisions. These tunnels are universal, systematic, independent of our emotional state of the moment, entirely unconscious, and they influence our choices in the most varied fields.
The "illusion of knowing" is a well-documented phenomenon in cognitive psychology and decision research. There are several reasons why people fall into this cognitive trap.
One of the main factors is our tendency to look for confirmation of our pre-existing beliefs instead of checking whether those beliefs are actually supported by the data. This phenomenon is known as "confirmation bias" and occurs when we seek out information that supports what we already believe while ignoring or minimizing information that contradicts our convictions.
Moreover, we often feel sure of our knowledge even when it is incomplete or based on incorrect information. This can happen when we feel confident that we have a good grasp of a subject when in reality our knowledge is only superficial or fragmentary.
Another factor contributing to the illusion of knowing is our tendency to believe simple, linear narratives even when reality is far more complex. In general, human beings tend to look for simple explanations and to reduce the complexity of the world, even when that simplification leads to a distorted understanding of reality.
To avoid the illusion of knowing, it is important to seek complete and reliable information, to keep an open and flexible mind, and to strive to understand the complexity of the world around us. It is also important to be aware of our biases and pre-existing beliefs, and to examine them critically to test their validity.
-------
Massimo Piattelli Palmarini (Rome, 29 April 1942) is an Italian biochemist and linguist. Trained in physics, he has worked on biochemistry, philosophy of science, cognitive science, and linguistics.
I found the writing quality in this book very poor. It's overly verbose and the use of English is rather uninspired.
The organization of the material is not much better. The order in which he presents his material is rather haphazard, and he'll constantly say things like "which I will explain later", "we will see that ...", "I will show that ...", etc., as well as delay explanations in a very artificial way so we'll have plenty of time to be wowed by the "illusions" (what he calls mental tunnels, i.e. inevitable traps of our flawed intuition). I was left rather unimpressed.
Some of the categories are not very clearly or scientifically defined but are talked about in a frustratingly vague way (e.g. "magical thinking", my favorite), while others are harped on forever (our intuition about probability). He dedicates a lot of time to Bayes' theorem (without giving the actual math) and to how it is the most wonderful thing ever, as if its conferred rationality could lead us into the center of the sun. I mean, OK, Bayes' theorem is great and all, but come on.
Another annoyance: several times he name-dropped studies as evidence for a cognitive illusion he was introducing, but instead of attempting to explain the study and why the given conclusions could be drawn, he would go on and on about how complicated it was to show this and how meticulous the researchers were (in one case going so far as to use "byzantine" as if that word conveyed positive connotations of genius rather than convolution), as if to say the simple-minded general public wouldn't understand such things, so he won't bother to waste words on them. I wanted to scream, "but you've already wasted my time by talking it up so much!"
The book also features the most muddled explanation(s) of the Monty Hall problem that I've ever read (which he refers to as the "Grand Finale" of his illusions). I'd say a lot of the other material is much cooler and more mind-blowing.
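For what it's worth, the Monty Hall problem needs no muddle; a short Monte Carlo sketch (assuming the standard rules: the host always opens a losing door the player didn't pick) settles it:

    import random

    def monty_hall_trial(switch):
        """One round of Monty Hall; returns True if the player wins the prize."""
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Host opens a door that is neither the player's pick nor the prize.
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:  # switch to the one remaining closed door
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == prize

    trials = 100_000
    for switch in (False, True):
        wins = sum(monty_hall_trial(switch) for _ in range(trials))
        print(f"switch={switch}: win rate {wins / trials:.3f}")
    # Staying wins about 1/3 of the time; switching wins about 2/3.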
OK, and with that harsh (and clearly emotional) rant out of my system, I have to say I really did like some of the illusions. I love the subject-matter itself. This book is a very good idea, just not the best execution.
This was a great book! Whether or not you have experience with statistics (mind you, it isn't a book about statistics), it'll make you think. It concerns what Massimo calls mental tunnels (& what I think cognitive science officially calls them too, though I'm not sure): how we make decisions, good & bad, & how we often make them in an irrational manner. It has a lot to do with probability & demonstrates that a lot of the ways we think about things, while they often seem sound initially or on the surface, genuinely aren't. Some examples are a little difficult to wrestle with & others are ridiculously simple in hindsight after things become more obvious. Definitely recommend it for anyone interested in the way the mind works, logic &/or probability.
"Rationality is not just 'a' faculty we possess; it is not a spontaneous characteristic of our species. What is proper to our species is our capacity to discover on our own certain striking internal contradictions & to refute them. It is also part of our capacity as humans that we possess the basic elements from which we can construct & refine rational thought. Thus, from what we have been exploring here is nothing to lead us to either pessimism or optimism. Using our reason means being straightforwardly realistic. We seek to recognize our limits, to understand the geography of our minds, to elaborate normative theories of rationality, & to improve our judgments-- in the light of these theories, & employing a better awareness of our natural limitations."
The concepts in this book are illuminating and intriguing. It is dedicated to Tversky and Kahneman and, written in the 90's, it is a great primer on illusions of thinking. It's translated from the Italian and sometimes it's bogged down by a little too much jargon -- not sure if this is an issue with the translation or the author's style. Regardless, when the author gives examples of what he calls "tunnels", or acquired human biases, the explanations are fascinating. For example, he looks at the Monty Hall problem and parallels it with Martin Gardner's three prisoners problem.
This book, in conjunction with Kahneman's "Thinking, Fast and Slow", gives a comprehensive survey of the experimental bases for defining some of the heuristic fallacies in our thinking.
BTW, the bibliography is quite extensive, if not up to date.
This book can be a good introduction to decision theory, the study (and improvement) of human decision making.
The book presents a collection of systematic mistakes that most people make, when they intuitively evaluate evidence, assess probabilities, make predictions, or take risks. Most of the book is dedicated to the research of Nobel laureate Daniel Kahneman and his colleague Amos Tversky.
Warning: Because it is written for a lay audience, the content can be misleading and scientifically inaccurate. If you want to really understand the works of Amos Tversky and Daniel Kahneman you'd better skip this book.
Annoyingly enough, I started reading this during a bout of insomnia. The reason that's annoying is that I found myself reaching for the pen and paper that were nowhere to be found, to try to work out how my brain fell into the same thinking traps as everybody else's.
It's a fascinating read, although it does take a little bit of work to get into owing to the translation from Italian to English. Interesting stuff though, and provides good solid answers to questions such as "How many people need to be in a room before there is a better than even chance that two of them share a birthday?" (The answer, by the way, is surprisingly low!)
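If you want to check that birthday claim yourself, it takes only a few lines; here is a minimal sketch assuming 365 equally likely birthdays and ignoring leap years:

    # Probability that, among n people, at least two share a birthday.
    def p_shared_birthday(n):
        p_all_distinct = 1.0
        for k in range(n):
            p_all_distinct *= (365 - k) / 365
        return 1 - p_all_distinct

    # Smallest room size giving a better-than-even chance of a match.
    n = 1
    while p_shared_birthday(n) <= 0.5:
        n += 1
    print(n, round(p_shared_birthday(n), 3))   # 23 0.507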
Reason is a wonderful thing, but we were designed by natural selection to survive and not by Intel to compute. There are ways in which human brains are consistently fooled. In some circumstances we are doomed to act in undeniably silly and suboptimal ways. This book gives clear illustrations of some of these cognitive blind spots through puzzles and games.
This is a subject that I read a lot about, and this is a pretty advanced, technical book. It was therefore really dry and took me a while to get through. I would recommend it only to those very interested in the subject. It is not a good starter book. If you haven't yet read "How We Know What Isn't So", then you shouldn't read this or any book on the topic.
My first-ever foray into the world of cognitive psychology. I've read the book every year since it was published nearly 20 years ago: its messages are as thought-provoking - and perhaps just as troubling - as they were in 1994. Do yourself a favour and buy the book - you'll want to read it again and again.
I enjoyed this book as a quick refresher, though Kahneman's "Thinking, Fast and Slow" was much better. I have to knock down one star when I consider that there is a rushed quality to the book, and that this quality degrades clarity and precision on a topic that calls for it.