This was a book that I initially thought was amusing rather than fantastic, but I eventually came around to thinking it was pretty good. Overall, Gerd Gigerenzer talks about how we often make decisions based on a misunderstanding of the data, and argues that we need to do a better job of becoming what he calls “risk literate.”
In the first part, he talks about how people internalize a risk when many deaths happen at once (like the 9/11 attacks), even though the probability of such a catastrophe is low, while overlooking risks that are higher probability but lower severity (like individual car accidents, which are far more frequent but kill fewer people in each event). People often act suboptimally when faced with these kinds of tradeoffs. “If reason conflicts with a strong emotion, don’t try to argue. Enlist a conflicting and stronger emotion.” (12) We often overestimate the probability of catastrophe, sometimes with bad consequences. “How many miles would you have to drive by car until the risk of dying is the same as in a nonstop flight?...the best estimate is twelve miles.” (13)
He then goes on to talk about the differences between uncertainty and risk. “In a world of known risk, everything, including the probabilities, is known for certain. Here, statistical thinking and logic are sufficient to make good decisions. In an uncertain world, not everything is known, and one cannot calculate the best option. Here, good rules of thumb and intuition are also required.” (24) We are often given a false sense of certainty by complicated models that may not do a very good job of predicting anything. He mentions a rule of thumb pilots can use to judge whether they can make a runway: “Fix your gaze on the tower: If the tower rises in your windshield, you won’t make it.” (28) He also talks about the turkey illusion, where we base our perception of the future on known risk (previous observations of being fed) rather than on uncertainty (someone “preparing” for Thanksgiving).
The next section discusses defensive decision making. He talks about how pilots use checklists but surgeons don’t, often to the peril of their patients. “Defensive decision making: A person or group ranks option A as the best for the situation, but chooses an inferior option B to protect itself in case something goes wrong.” (56) People often use a rule of thumb like hiring a recognized company for a job rather than the one they actually think is best, because it will be more defensible if something goes wrong. Some types of defensive medicine he mentions include superfluous tests, more prescriptions than necessary, unnecessary referrals to specialists, and invasive procedures to confirm a diagnosis. (59) A remedy he offers: “Don’t ask your doctors what they recommend to you, ask them what they would do if it were their mother, brother, or child.” (63)
Another interesting section was on minding your money. He notes that the predictions of financial experts are typically not all that great. The alternative he proposes is much simpler: “Allocate your money equally to each of N funds.” (93) He advocates this over a mean-variance optimized portfolio. “I have convinced myself that simple is better. But here’s my problem. How do I explain this to my customers? They might say, I can do that myself.” (95) This is true, but wouldn’t it be better to just keep 1/N as one of the candidate options when picking the best model? I would argue against adding unnecessary complexity, but an advisor is simply doing a poor job if they pick models based on anything other than performance adjusted for other important factors (such as risk). He says that with “high uncertainty, many alternatives, and small amount of data” we should keep a model as simple as possible; conversely, with “low uncertainty, few alternatives, and a high amount of data” we can make a model more complex. (96) He advocates allocating savings equally across stocks, bonds, and real estate. (105) That certainly is simple, but it is likely not the best risk profile for many people.
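As a rough illustration, here is a minimal Python sketch of the 1/N heuristic as I understand it from the book: split the money equally across N asset classes and periodically rebalance back to equal weights. The asset names and dollar amounts below are my own hypothetical examples, not figures from the book.

```python
# Minimal sketch of the 1/N rule: equal weights across N assets, with
# periodic rebalancing back to equal dollar allocations.

def one_over_n_weights(assets):
    """Return equal portfolio weights for each asset (the 1/N rule)."""
    n = len(assets)
    return {asset: 1.0 / n for asset in assets}

def rebalance(holdings):
    """Rebalance current dollar holdings back to equal allocations."""
    total = sum(holdings.values())
    target = total / len(holdings)
    return {asset: target for asset in holdings}

assets = ["stocks", "bonds", "real_estate"]   # the three classes he mentions
print(one_over_n_weights(assets))             # each gets a weight of 1/3

# After a year of unequal (hypothetical) returns, rebalance back to 1/N.
holdings = {"stocks": 11_500.0, "bonds": 10_200.0, "real_estate": 9_800.0}
print(rebalance(holdings))                    # each gets 10,500.0
```

The appeal, as he frames it, is that there are no parameters to estimate from noisy historical data, which is exactly where mean-variance optimization tends to go wrong under high uncertainty.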
The next part talked about gut decisions. I thought he had a little more faith in gut decisions than I do, but he does make some good points. “A gut feeling, or intuition, is a judgment (i) that appears quickly in consciousness, (ii) whose underlying reasons we are not fully aware of, yet (iii) is strong enough to act upon. Having a gut feeling means that one feels what one should do, without being able to explain why. We know more than we can tell. An intuition is neither caprice nor a sixth sense, but a form of unconscious intelligence. By definition, the person cannot know the reasons, and may invent some after the fact if you insist. To take intuition seriously means to respect the fact that it is a form of intelligence that a person cannot express in language. Don’t ask for reasons if someone with a good track record has a bad gut feeling.” (107) One good point he made is that people often, and for good reason, only want to accept evaluations when the evaluator can list the reasons behind them. By definition, you can’t explain a gut feeling. I think the important thing is to look at the track record of the person sharing the gut feeling; I have known people with very reliably good intuition, and others whose intuition is worse than useless.
He goes on to talk about natural frequencies and how much easier they are to understand than conditional probabilities. This makes sense; there is no need to overcomplicate things. He also talks about the illusion of certainty, and how we think we have a better grasp of what is going on with slot machines, for example, than we really do. (136) “If you are highly proficient at a sport, don’t think too long about the next move. If you are a beginner, take your time in deciding what to do.” (137) This goes along with the gut feelings chapter: once you have repeated something many, many times you have muscle memory and intuition, and “overthinking” becomes a real danger. When you don’t have much experience, though, you have not had enough encounters to develop a reliable gut feeling, and need to resort to other methods.
The next part talks about getting to the heart of romance. One idea I thought was interesting was to toss a coin to help with decisions and then not look at the result: which outcome were you hoping for? (149) I also liked the discussion of why middle children get less total attention: if you spread your time equally among the children at home (1/N), the middle child (typically) never has a stretch where they don’t have to split that time with at least one sibling, so N is on average larger for the middle child than for the oldest or youngest. (154)
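To convince myself the arithmetic works, here is a toy Python sketch of that 1/N attention argument. The birth spacing and the years each child lives at home are my own made-up assumptions, not numbers from the book.

```python
# Toy illustration of the 1/N attention argument: three children born two
# years apart, each living at home for 18 years, with parental time split
# equally among whichever children are at home in a given year.

births = {"oldest": 0, "middle": 2, "youngest": 4}   # hypothetical birth years
YEARS_AT_HOME = 18

attention = {child: 0.0 for child in births}
for year in range(births["youngest"] + YEARS_AT_HOME):
    at_home = [c for c, b in births.items() if b <= year < b + YEARS_AT_HOME]
    for child in at_home:
        attention[child] += 1.0 / len(at_home)   # 1/N split among kids at home

print(attention)
# The oldest and youngest each end up with about 7.7 attention-years, the
# middle child with about 6.7, because the middle child never has a year
# at home without at least one sibling also at home.
```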
I thought the parts on medicine were really interesting. One of the more discomforting things in the book was the discussion of the probability that a person has an illness given a positive test result, and how far off many doctors were on this. Regarding breast cancer, “out of ten women who test positive in screening, one has cancer. The other nine women receive false alarms. Yet the 160 gynecologists’ answers, monitored by an interactive voting system offering the four choices above, were all over the map…Only 21 percent of doctors would correctly inform women.” (163) He again promotes the idea of using natural frequencies (that is, how frequently something occurs rather than probabilities or conditional probabilities, so saying something like “6 in 1,000 will die from this disease”). Regarding Down syndrome, “only one out of every six or seven women with a positive result actually has a baby with Down syndrome.” (172)

I really liked the discussion of what he calls the SIC syndrome. “Physicians don’t do the best for their patients because they: 1. Practice defensive medicine (Self-defense) 2. Do not understand health statistics (Innumeracy) 3. Pursue profit instead of virtue (Conflicts of interest).” (178) He talks about how the benefits of tests like MRI and CT scans are discussed with little mention of their drawbacks. These tests get ordered because of the S (if they ran a test, they did all they could) and the C (financial rewards for running tests), not necessarily because they are in the best interest of the patient. CT scans in particular can even be harmful, since accumulated radiation exposure has negative impacts on your health.

I thought the discussion of mortality rates versus 5-year survival rates was interesting as well. He talks about two kinds of bias present in diagnosis. Lead-time bias occurs when a disease is diagnosed much earlier (say, at age 60 rather than 68): the 5-year survival rate looks better simply because the clock starts sooner, even if no larger fraction of people actually lives past age 70. The other is overdiagnosis bias. “Overdiagnosis happens when doctors detect abnormalities that will not cause symptoms or early death. For instance, a patient might correctly be diagnosed with cancer but because the cancer develops so slowly, the patient would never have noticed it in his lifetime.” (188) If the same number of people die from a cancer (the numerator stays fixed) but you dramatically increase the number of people diagnosed with some form of it (the denominator), the survival rate improves dramatically even though no lives were saved. And, as with X-rays, the biopsies done to better diagnose prostate cancer are not without harm. While some tests and screenings are very useful, it sounds like the PSA screening for prostate cancer is not nearly as reliable. One of the takeaways is that it’s important to communicate how reliable a test is, and that not all tests are great. “In addition, almost half of the U.S. doctors falsely believed that detecting more cancers proves that lives are saved. Under the influence of their confusion, they would recommend screening to patients.” (200) He advises looking at mortality rates rather than survival rates, doing your best to compare apples to apples, and he advocates fighting cancer (and other diseases) with prevention rather than screening.
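The natural-frequencies reasoning behind the “one in ten positives has cancer” result is easy to reproduce. The sketch below uses my own round numbers (1% prevalence, 90% sensitivity, 9% false-positive rate), chosen to land near the book’s result rather than taken verbatim from it.

```python
# Natural-frequencies version of the screening calculation: imagine 1,000
# women being screened and count people rather than manipulating
# conditional probabilities.

population = 1000                # 1,000 women screened (hypothetical cohort)
prevalence = 0.01                # 10 of them actually have cancer
sensitivity = 0.90               # 9 of those 10 test positive
false_positive_rate = 0.09       # ~89 of the 990 healthy women also test positive

with_cancer = population * prevalence
true_positives = with_cancer * sensitivity
false_positives = (population - with_cancer) * false_positive_rate

ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} true positives, {false_positives:.0f} false alarms")
print(f"Chance a woman with a positive test has cancer: {ppv:.0%}")   # ~9%
```

Stated this way, “about 9 true positives out of roughly 98 total positives” is much harder to misread than “a 90% sensitive test with a 9% false-positive rate,” which is exactly the point he is making about why so many doctors got it wrong.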
This is obvious (it’s better to be proactive than reactive), but prevention is something I would have to keep up consistently over a long time, whereas screening is something I can do while still not adhering to a strict diet or an otherwise healthy lifestyle. While we spend a ton of money on research for drugs to cure cancer, he suggests that the money would be better spent on education and the promotion of healthy lifestyles, since somewhere between 40 and 50 percent of cancers are essentially due to either smoking or obesity/diet/lack of exercise. (221)
He concludes by talking about how important it is for people to be comfortable with numbers and to understand risks, and how natural frequencies can help with this.
I thought this was a decent book. He is a little more into rules of thumb than I am, but the general message of only adding complexity when it substantially adds to the benefits is a good one. He put more of an emphasis on using gut feelings specifically than on the more general “simple is better” framing I would have preferred, but he still made some good points. The discussion of how much the people you typically encounter actually know about risk and statistics was good as well. Given that I have a decent background in math and statistics, it’s useful to know what other people are and are not familiar with. I also liked his emphasis on properly accounting for the negative impacts of tests. More generally, what are the costs of doing too much testing and delaying decisions? We often frame things as “is this good or bad?” (narrow framing), when we should instead look at them in the context of all available options.
Overall this was pretty good, especially in how it makes you question the analysis you are given and be wary of creative liberties taken with statistics.