Over the years, some very smart people have thought they understood the rules of chance - only to fail dismally. Whether you call it probability, risk, or uncertainty, the workings of chance often defy common sense. Fortunately, advances in math and science have revealed the laws of chance, and understanding those laws can help in your everyday life.
In Chancing It, award-winning scientist and writer Robert Matthews shows how to understand the laws of probability and use them to your advantage. He gives you access to some of the most potent intellectual tools ever developed and explains how to use them to guide your judgments and decisions. By the end of the book, you will know:
* How to understand and even predict coincidences
* When an insurance policy is worth having
* Why "expert" predictions are often misleading
* How to tell when a scientific claim is a breakthrough or baloney
* When it makes sense to place a bet on anything from sports to stock markets
A groundbreaking introduction to the power of probability, Chancing It will sharpen your decision-making and maximize your luck.
Robert Matthews is a visiting lecturer in science at Aston University, in Birmingham, England. He has published pioneering research in fields ranging from code-breaking to predicting coincidences, and won the internationally renowned Ig Nobel Prize in Physics for his studies of Murphy’s Law, including the reason why toast so often lands butter-side down. An award-winning science writer, he has contributed to many newspapers and magazines world-wide, and is currently science consultant for Focus magazine, a flagship BBC publication.
This book is an excellent layman's introduction to the practice of basic statistics, and does for that subject what "The Drunkard's Walk" did for elementary probability: in each case the reader gets a mature, non-dumbed-down exposition. There are obvious limits to how much functional knowledge one can convey without the frequent use of mathematical symbols and formalism; there's only so much one can unpack in English from such information-dense subjects. That said, I'm satisfied that a person coming totally fresh to these areas, reading this book or Drunkard, will get enough functional knowledge to intelligently pursue further studies, with some guidance on appropriate textbooks, if they so choose.
In many ways the two books pair well, especially in their historical viewpoint. "The Drunkard's Walk" takes the reader from antiquity to roughly the 1500s-1700s, with a bit from the 1800s, covering the probability of that era, which had a distinctly combinatorial and algebraic flavor, those being the main tools of the day for mathematicians. That book emphasizes the "bins and buckets" reasoning of basic probability, where one enumerates objects and takes the quotient of the count of certain types of enumerations over all of them (or finds clever ways to avoid accounting for all enumerations in closed form) to get the answer.
This book takes the next step and asks what one can do with these counting schemes when faced with more sophisticated games of chance. There is some historical overlap: Drunkard's Walk technically ends in the 1800s, in the time of Laplace and Gauss, and even covers (to my recollection) the method of least squares, which serves as the foundation of regression techniques, though that part was covered only fleetingly. This book also briefly covers least-squares techniques, but it is much more focused on taking tools and concepts like regression and expectation and applying them to modern games of chance (and skill), the so-called "casino" games, from cards to roulette, as well as to problems of insurance and other practical concerns.
Here the book surprised me: not only does it go over the expected material on card games and roulette, it also introduces the notion of utility within the first few chapters. As with the rest of the coverage, the material is introduced historically, tracing the motivation back to Pascal, who was the first to come up with the weighted-expectation concept that lets one judge what is a good (or bad) bet given probabilities and currency, and showing why this naive concept is deficient in the general case (namely, that the 'value' of currency is relative). The book doubly surprised me when the author next introduced log-utility and explained why taking logarithms matters for certain risk and statistics questions. Although the book doesn't take the final step of introducing one of log-utility's best applications, the Kelly criterion in the context of martingale processes, one can easily grasp the basics of that material from a functional standpoint after reading what is here.
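The contrast between Pascal's weighted expectation and log-utility can be sketched in a few lines. The bet below (pay 10 units for an even chance at 25) is my own illustrative example, not one from the book:

```python
import math

# Hypothetical bet: pay 10 units for a 50% chance of winning 25.
stake, payout, p = 10.0, 25.0, 0.5

# Pascal-style weighted expectation: average monetary gain per play.
expected_gain = p * (payout - stake) + (1 - p) * (-stake)
print(expected_gain)  # 2.5 -> a "good" bet by naive expectation

# Log-utility view: the worth of the same bet depends on current wealth,
# because utility is taken as log(wealth) rather than wealth itself.
def expected_log_utility_gain(wealth):
    win = math.log(wealth - stake + payout)
    lose = math.log(wealth - stake)
    return p * win + (1 - p) * lose - math.log(wealth)

print(expected_log_utility_gain(1000.0))  # slightly positive: worth taking
print(expected_log_utility_gain(11.0))    # negative: same bet, but ruinous
```

The second function captures the point that the 'value' of currency is relative: a bet with positive expectation can still be a bad idea for a player who would risk nearly everything on it.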
The other topic that was covered well was testing, which involves simple applications of Bayes's formula. These were often explained in the context of medical scenarios, which served the material well. An interesting fact I learned from the text is that the more modern, data-centric version of Bayes's formula, the one that integrates the notion of 'belief' via priors, originates with Turing. The book makes great use of these concepts in its last quarter, where it goes through a standard 'statistical process', from choosing a distribution, to estimating its parameters, to making inferences (all conceptually), and provides great exposition of why each step is taken. And as a last surprise, the book actually covers extreme-value theory, which is somewhat esoteric for an introductory text (much less a layman's introduction), yet its explanation is so clear that I suspect it will be accessible to nearly everyone.
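The medical-testing application of Bayes's formula can be made concrete with a quick calculation. The numbers below are illustrative assumptions, not figures from the book:

```python
# Hypothetical screening test for a rare condition.
prior = 0.01        # 1% of the population has the condition
sensitivity = 0.95  # P(positive | disease)
false_pos = 0.05    # P(positive | no disease)

# Bayes's formula: P(disease | positive test)
p_positive = prior * sensitivity + (1 - prior) * false_pos
posterior = prior * sensitivity / p_positive
print(round(posterior, 3))  # 0.161
```

Even with a test that is right 95% of the time, a positive result here means only about a 16% chance of disease, because the low prior dominates. That base-rate effect is exactly what these medical scenarios are used to demonstrate.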
This book is a real gem among layman's texts on statistics that actually try to teach the reader something. Along with "The Drunkard's Walk" and possibly "The Perfect Bet", these three books make a great conceptual exploration of statistical reasoning, and could definitely serve well as companion reading for a formal introduction to statistics. Highly recommended.
First, I liked the book. I listened to the audiobook, and the narrator was excellent. My favorite parts were the chapters on finance, economics, and games of chance. Many of the details and formulas were difficult for me to follow on audio; perhaps it was just me. I recently listened to both "Fooled by Randomness" and "The Drunkard's Walk" and anticipated this would be as easy to follow, but it wasn't. I have the printed version and will read it this year, so perhaps my rating will go up; in any case, I do recommend it for my nerdy friends.
RM aims to show that an understanding of probability, but mostly of statistics, can enhance decision-making for organisations.
The book starts with the basics of probability (law of large numbers, randomness, independence, normal distributions), before moving onto discussions of statistical significance, Bayesian statistics and fat-tailed distributions that I found quite enlightening.
It was good to read alongside Ritchie's Science Fictions, and in many ways it was superior: whereas Ritchie took it as a given that frequentist statistical methods are well-meaning but misapplied, RM argues that there are many circumstances where their application will never be appropriate (e.g. big data, epidemiological studies).
The discussion of Bayesian statistics and the financial crisis was quite weak, in that it didn't take the City's position seriously and restricted itself to attacking out-of-context quotes.
Not the worst book, a decent introduction to a few areas. Though I can't say I'm a particular fan of seeing an Oxbridge-educated academic refer to people who frequent gambling establishments as 'deadbeats'.
If you pay attention to science, especially medical science, you know there’s a crisis among researchers — many of the most interesting studies cannot be replicated. Why? Matthews, in this super-entertaining analysis of statistics and probability, constructs a credible argument that it’s because the standard for statistical significance is hopelessly flawed … and that even the guy who first proposed using a “p” score for statistical validity knew it and warned against its pitfalls. While he’s at it, Matthews also takes a wrecking ball to any non-Bayesian thinking, data mining, the Normal or bell curve, regression analysis … it’s no wonder our scientific research is so deeply flawed. Even with all that, the book is extremely entertaining, with many examples, occasional wit and just enough math (or as the Brit Matthews says, “maths”) to help you understand what he’s talking about. The guy is a legitimately great science writer and I blew through this book, learning a ton along the way. Highly recommended.
The first 75 pages and overall small size made me think this would be relatively light on explaining theories and heavier on how to make them work for you, but it was the opposite, which quickly turned into the worst aspects of math class. That, and if you want to write a big brain book, how about you also make some big brain moves and get an editor, or barring that, learn how to spell. Judgment only has one e.
Basic statistics wrapped in stories. It's still a nice piece of work for readers with little statistical background. The part that most inspired me was its narration of the origins of each theory and its logic. It helped me see why some statistical terms I rarely understood in class are named or applied the way they are. This makes me wonder whether theories should come before examples, or the other way around.
Interesting book if you want to know about what statistics and games of chance and odds and average and deviations are. I did, and I still do. The book confirmed some of my thoughts on these matters and opened my eyes to others.
It reads a bit like an interesting textbook; think Carl Sagan and his book Contact, or Jacob Bronowski and The Ascent of Man.
This book describes many of the pitfalls of statistical techniques we use frequently without regard to the underlying assumptions that went into their development. The author demolishes our beloved bell-shaped curve, the fallback that we apply everywhere!
Some ideas from the book:
* The human mind is a pattern-seeking machine, and it's very good at finding patterns, even when they're not really there.
* Chance is not the enemy of order; it's the very foundation of order.
* The laws of chance are not just about predicting the future; they're about understanding the present.
I am too dumb for this book, and it’s already in layman’s terms 🥲 This took me three months, and I’m not even sure if I quite got it. This was hard to follow compared to Taleb’s books. I’m sticking with him instead lol.
For some twenty years I worked in the casino industry as a table games dealer; this should mean that I am immune to being fooled by any proposition that looks too good to be true. However, like just about everyone else, what I think I know about statistics and probability is pretty much bunk. This book sets out to explain statistics and probability, and leaves one feeling somewhat wiser and educated.
First off, the author is English, which is not a problem, unless one hates references to Celsius and the metric system (not to mention pounds as opposed to dollars). Among other things, the book tells us that the Bell Curve is a marvelous tool for determining the distribution of things that are not affected by other things in the set (like flipping a coin), but that it does not work well with things that are so affected. This would seem to be self-evident, but the Bell Curve was (and is) used in the social sciences. Also, the Median and the Mean are not the same thing, and the Median actually works better with skewed distributions. (If forty people make $30k a year, and one person makes $300k a year, the Mean is about $36.5k, while the Median is $30k.) The book also teaches one to be wary of medical breakthroughs that aren't statistically significant, of correlations that are not causations (the more movies Nicolas Cage makes, the more people die in American swimming pools), and especially of using (or misusing) statistical methods to predict the stock market.
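The salary example is easy to check. A minimal sketch, using the review's own numbers:

```python
import statistics

# Forty people on $30k a year, one person on $300k.
salaries = [30_000] * 40 + [300_000]

print(statistics.mean(salaries))    # ~36,585 -> the quoted ~$36.5k
print(statistics.median(salaries))  # 30,000  -> the "typical" salary
```

One outlier drags the mean up by more than $6k while the median, the middle value of the sorted list, stays put, which is exactly why the median is the better summary for skewed distributions.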
The book goes into great detail on some of the math involved, but one does not have to be a math major to understand the book. (People used to assume that I was good at math to be a Table Games Dealer; I would respond that if you know how to count to twenty-one, you could deal Blackjack.) The book leaves one with the conclusion that the messier real life becomes, the harder it is to use statistical methods to make sense of it all.
Every honest researcher should read this. This important work explains, in a semi-rigorous way, concepts from statistical inference. If one were to focus on only one chapter (it is a long book), it should be "Turing meet Dr Bayes". It is no coincidence that Bayesian methodologies only began to take off when computing was in its infancy.
I had never heard of the mathematical community rejecting Bayesian methods on the grounds that the starting point (the prior) is often unknowable. He seems to be getting at the idea of `Artificial Intelligence meet Bayesian simulations`. Further, his reporting on the rejection of the "confidence interval" calls into question the whole concept of z-scores, something he quite likely doesn't say directly, since that would indict an entire field such as psychology.
What struck me was how, in Bayesian reasoning, we work backwards from the known state (current conditions) using something called the `likelihood ratio`. In other words, 'when conditions change, what would you do?' becomes a finite iterative sequence, modified by improvements to the likelihood ratio, rather than the (seemingly) more complicated Bayesian formula, which he credits more to Laplace.
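That iterative likelihood-ratio update is the odds form of Bayes's rule: each new piece of evidence multiplies the current odds by its likelihood ratio. A minimal sketch with made-up numbers (none of these figures come from the book):

```python
# Odds form of Bayes's rule: posterior odds = prior odds * likelihood ratio,
# applied once per independent piece of evidence.
def update_odds(odds, likelihood_ratio):
    return odds * likelihood_ratio

odds = 1 / 99          # prior odds corresponding to a 1% probability
for lr in [8.0, 8.0]:  # two observations, each 8x likelier under the
                       # hypothesis than under its negation
    odds = update_odds(odds, lr)

probability = odds / (1 + odds)  # convert odds back to a probability
print(round(probability, 3))     # 0.393
```

Two modest pieces of evidence lift a 1% hypothesis to roughly 39%, and each update is a single multiplication, which is why this form suited Turing's hand-cranked (and later machine-assisted) calculations.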
Probability is literally everywhere in our everyday lives, and yet to many people it is also one of the most commonly misunderstood or underrated topics. Often witty and sometimes insightful, the book offers an interesting look at how people can be fooled by randomness, especially eye-catching outlier events, confused by probability statistics (e.g. chances of rain, likelihood of getting cancer), and tricked by dodgy "scientific" findings based on questionable methodologies or misuse of p-values. Even more captivating is the author's lively explanation of Bayesian statistics and its under-appreciated power in helping us obtain a more reliable probability of events. The last few chapters focus on how the normal distribution is actually not that normal in the real world, and on how fully capturing all the uncertainties in finance or the economy with complex models is most likely a fool's errand, which the 2008 Financial Crisis made abundantly clear.
A decent enough popular mathematics book. Much of what is touched upon is how most people think probability works, how it actually works, and the dangerous gap between expectation and outcome. It's not quite as playful in tone as Ian Stewart or 'The Secrets of Mental Math'. Each statistical theory is explained in a short, digestible chapter with an anecdotal story to illustrate it. At the end of each chapter there is a brief summary to make sure the meaning is clear. Perhaps it won't change the way people think, but for the most part the book does what it sets out to do.
If you studied statistics in passing, at a high-school level or perhaps as one of the core early-year subjects in an undergraduate course, then this book is for you. Some basic understanding of statistical models is required. The author will then carry you away with his enthusiasm for the wonders and dangers of everyday applied statistics as they are so often mentioned in popular media and news reports.
Among many interesting things, the GFT (Google Flu Trends) and the Netflix optimizing algorithm stories are highlighted... the first shows a failure of "Big Data", and the second shows a success -- which, however, couldn't be implemented because of complexity!
I enjoyed this book. It covers different views of chance through the lens of mathematical formulas of probability and various historical examples, which, if nothing else, should make us skeptical of experts and aware of the limitations we humans have in assessing certainty of any kind.
Great introduction to Bayes. Good exposé of statistical 'significance' and of other seemingly infallible statistical tools like the 'Normal Distribution' and regression methods.