
The Failure of Risk Management: Why It's Broken and How to Fix It

An essential guide to the calibrated risk analysis approach, The Failure of Risk Management takes a close look at misused and misapplied basic analysis methods and shows how some of the most popular "risk management" methods are no better than astrology! Using examples from the 2008 credit crisis, natural disasters, outsourcing to China, engineering disasters, and more, Hubbard reveals critical flaws in risk management methods and shows how all of these problems can be fixed. The solutions involve combinations of scientifically proven and frequently used methods from nuclear power, exploratory oil, and other areas of business and government. Finally, Hubbard explains how new forms of collaboration across all industries and government can improve risk management in every field.

Douglas W. Hubbard (Glen Ellyn, IL) is the inventor of Applied Information Economics (AIE) and the author of Wiley's How to Measure Anything: Finding the Value of Intangibles in Business (978-0-470-11012-6), the #1 bestseller in business math on Amazon. He has applied innovative risk assessment and risk management methods in government and corporations since 1994.

"Doug Hubbard, a recognized expert among experts in the field of risk management, covers the entire spectrum of risk management in this invaluable guide. There are specific value-added takeaways in each chapter that are sure to enrich all readers, including IT, business management, students, and academics alike." —Peter Julian, former chief information officer of the New York Metro Transit Authority and president of Alliance Group Consulting

"In his trademark style, Doug asks the tough questions on risk management. A must-read not only for analysts, but also for the executive who is making critical business decisions." —Jim Franklin, VP Enterprise Performance Management and General Manager, Crystal Ball Global Business Unit, Oracle Corporation.

309 pages, Kindle Edition

First published April 1, 2009

112 people are currently reading
915 people want to read

About the author

Douglas W. Hubbard

12 books · 86 followers

Ratings & Reviews

Community Reviews

5 stars: 128 (31%)
4 stars: 171 (41%)
3 stars: 79 (19%)
2 stars: 22 (5%)
1 star: 9 (2%)
Displaying 1 - 30 of 54 reviews
Brahm
594 reviews · 85 followers
May 11, 2022
This is one of the best business books I've read, and I'd recommend it to anyone working for an organization that uses a "risk matrix" (those grids that show frequency on one axis, consequence on another, and are usually brightly-coloured in red, yellow, and green).

Hubbard argues many approaches to risk management are "equivalent to astrology" (p.xv, preface) and that at worst, risk management systems can expose an organization to more risk than they mitigate. Risk matrices have spread from company to company "like a dangerous virus with a long incubation period" (p5) and have even been codified into some laws, and now we're all trapped using these faulty tools.

Nassim Taleb makes the philosophical case for thinking about and avoiding risks; Hubbard delivers the application guide. I think these two authors are very complementary. More on Taleb below.

I used 3 stacks of page tabs on this one... some of the most interesting ideas and best quotes:

p66, on some people's objections to replacing qualitative scales with experts' subjective probability estimates: "Some analysts who had no problem saying a likelihood was a 4 on a scale of 1 to 5 or a medium on a verbal scale will argue there are requirements for quantitative probabilities that make quantification somehow infeasible. Somehow, the problems that were not an issue when using more ambiguous methods are major roadblocks when attempting to state meaningful probabilities." (this is such a great risk management zinger)

p68-72: Senior leadership of organizations should establish a Risk Tolerance Curve (chance on the y axis, loss/risk on the x axis) to unambiguously quantify what losses (at what probabilities) they are willing to accept. By analyzing all risks in the organization, a Loss Exceedance Curve (LEC) can also be generated that visualizes the current (well-quantified) risks. If LEC > RTC, there's a problem. Capital can be deployed to mitigate risks to bring the LEC under the RTC. Easier to prioritize and visualize than a list of things in the "red corner" of the risk matrix.
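The LEC/RTC comparison described above is straightforward to sketch with a Monte Carlo loop. This is only a toy illustration with an invented three-risk register and invented tolerance points, not a model from the book:

```python
import random

random.seed(1)

# Hypothetical risk register (all numbers invented): each entry is the annual
# probability of the event and a (low, high) loss range in $M if it occurs.
risks = [
    (0.25, (0.5, 4.0)),    # frequent, small losses
    (0.10, (2.0, 20.0)),   # occasional, mid-sized losses
    (0.02, (10.0, 80.0)),  # rare, large losses
]

def simulate_year(risks):
    """One Monte Carlo trial: total loss from whichever risks occur this year."""
    total = 0.0
    for p, (lo, hi) in risks:
        if random.random() < p:
            total += random.uniform(lo, hi)
    return total

trials = [simulate_year(risks) for _ in range(100_000)]

def exceedance(threshold):
    """Estimated P(annual loss > threshold): one point on the LEC."""
    return sum(t > threshold for t in trials) / len(trials)

# Hypothetical risk tolerance curve: the board accepts at most a 10% chance
# of losing more than $5M and a 1% chance of losing more than $25M.
for threshold, tolerated in [(5.0, 0.10), (25.0, 0.01)]:
    flag = "OK" if exceedance(threshold) <= tolerated else "over tolerance"
    print(f"P(loss > ${threshold}M) = {exceedance(threshold):.3f} "
          f"(tolerance {tolerated}: {flag})")
```

Any threshold where the simulated exceedance probability sits above the tolerance point is a candidate for mitigation spend.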

p102: Some very well-established organizations have fallen into the risk matrix trap. ISACA in the IT space, PMI in the project management space (everyone with a PMP designation), and NIST in the States.

p110: Some good definitions.
Uncertainty: The lack of complete certainty - that is, the existence of more than one possibility. The "true" outcome, state, result, value is not known.
Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities. For example, there is a 60% chance it will rain tomorrow, and a 40% chance it won't.
Risk: A state of uncertainty where some of the possibilities involve a loss, injury, catastrophe, or other undesirable outcomes.
Measurement of Risk: A set of possibilities with quantified probabilities and quantified losses. For example, "we believe there is a 40% chance the proposed oil well will be dry with a loss of $12M in exploratory drilling costs."

p135: Chapter 7 is all about the limits of expert knowledge and how humans are bad at estimating probabilities. Fortunately, enough research has been done to be able to mitigate these effects. Calibration training can dramatically improve people's abilities to estimate a range of possibilities. I love this quote on p136 to justify calibrating people: "Technicians, scientists, or engineers [...] wouldn't want to use an instrument if they didn't know it was calibrated [...] For managers and analysts, too, we should apply a measure of their past performance at estimating risks. We should know whether those instruments consistently overestimate or underestimate risk."

p164: The concept of ordinal scales: a scale that indicates a relative order of what is being assessed, not actual units of measure. Movie star ratings, for example. Chapter 8, "Worse Than Useless," is all about how arbitrary values set in risk matrix templates can totally skew risk analysis outcomes when you start multiplying frequency into consequence.

p182: "Risk matrices can mistakenly assign higher quantitative ratings to quantitatively smaller risks. For risks with negatively correlated frequencies and severities, they can be 'worse than useless,' leading to worse-than-randomness decisions" - Tony Cox's most-cited paper on risk matrices.

p203: Hubbard has an interesting 6-page call-out of Nassim Taleb. As I said above, I think Hubbard and Taleb are mostly on the same page with each other, and Hubbard has fallen into the trap of having to refute some common misconceptions about risk based on how the "black swan" has entered the wider vernacular. I think Hubbard misrepresents some of Taleb's points though: the "turkey problem" (p40 of Black Swan 2e) is treated as a historical analysis, whereas Taleb's point is that "a black swan is relative to your point of view". (Again on p279 Hubbard won't let the turkey go!)

p217: Hubbard refutes "not-invented-here" syndrome and talks about how yes, risk management can be done at any organization.

p228: The Risk Paradox. "The most sophisticated risk analysis methods are often applied to low-level operational risks, whereas the biggest risks use softer methods or none at all."

p271: How to calibrate people (for accurate probability estimates). Repetition and feedback, equivalent bet test, consider that you're wrong (premortem), avoid anchoring.
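The "measure of past performance" idea behind calibration is simple to automate: collect a person's 90% confidence intervals and count how often the truth lands inside them. A minimal sketch, with invented forecast data:

```python
# Each record: a forecaster's 90% confidence interval (lo, hi) and the value
# that was actually realized. All data here are invented for illustration.
forecasts = [
    ((10, 50), 34), ((100, 300), 310), ((0, 5), 2),
    ((20, 40), 22), ((1, 9), 12),      ((5, 15), 7),
    ((50, 90), 71), ((2, 6), 3),       ((30, 70), 45),
    ((0, 100), 99),
]

# Count intervals that contained the realized value.
hits = sum(lo <= actual <= hi for (lo, hi), actual in forecasts)
rate = hits / len(forecasts)
print(f"{hits}/{len(forecasts)} intervals contained the truth ({rate:.0%})")
# A calibrated estimator should land near 90% over many forecasts; untrained
# people typically score far lower because their intervals are too narrow.
```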

Why not 5 stars? Risk management, let's be honest, is not the most exciting topic, so some areas were necessarily dry. I think some sections could have been shorter and there was a fair bit of repetition. I believe the 1e was almost 100 pages shorter.
Dennis Boccippio
105 reviews · 19 followers
October 20, 2010
I had high expectations for this book after reading "How to Measure Anything", and unfortunately none of them were met. My very short review would state: were it not for those high expectations, I would have stopped reading the book about 1/3 of the way in, but based on past performance, I stuck it through to the end. That was a mistake.

The defects in Hubbard's second book are many. First and foremost, it is simply not pleasant to read. While "How to Measure" adopted a posture of helpful tutorial, "Failure" attempts to rehash most of the same material, albeit from a posture of criticizing almost every risk analysis method Hubbard has not personally worked on. The tone is shrill, smug, and "low emotional intelligence quotient". In the book we are treated to several "I won't name names but you know who you are" diatribes, a personal critique of author Nassim Nicholas Taleb for being too abrasive in delivery (which he is ... but Hubbard delivers this assessment with apparently no hint of irony), and ever more stories of how Hubbard publicly shames clients during working meetings into admitting they do not know as much as he does. If that is one's corporate approach towards change management, it would seem Hubbard is your man. Ironically, all of these things suggest that a sensibility towards the actual "people systems" of not just management, but implementation, is completely lacking - and thus undermines Hubbard's credibility as an expert on anything other than analytic techniques. This may be an unfair personal assessment, but Hubbard does little in the book to communicate even rudimentary management sensibilities, and the burden of proof - especially when exploring a topic such as this - should be his.

Hubbard spends an inordinate portion of the book repeatedly - redundantly - making the same self-evident point that low-fidelity risk analysis methods such as scoring approaches are, well, low-fidelity, and subject to bias. This is tautological. Even for those consumers of the methods who haven't thought hard about the issue, the point can be made in five pages, and does not need 150. (Note that it is at least that long before solutions begin to be offered.) Even worse, Hubbard's primary critique other than offending "first principles" sensibilities is that these techniques have not been proven to actually have measurable impacts on performance. This might be an interesting line of inquiry had Hubbard actually done any new research on the subject, or joined with management consultants who had. Or, more importantly, had he demonstrated the benefits of using the more rigorous, probabilistic risk assessment techniques which he advocates. He does not. (He alludes to this in literally the closing chapters of the book, but never actually tackles the challenge of performance-based assessment.) Simple techniques are bad because they are not as rigorous or unbiased as the techniques he would advocate - therefore they must (or perhaps may?) do more harm than good. Difficult to say, as this issue is delivered rhetorically rather than rigorously.

The biggest failure of "The Failure of Risk Management" is that it mostly declines to tackle actual management. As Hubbard himself seems to realize and admit very late in the book, he has written a text about risk analysis, not risk management. Ultimately, the content - even if it were not largely a rehash of the material from "How to Measure" - is much, much, much thinner than the title, and the title could have been a very interesting exploration of modern (or not so modern) management techniques. A further challenge is that from Hubbard's anecdotes, it appears he views even risk management (read: analysis) as something done solely for the purpose of decision support for senior executives. No mention is made of risk management as a tool for not just C-suite executives but also the project managers and employees who actually have to manage and mitigate risks. This elision allows Hubbard to even more stridently dismiss all low-fidelity techniques out of hand. (Make no mistake - scoring approaches and their efficacy do need hard scrutiny. Unfortunately, Hubbard does not provide it, he simply shouts for others to perform it.)

In summary - if you have read "How to Measure Anything", you have read 90% of what Hubbard has to say, and probably enjoyed reading it more than you will by engaging with this book. If you have an agenda to promote probabilistic risk management within your organization, to the detriment of other approaches, this book will provide you ample rhetoric as well as theory, but not actual evidence or ROI documentation, and very little in the way of tangible implementation tools or techniques to go forward. It is an opportunity missed.
Mike Smith
527 reviews · 18 followers
August 3, 2011
I read this book out of professional interest. It likely won't appeal to anyone who doesn't work in the areas of risk analysis and risk management. Author Hubbard explains why, in his opinion, the most popular risk analysis methods are ineffective and may, in fact, cause more harm than they prevent. The popular methods are based on what he calls "scoring systems", assigning a score to a risk using a poorly defined scale such as 1 to 5 or "low", "medium", and "high". He gives reasonable arguments against these scoring systems and shows how they lead to bad decisions. He does admit that it often doesn't matter, though, because very few organizations actually track decisions made as a result of risk analyses to determine whether the analysis was flawed.

Hubbard recommends using quantitative risk analysis methods, mainly based on Monte Carlo modelling. The Monte Carlo method involves running hundreds or thousands of simulations of a system and making note of the frequency and impact of various outcomes. These define your risks. The models should be based on expert estimates of various inputs. Even with minimal data to start, subjective inputs can be averaged and corrected for through assorted means to minimize the effects of subjectivity.
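A minimal version of the Monte Carlo approach the review describes might look like the sketch below. The business model (profit = revenue - cost) and the experts' 90% intervals are invented for illustration; mapping a calibrated 90% interval to a normal distribution is one common modelling choice, not the only one:

```python
import random
import statistics

random.seed(7)

def normal_from_ci(lo, hi):
    """Draw from a normal whose 90% confidence interval is (lo, hi).
    For a normal, the 90% interval spans mean +/- 1.645 standard deviations."""
    mean = (lo + hi) / 2
    sigma = (hi - lo) / (2 * 1.645)
    return random.gauss(mean, sigma)

def one_trial():
    # Hypothetical calibrated-expert inputs, each a 90% interval in $M.
    revenue = normal_from_ci(8.0, 16.0)
    cost = normal_from_ci(6.0, 12.0)
    return revenue - cost

# Run many simulated outcomes and read risk straight off the results.
profits = [one_trial() for _ in range(50_000)]
p_loss = sum(p < 0 for p in profits) / len(profits)
print(f"mean profit ${statistics.mean(profits):.1f}M, P(loss) = {p_loss:.2f}")
```

The output distribution, not a single-point estimate, is what gets compared against the organization's appetite for loss.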

I'm obliged to use a scoring system in most of my work, and I've been complaining for years that it makes no sense. This very good and readable book is another argument in my corner. There is some technical material in the final few chapters, but the book is largely math-free, although some math and statistics are required for good risk analysis, in Hubbard's view.
Michael Huang
1,026 reviews · 54 followers
October 14, 2018
Risk management is basically understanding the risks taken. The right way to analyze the risks is using appropriate tools of probability. But the common methods taught (I’m guessing to business people) are flawed: they are vague and qualitative; they ignore concepts of common mode risks; etc. Humans are poor at estimating things, and are subject to all kinds of biases. So don’t trust “experts”; trust science and math: get a model and run Monte Carlo simulations. No data? No problem. Make them up and adjust as observations come in. And if your job is risk management, read this book for the finer details.
Felipe Moreira
40 reviews · 5 followers
October 25, 2015
Emphatically against the so-called "scoring methods" (which the author calls "as precise as astrology"), Hubbard asserts that there is only one way to manage risk: through quantitative analysis. He severely criticizes professional bodies that recommend what he calls "demonstrably flawed" and "worse than useless" methods as best practices.

The high point of the book is, without a doubt, the insights and the deconstruction of fallacies and myths about quantitative risk analysis (especially Monte Carlo simulation). He insists that the barriers professionals raise against using quantitative analysis lie not in its limits, but in those professionals' ignorance and/or lack of skill. Reading the book, I took several "virtual slaps in the face" as the author convinced me I was wrong.

The proposal to approach management problems scientifically is without doubt of great value. He sketches, at a high level, a set of principles for building probabilistic models, without providing any more detailed example (which is perfectly acceptable). Beyond reading the book, I recommend studying the auxiliary spreadsheets the author provides on his website - they are important for understanding the concepts presented.
Kai Evans
169 reviews · 6 followers
October 31, 2018
basically the good old "other statisticians are idiots", but with less arrogant charm than Taleb
Herman
3 reviews
June 23, 2025
"The Failure of Risk Management" is a book where Hubbard smugly takes aim at everyone else while contributing little more than a shallow walk through undergraduate-level statistics and economics. The tone is self-congratulatory, with an outsized focus on criticizing others rather than providing tools or frameworks of his own. Hubbard’s energy is spent not on substance, but on polemics. The result is a book that postures as rigorous while avoiding the hard work of engaging seriously with the complexity of risk.

P.S. Sorry for sounding a bit like Hubbard myself, but after reading a book this heavy on criticism and this light on substance, it’s hard not to mirror the tone a little.
S.
127 reviews · 4 followers
September 20, 2025
Man, I'm sorry but I really hate everything about the way Douglas Hubbard writes. I want to care about quantitative risk management but I feel like discarding it to spite this smug asshole.
Author · 1 book · 7 followers
September 2, 2016
This book is a must-read for those that make, or contribute to, decisions. Just like in How to Measure Anything: Finding the Value of Intangibles in Business, Hubbard pulls no punches in describing what works - and what doesn't.

The book begins with a survey of the current state of risk management in many different fields. It is made painfully obvious that particularly in IT and project management, we are ignoring basic knowledge from fields such as actuarial science. Hubbard makes it clear that subjective, non-calibrated 1-5 scales for risk such as what is prescribed by PMI are worse than useless. They actually cause us to unknowingly take on risk while we think we are being safe. You've used them. I've used them. Now we know better. We need to stop.

What should we do instead? Models, simulations, and calibrated experts. Hubbard clearly shows it is not overkill to use these tools, and recommends ways to do so effectively. Do yourself and your organization a favor - read this book and educate others on how to do effective risk management.
4 reviews
September 27, 2011

Like The Black Swan, this book opened my eyes to the failure of risk management. My issue is that it shows why qualitative risk management fails but does not provide evidence for the accuracy of quantitative methods, other than vague statements by the author that he goes back and checks his estimates. The independent studies I have found for nuclear facilities (there are surprisingly few) cast doubt even on quantitative methods. That being said, I enjoy the mathematics of quantitative methods but believe they must be used with much caution until more research is done.
11 reviews
November 6, 2018
To understand the book, one needs to understand the difference between Qualitative and Quantitative risk analysis.

Traditional risk management states that the severity of a risk is equal to Probability x Impact.

Let's start with Qualitative Analysis. Qualitative usually begins with a scoring system. For example 1 would be the lowest and 5 the highest in terms of probability and impact.

Suppose we want to know the risk severity of a customer slip and fall in our store. An example might be this:

Probability of a fall: 1 (Low)
Impact of a fall: 5 (High)
Severity = 5 (P x I) - On a scale of 1 to 25, this would be somewhat low severity

Per Hubbard, this type of non-scientific analysis is pervasive throughout the risk management industry, widely used by fresh MBA grads and management consultants.

In classic Pareto form, the bulk of this book boils down to a small summary: Qualitative risk analysis is immeasurable, and therefore useless. The above analysis provides no real data or scientific reasoning to treat it with any importance, thus Qualitative methods are meaningless, and in some cases even dangerous.
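The "meaningless" charge can be made concrete with a toy comparison. The two risks below are invented (they are not from the book), but they share the same 1-to-25 matrix score while their expected annual losses differ by more than an order of magnitude:

```python
# Two hypothetical risks (numbers invented) that a 5x5 matrix cannot tell apart:
# both score probability x impact = 5, yet their expected losses differ hugely.
risks = {
    "slip and fall": {"score": (1, 5), "prob": 0.001, "loss": 5_000_000},
    "shoplifting":   {"score": (5, 1), "prob": 0.90,  "loss": 200},
}

results = {}
for name, r in risks.items():
    p_score, i_score = r["score"]
    results[name] = {
        "matrix_severity": p_score * i_score,    # ordinal "severity", 1 to 25
        "expected_loss": r["prob"] * r["loss"],  # dollars per year
    }

for name, res in results.items():
    print(f"{name}: severity {res['matrix_severity']}, "
          f"expected loss ${res['expected_loss']:,.0f}")
# The matrix ranks the two risks identically; the quantitative view says one
# is nearly 28x the other in expected annual loss.
```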

Hubbard argues for a cultural shift to Quantitative techniques, methods that seek to assign numerical probability to risk events through inferential statistics and mathematical models.

Although Hubbard makes a great case against Qualitative Analysis, he fails to make more complicated methods seem approachable to risk management newcomers. The only model to my recollection that receives any deep explanation is Monte Carlo analysis, a technique for simulating a system many times to create a probability distribution.

Over 70% of the book is Hubbard attacking other methods, without providing any alternatives for individuals lacking his technical background. This is unfortunate, because these technical methods are largely teachable if explained correctly.

Failure of Risk Management is a good read for Risk Managers and other industry professionals. I only wish Hubbard had taken the time to make some complex concepts simpler, thus giving the book a broader appeal.
Cold
618 reviews · 13 followers
February 12, 2022
Hubbard writes about the same few ideas over and over. He is a pragmatic, opportunistic quant who operates in emerging risk topics dominated by qualitative methods. As a result, he proselytizes a handful of empirical methods that don't need huge amounts of historical data. We hear a lot about Monte Carlo methods, expert judgement with calibration, decomposition, etc. All of this is fine as a narrow argument about how to analyse emerging risks.

But he over-extends the argument so badly. First, this is a book about risk analysis, not risk management, which he admits. So the content doesn't actually address the title. Second, if you write a book about "The Failure of Risk Management" in 2018 arguing for more quantitative methods, it needs to seriously come to terms with the financial crisis and related financial disasters. When he cites Taleb and the misapplication of narrow-tailed distributions, this is more a call to apply the right quantitative methods. There is no reflexive critique about false confidence, quant models as political tools, etc. More generally, it felt like all case studies and literature were cherry-picked to suit his argument about applying quant to emerging risks.

I dunno, maybe I am the sucker for expecting more from what is essentially an airport book, but the book felt dishonest.
47 reviews
November 15, 2020

I am usually a tad reluctant to read books from other consultants about risk management but I saw a review about it somewhere and I was also encouraged by the fact that it was published by Wiley (a hangover or a bias from my distant Uni days).

When you consult or engage with an issue in a business environment, you need to spend time and effort articulating the issue before you switch into what I call 'solution mode'. The rationale is that if people don't see the issue, they are unlikely to engage with the proposed response. I will come out openly at this point and admit that my reflex is to go into solution mode too quickly. This book is the opposite: 200 pages about what is wrong and about 60 on how to fix it. Don't get me wrong, I believe that the proposed solution - risk quantification based on layers of Bayesian consideration - has potential, and I was keen to get under its skin. Alas, I need to find another book to do so.

If you have read any of the books from Nassim Taleb, you would have undoubtedly seen the strength of his views. I am not quant enough to challenge Taleb, but it was good to see a critique here (Chapter 8).
John
623 reviews · 5 followers
June 1, 2021
This book is probably one of the most referenced risk management books around, short of Taleb. Finally broke down and got a copy of the 2020 edition. I have been involved in quantitative risk analysis of capital projects for about 20 years (Hubbard would put me in the War Quant category). I read this book on a plane some years ago, but decided to re-read it in the new 2020 edition given questions people were asking me about it. Hubbard has a very broad remit of risk management of enterprises and decisions of all kinds. In other words, the book is not going to give anyone off-the-shelf tools. In a nutshell, he is trying to get enterprises off the exclusive use of the risk matrix and deterministic analysis and to use more quantitative methods and probabilistic models. And, recognizing that subjective input is required even for quantitative models, to get the best (calibrated) subjective input you can. But also to get real data into one's models. That's about it. There are a lot of tidbits to bookmark about basics of stats, findings of research, etc. I am glad to have refreshed my recall of it; however, nothing really new to me. Good reading for those newer to the topic.
Avi Poje
129 reviews
September 1, 2025
I read this book at the same time as one of Hubbard’s other books, How to Measure Anything. By the end of both, I am a convert to his insistence that quantitative risk analysis using Monte Carlo simulations is superior to risk matrices. Or rather, risk matrices are placebos and do very little in the way of helping make informed decisions.

Risk matrices remind me of the story of the drunk who is looking around under a streetlight. A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is".[^1] Just because risk matrices are *easier* doesn’t make them effective.

The only trick, of course, is that I still need to learn how to put together a Monte Carlo simulation.

[^1]: David H. Freedman (2010). Wrong: Why Experts Keep Failing Us. Little, Brown and Company.
Josh Ranger
3 reviews · 2 followers
December 28, 2021
In my humble opinion, the title is misleading. This is a book about quantification as part of risk analysis within a risk management process. If you're looking for a comprehensive book addressing risk management frameworks and processes, look elsewhere.

The author's premise seems to be that risk managers 'don't do math good' and therefore their risk management is ineffective. I agree with some points, particularly that professional risk managers need to ask themselves 'how do we know we're being effective?' Frankly, this is challenging, especially if additional resources aren't allocated for reporting and monitoring. I would have appreciated more page space on effective communications - especially in large organizations with multiple business units. Even if the math is dead on, I've seen instances of changing assumptions to get the answer to fit a narrative.

Constant references to the consulting practice and website rubbed me the wrong way. Giving three stars for useful insights on risk quantification.
7 reviews · 1 follower
December 29, 2022
I was very surprised with how engaged I was with the content of this book. I believe it should be required reading for anyone participating in risk analysis of any kind.

Probability and statistics are not always exciting, but this author makes it as interesting as it can be. I have a B.S.M.E. degree, but believe most people can grasp the information within these pages. Would recommend to any professional involved with enterprise risk management.
John
623 reviews · 5 followers
March 8, 2018
A good executive-level summary of some of the challenges the risk management and risk quantification profession faces. I think it leaves the executive hanging a bit at the end in terms of workable solutions for anything other than major decisions (granted, most consultants only work on the big stuff), but overall it's a good treatment and I agree with most of its findings.
Bayar
6 reviews
October 12, 2018
The most common risk-management methods are flawed because they rely on qualitative descriptions and don’t account for human bias and the relationships between risks. Therefore, in order to effectively determine and manage risk, it’s essential to use probabilistic models, which rely on calibrated experts and comprehensive variables.
Eric Wenger
31 reviews · 3 followers
December 31, 2017
This is an above-average business book about how to assess and manage risk. Some weird attempts to pick fights with other probabilistic thinkers like Taleb. Also somewhat repetitive—common to business books. But thought provoking.
8 reviews
January 30, 2021
Required reading for a statistical risk management course. Advocates for a more scientific method to risk management. A lot of what's said here boils down to (1) you need to be able to quantify your results (2) just because you think you did, doesn't mean you actually did.
226 reviews
July 22, 2025
Read for graduate school Critical Infrastructure class. Very dense, and it didn't answer my question on how to assess the risk of terrorist events (my professor says it can't be done; we continue to use the risk matrix the book says not to use).
Marcin Nowak
54 reviews
November 20, 2017
A popular book aimed at increasing awareness of what risk really is. In that sense, I think it worked out for me.
Andy
849 reviews · 6 followers
November 24, 2018
A useful discussion of risk management, with good coverage of the applications of judgment and decision-making research. The calibration tests are a nice benefit as well.
205 reviews
June 13, 2021
Oh I remember nothing about this book...
East West Notes
116 reviews · 33 followers
November 24, 2025
Very useful, but I should have gone for ebook rather than audiobook for this one.
Denis Korsunov
84 reviews · 3 followers
February 6, 2017
It's a good book about techniques of risk assessment, but the extra-mural conversations with ideological opponents (i.e., Taleb) add fatigue.
