
Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It (Basic Books).

Why mathematical models are so often wrong, and how we can make better decisions by accepting their limits

Whether we are worried about the spread of COVID-19 or making a corporate budget, we depend on mathematical models to help us understand the world around us every day. But models aren’t a mirror of reality. In fact, they are fantasies, where everything works out perfectly, every time. And relying on them too heavily can hurt us.

In Escape from Model Land, statistician Erica Thompson illuminates the hidden dangers of models. She demonstrates how models reflect the biases, perspectives, and expectations of their creators. Thompson shows us why understanding the limits of models is vital to using them well. A deeper meditation on the role of mathematics, this is an essential book for helping us avoid either confusing the map with the territory or throwing away the map completely, instead pointing to more nuanced ways to escape from Model Land.

256 pages, Paperback

First published November 24, 2022

109 people are currently reading
857 people want to read

About the author

Erica Thompson

12 books, 6 followers

Ratings & Reviews



Community Reviews

5 stars: 57 (18%)
4 stars: 122 (40%)
3 stars: 93 (30%)
2 stars: 26 (8%)
1 star: 5 (1%)
Brian Clegg
Author, 161 books, 3,174 followers
November 29, 2022
Over the last few years a number of books, notably Sabine Hossenfelder's Lost in Math, David Orrell's Economyths, Cathy O'Neil's Weapons of Math Destruction and Tim Palmer's The Primacy of Doubt, have pointed out problems with the mathematical modelling done by businesses, physicists, meteorologists, epidemiologists, economists and more. These are not anti-science polemics, but rather people who know what they're talking about pointing out the dangers of getting too carried away with elegant mathematics and models, often assuming that the models effectively are reality (and certainly presenting them that way in some of the writing and press releases from the scientists building and using the models).

Erica Thompson takes on the problems of mentally inhabiting the mathematical world she describes as 'model land'. As she cogently points out, it's fine to play in model land all that you like - the problem comes with the way that you exit model land and tie back to the real world. This book is loaded with examples from climate forecasts, economics, pandemic forecasting and more where the modellers have been unable to successfully get out of model land and present their information usefully to those who have to make decisions (or the public). This is not an attempt to get rid of models. Thompson's key argument is that while models will pretty well always be wrong they can still be very useful - and an understanding of uncertainty/risk combined with expert interpretation is the best (if sometimes narrow) bridge to link model land to the real world.

Unfortunately, unlike the books mentioned above, Thompson's doesn't read particularly well - the writing is very dry. It's also arguable that having set us up to ask questions about scientific output and models, we don't get the same degree of analysis applied to Thompson's personal ideas. So, for example, she tells us 'Diversity in boardrooms is shown to result in better decision making'. I don't doubt this, or the parallel she is drawing for needing diversity of models - but how was success of decision making measured, and what does diversity mean in this context? In fact diversity is something of a running theme, with Thompson several times referring to model makers as largely WEIRD (standing for Western, Educated, Industrialised, Rich, Democratic) - the acronym seems an unnecessarily ad hominem jibe - and is it really possible to develop mathematical models without being educated?

There are a few oddities and omissions. One of Palmer's big points in The Primacy of Doubt is the oddity that economics hasn't taken up ensemble forecasting - something that isn't mentioned here. The way (mathematical) chaos is presented is also a little odd - it's mostly referred to as 'the butterfly effect', which is really only a specific example of a potential (though relatively unlikely) impact of a chaotic system. Thompson also calls chaotic systems complex, yet they can be surprisingly simple. It's also unfortunate that, when describing the limitations of vaccine modelling, there is no mention of a point emphasised in the scientific journal Nature: the way surface transmission and cleaning continued to be pushed many months after there was clear evidence that transmission was primarily airborne.

Thompson's enthusiasm for diversity has one notable exception that throws into doubt her concept that the best way to use models is to have more intuitive human input from academics to interpret and modify the results. She has an impressive list of diversity requirements - age, social class, background, gender and race for those academics. But she omits the diversity elephant in the room, which is political leanings. When the vast majority of academics are politically left wing, surely this too needs to be taken into account.

Overall, some interesting points, but a dry academic writing style combined with some limitations means that it has less impact than the books mentioned above.
Gary Moreau
Author, 8 books, 286 followers
January 6, 2024
Erica Thompson is a senior fellow at the London School of Economics’ Data Science Institute and a fellow of the London Mathematical Laboratory. She has worked on models relating to COVID-19, public policy, and climate change. She might be the last person you would expect to say, therefore, that “follow the science” can be a fool’s folly.

Despite the world’s current worship of data as holding the answer to all questions, data are dumb. They know nothing. They rely on models to add meaningful relationships between them. Said slightly differently, all data exist in context. And no context relating to the reality of the world we live in can ever be completely knowable. It is simply too complex to draw it all together in a definable mathematical form.

Which is why she quotes British statistician, George Box, as famously noting that, “All models are wrong.” She notes, “The forecast is a part of a narrative…An engine, not a camera; a co-creator of truth, not a predictor of truth.” “Even if the model is ‘made’ by an artificial intelligence rather than by a human expert, it inherits in a complex way the priorities of those original creators.”

That is not to say, however, that we should simply jettison all of the modelers and revert to simple intuition or folklore for guidance in dealing with complex issues like weather, climate change, and pandemics. We just need to use their models responsibly. We need to understand their limitations and potential biases that may be unknowingly built into them.

And she concludes the book with five recommendations that should be adopted to help do just that. And they all make sense to me.

I live in the world of business, not public policy. And in the 45 years I have toiled in the corporate world I have been alarmed at the degree to which business people today rely on data and the financial models they create with them to make important decisions that greatly impact the lives of real people. Intuition and experience are largely dismissed, which is one of the reasons youth is in and those who have been there before are largely relegated to the back of the room.

Virtually no project gets approved today if it is not supported by an elaborate financial model. And if you can say that it was built with Big Data it is almost a shoo-in. Yet are our corporate decisions today any more accurate, or just less considerate of the company’s constituencies, particularly its employees?

Common sense is the oldest form of intuition there is and this book is full of it. Wherever you stand on the largest issues of the day, like climate change, this book will help you become more informed and better able to represent your position.

And it’s all written in language that any of us can follow. No mathematical notations or dense scientific jargon. The writing is very conversational and easy to follow.

Well done.
Anna
2,114 reviews, 1,016 followers
July 23, 2023
I decided to read Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It after attending a webinar in which the author discussed it. The use and misuse of statistical modelling is of great interest to me, as my job and past academic research have involved various forms of models. Thompson discusses what models are, how they should be used, and their fallibility in a thoughtful manner. She started the book prior to the pandemic, which then provided an extraordinary case study in public and political interpretation of epidemiological model outputs. Her phrase 'Model Land' is helpful shorthand for the gap between the real world and what models assume:

You cannot avoid Model Land by working 'only with data'. Data, that is, measured quantities, do not speak for themselves: they are given meaning only through the context and framing provided by models. Nor can you avoid Model Land by working with purely conceptual models, theorising about dice rolls or string theories without reference to real data. Good data and good conceptual understanding can, though, help us to escape from Model Land and make our results relevant once more to the real world.


Two of Thompson's main points are that all models contain judgements and that determining the usefulness of any model's results also involves judgements. Insofar as possible, these judgements should be transparent and come from diverse perspectives. I was tickled that chapter 4 begins with a quote from the Borges story 'Funes the Memorious', as I also compared this with large language models in my review of Labyrinths. Thompson writes well about machine learning as money laundering for bias:

The evolution of business and policy-making decisions away from so-called GOBSAT method of group decision by 'Good Old Boys Sat Around the Table' (with all the non-inclusiveness that the phrase implies and more) and towards more quantitative methods has been a positive development, but it is also starting to go too far by removing anyone at all from the table and substituting them with what I might call Good Old Boys Sat Behind the Computer. We have not solved the underlying problem here, which has very little to do with mathematics. [...] For this reason, I think that a primary challenge of twenty-first century decision-making is learning to curb overenthusiasm for mathematical solutions.


I enjoyed an unexpected discussion of astrology's usefulness as a predictive model:

I am certainly not inclined to take up astrology in preference to mathematical modelling, and my aim here is not to claim that the two cannot be equated, but there are instructive similarities. One is the always conditional nature of future forecasts: if the conditions of the model are never satisfied, will we ever be able to say retrospectively whether it was 'right' or 'wrong'? Another is the potential for bias to creep in according to the funding of research: what kinds of mathematical models do we elevate to high status in different fields, and how does this reflect the priorities of funding agencies? And third is the ability of the mathematical framework to support and give credibility to the judgements of the experts who construct, drive, and interpret the models.


I was also pleased to learn about the Hawkmoth effect, which I hadn't heard of by that name before. It refers to the sensitivity of results to model structure, rather than to initial conditions (the butterfly effect). Chapter 8 was the most difficult to read, despite being excellent, as it covers climate change modelling. This was particularly enraging and depressing to read during the dangerous, record-breaking heatwave blanketing the northern hemisphere in July 2023 (for future reference). I came across this phenomenon repeatedly while researching carbon dioxide mitigation policies:

[McLaren & Markusson, 2020] highlight in particular the ways that promised solutions in the past have failed to live up to their advertised potential and so 'layers of past unredeemed technological promises have become sedimented in climate pathway models'. As they say, this constant reframing and redefinition of climate targets tends to defer and delay climate action - even when the intentions of those involved are largely positive - and this undermines the possibility of meaningful responses, as a result constantly shifting the burdens of climate risks onto more vulnerable people.


I really appreciated how lucidly Thompson explains concepts like this and the way that models assume future technological improvements will come along and save us, e.g. carbon capture and storage, hydrogen fuel cells. How can these possibly be useful when fossil fuel multinationals continue to pour money into extracting oil, gas, and coal like there is no tomorrow - and, according to their business plans, there won't be? As Thompson says, the next such technological fix to be thrown into models will undoubtedly be geoengineering. Once it's in the models it'll move towards the Overton Window of politically acceptable policies, which would open a gargantuan can of worms.

During my postgraduate studies I also became familiar with fucking Nordhaus and his model that said 4 degrees of warming would be economically optimal. Being reminded of that was especially enraging. Economic discount rates may sound boring and technical, but their effect is to justify destroying the liveability of planet Earth by always prioritising current economic growth. As Thompson explains, choosing environmental policies on the basis of willingness to pay right now is fundamentally flawed: 'if willingness to pay reflected value, we would find that oxygen is worth much more to an American than to an Ethiopian'. (Don't think about this for too long or it'll make you feel ill.)
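
To make the arithmetic concrete, here is a toy calculation of my own (not from the book, with entirely made-up numbers) showing how the choice of discount rate alone can shrink a large future climate damage into a modest present-day cost:

```python
# Toy illustration of exponential discounting; the damage figure and the
# rates are hypothetical, chosen only to show how sensitive the result is.

def present_value(future_cost, discount_rate, years):
    """Standard discounting: PV = cost / (1 + r)**years."""
    return future_cost / (1 + discount_rate) ** years

damage = 1_000_000_000_000  # a hypothetical $1 trillion of climate damage
years_ahead = 75            # assumed to occur roughly 75 years from now

for rate in (0.001, 0.015, 0.03, 0.05):
    pv = present_value(damage, rate, years_ahead)
    print(f"discount rate {rate:.1%}: present value ≈ ${pv / 1e9:,.0f} billion")
```

At a 5 per cent rate the trillion-dollar damage is 'worth' only a few tens of billions today, which is how an 'optimal' amount of warming can fall out of the arithmetic.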

The book's central theses about models are widely applicable. My mind immediately jumped to transport modelling, which isn't specifically mentioned. The UK Department for Transport's travel forecast models are part of the system of car dependence. They are designed with embedded assumptions that always project rises in car travel, which are used to justify expanding road capacity, which induces more car travel (the Braess Paradox, discovered more than fifty years ago). This is a particularly direct example of what Thompson describes as performativity: 'when the use of the model directly shapes the real-world outcome in its own image'. This risks going even further with autonomous vehicles, as Paris Marx warns in Road to Nowhere: What Silicon Valley Gets Wrong about the Future of Transportation. Tech companies argue that AVs need a more predictable road environment without pesky pedestrians crossing, etc. For AVs to function, the road network apparently needs to become Model Land - not a remotely realistic prospect.

Although the climate change chapter was brutally dispiriting, overall I was impressed with Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It and found it a very thought-provoking interdisciplinary examination of modelling as a concept. Thompson provides novel and useful ways of thinking about so-called AI, as well as modelling of pandemics, climate change, the economy, and anything else you can think of. However, I'm not sure how accessible it would be to someone who has never studied or used mathematical modelling in any context, as it doesn't spend long establishing its terms and context before getting into detail.
Paul
2,229 reviews
April 26, 2023
The future is not what it used to be – Laura Riding and Robert Graves

Anyone who tries to predict what is going to happen is making a guess. Some of those guesses may be educated or based on long experience of a particular thing, but they are still guesses. One of the methods we have turned to, to understand what might happen, is mathematical modelling. And whilst models can be a useful tool, some of them are not much more useful than a foam screwdriver.

These models that have been created are full of hidden dangers. The people who have created them either consciously or unconsciously infuse them with their own biases. Some of them do not accurately take into account all the information and others make dangerous assumptions about the way things actually happen in the real world.

I thought this was a very interesting book. Thompson puts the case well that we need to use these mathematical models but also be very aware that they have finite limits and are not the answer to all of our problems. The maxim rubbish in = rubbish out is very true, especially in some of these models.

Even though it is a complicated subject - though some of that is smoke and mirrors by the people who want it to seem that way - Thompson makes it accessible and interesting, and she made me very aware of the limits that models have. There is a very interesting chapter on financial models that seem to increase rather than decrease the risk. The chapter on climate modelling is well worth reading. I think that the call for a CERN-type system that is run by scientists from all over the globe makes a lot of sense. I can recommend this. 3.5 Stars
Jeff
1,734 reviews, 163 followers
July 15, 2022
Astrology == Mathematics. For Sufficiently Large Values Of 2 While Imagining Spherical Cows. Thompson does a truly excellent job here of showing how and where mathematical models of real-world systems can be useful, and where they can lead us astray - perhaps a bit *too* good, as at times she has to jump through a few mental hoops to excuse the inadequacies of preferred models such as those related to climate change and the spread of COVID. On climate models in particular, she actually raises one of the several points Steven Koonin did in 2021's Unsettled - namely, just how wide each cell of the model is by necessity and how much variation there is within these cells in reality, yet models must - again by necessity - use simply an average value throughout the cell. But she discusses a wide variety of models in addition to climate, and again, she truly does an excellent job of showing their benefits and how they can harm us. One star is lost due to the extremely short "future reading" section in place of a more standard bibliography (20% or so is fairly standard in similar nonfiction titles). The other star is lost because this book does have a robust discussion of the numerous COVID models and *I DO NOT WANT TO READ ABOUT COVID*. I am waging a one-man war on any book that references this for any reason at all, and the single star deduction is truly the only tool I have in that war. Still, again, this book really is quite good - as a narrative alone, indeed better than the three star ranking would seem to indicate. Very much recommended.
Lucille Nguyen
450 reviews, 13 followers
October 12, 2025
I wanted to like this book. As a practicing data scientist who's worked with climate models, financial models, and other simulations of complex systems, I very much think that there are severe limits to what can be learned and known from models.

And yet, this book was disappointing. It tried to make the distinction between uncertainty and risk, but did so poorly and didn't explain the Knightian distinction. Complexity from model inputs (the butterfly effect) and from model structures is presented, but the underlying logic of complexity is not. An argument for model pluralism and ensembles is made, but with no practical means of sorting among them.
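
For what it's worth, the underlying logic is easy to demonstrate with a toy model. This sketch (mine, not the book's) uses the logistic map to contrast sensitivity to initial conditions with sensitivity to model structure:

```python
# Toy contrast between the butterfly effect (sensitivity to initial
# conditions) and the hawkmoth effect (sensitivity to model structure),
# using the chaotic logistic map as a stand-in for a "real" model.

def trajectory(x0, steps=50, r=3.9, structural_eps=0.0):
    """Iterate x -> r*x*(1-x) + structural_eps*x**2."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(r * x * (1 - x) + structural_eps * x * x)
    return xs

base = trajectory(0.2)
butterfly = trajectory(0.2 + 1e-9)               # nudge the initial condition
hawkmoth = trajectory(0.2, structural_eps=1e-9)  # nudge the model equation itself

for name, run in (("butterfly", butterfly), ("hawkmoth", hawkmoth)):
    divergence = max(abs(a - b) for a, b in zip(base, run))
    print(f"{name}: max divergence from the base run = {divergence:.3f}")
```

Both tiny perturbations eventually produce order-one divergence, but only the first is about the data; the second is about the model structure, and no amount of better measurement fixes it.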

Also, the references to behavioral and social sciences are poor: talking about the marshmallow test without noting its failure to replicate, presenting only one theory of decision making under uncertainty (narrative theory), and discussing behavioral economics as deviations from rationality without mentioning Kahneman and Tversky (or, to go even further, Gerd Gigerenzer for an opposing view). For an LSE scholar who's worked on epidemiological and economic issues, one would think some comment would be made on those.

The problem is that the book is too dry to be generally read and too non-technical to be read by professionals - just in the in-between where something serious that needed to be said about models (the risk, and the need to take them with a grain of salt) will be lost to books like The Signal and the Noise and whatever Malcolm Gladwell is butchering these days. Ultimately, that is for the worse for public understanding of science and simulation.
Kyle
419 reviews
July 26, 2024
My best summary of my review is that there is no escape from Model Land, if what we want is to simply talk about the "real world." The author makes some good points on improving thinking about modeling and not undervaluing non-quantitative models, but doesn't provide any strong evidence that incorporating more qualitative models would improve things [although this is stated many times, one wonders how often qualitative models have made things worse, and so some systematic study is needed]. Evaluating model assumptions is a worthy endeavor and requires many people to truly challenge all of a model's assumptions.

I find this book somewhat hard to evaluate. On the one hand, its main idea seems very reasonable to me and most of the suggestions make sense. On the other, I found the front part of the book strong on assertions and weak on evidence at times. The author's background is in climate modeling, which probably also gives a different take than my more physicist-like take on models. The latter chapters on pandemic and climate modeling somewhat compensate for this. I dislike that there are no citations for most of the studies or evidence used in the book, or formal citations at all [that is with a footnote/endnote], just a further reading section. It makes it difficult to evaluate if there are just bare statements.

[Some examples of statements that could use further argumentation:
p. 79 "If we could not construct some form of conviction narrative to settle emotionally on a course of action, this kind of everyday decision would be soul-crushingly impossible to navigate." Maybe for some people, but everyone?
p. 81 "I don't know anyone who thinks of their own life in Darwinian terms, but research suggests that a degree in Economics can significantly weight your own value systems towards quantitative metrics and less social behaviours" Which research?
p. 82 "As David Davies notes, 'films that present geopolitical events as clashes between forces of good and forces of evil do not furnish us with cognitively useful resources if we are to understand and negotiate the nuanced nature of geopolitical realities.'" Is that their purpose? Would more nuanced good/evil films actually help us negotiate geopolitical realities?
p. 97 "If you listen to scientists who make models, you'd be forgiven for thinking that the value of a model is solely a function of its ability to make reliable predictions." My physics background strongly militates against this idea for plasma physics modelers, who usually argue for insight into behavior rather than strict predictability.
p. 111 "In this way, formal and informal scientific gatekeeping enforces WEIRD values onto anyone who wants to do science: those who make it to the top have tended to be those who do things in the accepted way." Some sort of study/evidence would be great here given that there are clear counterexamples and it is unclear what "accepted way" means here.]

One of my first problems is with the framing. If the book made clear from the very beginning that "models" and "Model Land" are exactly synonymous with only formal mathematical models, I would feel better about awarding a higher rating, but the beginning doesn't make this unambiguous. It states on page 1 "The frameworks we use to interpret data take many different forms, and I am going to refer to all of them as models. Models can be purely conceptual: an informal model of the casual jogger might be 'when my heart rate goes up I am exercising more intensely'." This means that talking about the "real world" becomes a philosophically complex thing, laden with (you guessed it) models. In fact, I'm not even sure what it would mean to talk about the "real world" without some type of model, informal or formal (which I take to be mathematical).

I think if you instead think of model as formal model, the book's advice and idea makes sense, but the definition quote above comes on the first page and means that you as the reader have to make that inference.

The book's thesis is that a diversity of models (diversity meaning using many different assumptions) is a great idea. However, creating all these models is not always practicable. I think the author makes a great case that it is worth the extra effort because this helps probe the explanatory scope and plausibility of differing assumptions for models. Other reviewers have noted that they see the author's use of "diversity" as narrowly construed; I think that's one possible interpretation, but I think the more charitable interpretation is that the author appears to mean it as many different backgrounds and assumptions, including political ones, based on other passages highlighting that moral and political choices are a part of models and should be discussed and adjudicated among modelers. [For example, p. 100 "Depending on your political orientation, your own scientific level and your trust in the workings of expert knowledge, you may reasonably come to different conclusions."]

The author does a good job of explaining that predictive power alone is not the be-all and end-all of models, and that being a good explanation in itself, its fit-for-purpose [having input/output straightforwardly translatable to outside of the model], being verifiable, and probing counterfactuals are all potential benefits beyond prediction. I would add that educational value [in physics, the simple harmonic oscillator] is another, such that qualitative prediction rather than quantitative accuracy is often already incorporated in people's ideas of models.

I like that the book is short and to the point for the most part. I think the author's ideas and reasoning are better drawn out on the chapters in climate and pandemic modeling because concrete examples are far more useful than generic advice. Saying we shouldn't just value mathematical models, but also expert informal models seems to me to be exactly the sort of thing that will be difficult to adjudicate in situations without clear decision criteria, which themselves are subject to criticism. Informal models can be worse because they cannot make clear distinctions, just as formal models can be too simplistic or give false confidence in accuracy. But this will not be news to anyone. Giving too much power to experts opens up bias, and finding some golden mean between algorithm and human heuristic will remain ever shrouded in uncertainty.

Still, I appreciate that the book makes clear that assumptions are behind every model, and those assumptions really do determine if a model is useful. Having many people evaluate these assumptions (hopefully with useful data to adjudicate on top) really can improve our understanding of a model and its limitations and make models more resilient.
207 reviews
February 8, 2023
Very academic in style from the start, mentioning qualitative and quantitative exits to “Model Land” — a construct I found a bit hard to really get my bearings around — this was a book I found more challenging than other recent ones. It has some interesting deep dives into how models do lead us astray, including AI, which make it worth dipping into.
Dyanna Jaye
15 reviews, 2 followers
October 26, 2023
Okay, this book makes a useful argument. Thank you Erica.

But, like many academic books, the argument could be made well enough through a much shorter summary. I'd recommend first starting with the podcast with Erica Thompson on Volts, which is great, and you probably don't need to read the whole book.
Shrike58
1,450 reviews, 23 followers
October 8, 2024
The basic question I come away from this book with is who is it really written for? It can't be written for practitioners, as I suspect that all but the most dogmatic are aware of the limitations of their models. It also feels too technical for the mythical general reader as an avenue of enlightenment about the pitfalls and issues involved in simulating real-life phenomena. Therefore, once you get past Thompson's critique of the work being done in regards to such variegated endeavors as epidemiology, climate prediction, and financial management, I suppose that the target audience is the relevant advisors to policy makers who hold the purse strings. Certainly Thompson's proposal of the equivalent of a CERN for the pursuit of better modeling of climate change does seem to be relevant and doable, assuming the will is there.

Actual rating: 3.5.
Aaron
209 reviews, 1 follower
June 6, 2024
Read as a part of the Spring 2024 CSU Statistics book club. This book strongly argues for people modeling data to remember that models are not reality and that we must remember how to successfully exit model land when we want to discuss the real world. The book is well-written with a very nice philosophical background. As a statistician, I don’t think I’ve ever thought models were reality, so this argument in the book wasn’t too impactful for me. I did enjoy the examples and general background. Students seemed to like the book as well.
Drtaxsacto
699 reviews, 56 followers
January 11, 2023
If you want to think about how models influence our lives this is a good book to do it with - with a couple of minor exceptions. We constantly want to understand and deal with uncertainty and thus try to construct things which will help us deal with those uncertainties. We have models to deal with financial markets, with disease, with climate and a host of other things.

Thompson does a good job of explaining how models are constructed and when they are useful. She also points out that, like any other human enterprise, models are constructed not just with pure mathematics. ALL models involve political and value judgments about what to count and how to weight each factor. And indeed the things we construct to help us think about what might happen next are laden with assumptions. That is where the problems arise. In public discussions about what might happen next we have begun to hear "the science is settled" - that is not science.

My concern with Thompson's book is that she does a great job of laying out the problems with value biases in models and then in her discussion of health and climate models she puts too much trust in the numbers produced by those specific models. During COVID, most experts relied on models which were correct in their math but had no consideration of alternative ways to model the same effects.

The final chapter is about how to prevent errors in our use of models (because they will always be imperfect). Thompson suggests two strategies. First, we need to be rigorous in trying to look at outliers and understand whether they have anything to offer us. Second, the model builders need to be a lot more explicit about their underlying values and policy assumptions. If model builders were a bit less involved in trying to prove their models we might all be better off.
Lloyd Downey
756 reviews
February 24, 2024
Ever since I was taught about "error analysis" with experiments, I've had a guarded view of mathematical models. And I've continued to be surprised at the way mathematical models of great complexity are employed with no regard to error.
Erica Thompson has put together a really good piece of work here in drawing attention to the ubiquitous use of models today and also emphasising that the model is not reality. The other thing she does extremely well is draw attention to the fact that even if models give us incorrect answers they can provide us with new insights and suggest further avenues for research. But, most illuminating of all, she draws attention to the fact that the output from models has to be interpreted by society and that involves certain value judgements.

I really enjoyed the book and learned a lot from it. Here are a number of quotes from the book that made an impact on me or that summarise some of the lines of her arguments:
Climate tipping points are absolutely on the radar of mainstream scientific research. It's not that we think these kinds of events can't happen, it's that we haven't developed an effective way of dealing with or formalising our understanding that they could happen. One premise of this book is that unquantifiable uncertainties are important, are ubiquitous, are potentially accessible to us and should figure in our decision-making.

Though Model Land is easy to enter, it is not so easy to leave.. Having constructed a beautiful, internally consistent model and a set of analysis methods that describe the model in detail, it can be emotionally difficult to acknowledge that the initial assumptions on which the whole thing is built are not literally true.

Box's aphorism has a second part: 'All models are wrong, but some are useful.' Even if we take away any philosophical or mathematical justification, we can of course still observe that many models make useful predictions, which can be used to inform actions in the real world with positive outcomes.

Depending on the type of model, we may have to ask questions like:
1. What kinds of behaviour could lead to another financial crisis, and under what circumstances might they happen?
2. Will the representations of sea ice behaviour in our climate models still be effective representations in a 2°C-warmer world?
These are questions that cannot be answered either solely in Model Land or solely by observation (until after the fact): they require a judgement about the relation of a model with the real world, in a situation that has not yet come to pass.

We want to be able to give a narrative of how the model arrived at its outcomes. That might be an explanation that the tank detector is looking for edges, or a certain pattern of sky, or a gun turret. It might be an explanation that a criminal-sentencing algorithm looks at previous similar cases and takes a statistical average. If we cannot explain, then we don't know whether we are getting a right answer for 'the right reasons' or whether we are actually detecting sunny days instead of tanks.

The need for algorithmic explainability and the relation with fairness and accountability, described by Cathy O'Neil in Weapons of Math Destruction, is now acknowledged as being of critical importance for any decision-making structures..... I want to extend this thought to more complex models like climate and economic models, and show that, in these contexts, the value of explainability is not nearly so clear cut.

Most real-world objects...are not close to being mathematical idealisations..... In these cases, we resort to statistics of things that can be observed to infer the properties of the one that cannot be observed or that has not happened yet:

If I can't make a reasonable model without requiring that π =4 or without violating conservation of mass, then there must be something seriously wrong with my other assumptions. ...... Our cultural frame for mathematical modelling tends to mean that we start with the mathematics and work towards a representation of the world, but it could be the other way around.

Complex models, however, have numerous outputs, so to make an ordered ranking we have to find some way to collapse all of this complex output to a single number representing 'how good it is'..... This will give a single value for each model which can then be compared with the other models......But... Which variables are important? If more than one, are they equally important.... How much does being slightly wrong matter? Should there be a big penalty for getting the prediction wrong, or a small one?...etc..
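
A toy example of my own (not from the book) of how that collapse to a single score embeds a judgement: change the weights and the 'best' model changes.

```python
# Two hypothetical models, each scored on two outputs; the ranking flips
# depending on how the outputs are weighted into a single number.
model_errors = {
    "model_A": {"temperature": 0.5, "rainfall": 2.0},
    "model_B": {"temperature": 1.5, "rainfall": 0.5},
}

def score(errors, weights):
    return sum(weights[k] * errors[k] for k in errors)

for weights in ({"temperature": 1.0, "rainfall": 0.1},
                {"temperature": 0.1, "rainfall": 1.0}):
    best = min(model_errors, key=lambda m: score(model_errors[m], weights))
    print(f"weights {weights} -> best model: {best}")
```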

For some time the guidance offered by the Bank of England’s probability forecasts of economic growth noted a conditionality on Greece remaining a member of the eurozone, so by implication they would be uninformative if Greece were to have exited......What this means is that attempting to provide a full Bayesian analysis of uncertainty in a "climate-like' situation is a waste of time if you do not also and at the same time issue guidance about the possible limitations.

The question is whether the models (books, movies, assumptions, scientific processes) we are exposed to are sufficiently varied to achieve that, or whether they have an opposite effect of making us see only through the eyes of one group of people.
Simplified historical and political models are manipulative in similar ways, embedding sweeping value judgements that not only reflect the prejudices of their creators, but also serve to reinforce the social consensus of those prejudices.

Other shared models (or metaphors) include the household budget model of a national economy, which suggests prudent spending and saving but does not reflect the money-creating abilities of national governments..... Sociologist Donald MacKenzie described the Black-Scholes model as 'an engine, not a camera' for the way that it was used not just to describe prices but directly to construct them.

My point is that, regardless of any justification or truth value, the framework by itself can be a positive influence on the actions and outcomes. Use of such a framework can convey complex information in a simple and memorable format, systematise potential actions so that they can be confidently undertaken even given uncertainty about the future.

Who makes models? Most of the time, experts make models.....Hopefully, they are experts with genuine expertise in the relevant domain:.... Interacting with the model starts to influence how the expert thinks about the real system...... If nine out of ten models do a particular thing, does that mean they are 90% certain to be correct?.....Why might nine out of ten experts agree on something? It may indeed be because they are all independently accessing truth...... Or it might be that they are all paid by the same funder, who has exerted some kind of (nefarious or incidental) influence over the results. Or it may be that they all come from the same kind of background and training.

If someone says that climate change is not happening or that Covid-19 does not exist, they are contradicting observation. If they say that action to prevent climate change or stop the spread of disease is not warranted, they are only contradicting my value judgements...... As such, most of these are social disagreements, not scientific disagreements.

If economic models fail to encompass even the possibility of a financial crisis, is nobody responsible for it? Who will put their name to modelled projections?...In my view, institutions such as the IPCC should be able to bridge this accountability gap by offering an expert bird's-eye perspective from outside Model Land.

As we are talking about decision-making, I want again to distinguish models from algorithms, such as those described in Cathy O'Neil's great book Weapons of Math Destruction. Algorithms make decisions and as such they directly imply value judgements.

The models for climate policy which assume that individuals are financial maximisers, and cannot be expected to do anything for others or for the future that is not in their own narrow short-term self-interest, are self-fulfilling prophecies.

Mathematical modelling is a hobby pursued most enthusiastically by the Western, Educated, Industrialised, Rich, Democratic nations: WEIRD for short...... The kinds of modelling methods that are most used are also those that are easiest to find funding for, and those that are easiest to get published in a prestigious journal. In this way, formal and informal scientific gatekeeping enforces WEIRD values onto anyone who wants to do science:

One of the traps of Model Land is assuming that the data we have are relevant for the future we expect and that the model can therefore predict.....The 99th percentile is a useful boundary for what might happen on the worst of the good days, but if a bad day happens, you're on your own. David Einhorn, manager of another hedge fund, described this as 'like an air bag that works all the time, except when you have a car accident'.
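
My own toy sketch of that air-bag point (not from the book): a 99th percentile estimated from calm days says almost nothing about what a crisis day can do.

```python
# Hypothetical loss process: most days come from a quiet regime, but about
# 1% of days come from a crisis regime that the calm-period data never show.
import random

random.seed(1)

def daily_loss():
    crisis = random.random() < 0.01
    return abs(random.gauss(0, 10 if crisis else 1))

calm = sorted(abs(random.gauss(0, 1)) for _ in range(1000))  # calm period only
worst_good_day = calm[990]                                   # empirical 99th percentile

realised = [daily_loss() for _ in range(1000)]               # a period including crises
print(f"99th percentile from calm data: {worst_good_day:.2f}")
print(f"largest loss actually realised: {max(realised):.2f}")
```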

Self-interest in principle should include longer-term sustainability of the system, but in practice those market participants who do not price in longer-term sustainability can be more competitive and put the rest out of business.

Where the modeller endows their model with their own values, priorities and blind spots, the model then reflects those values, priorities and blind spots back....... In this fast-moving field [of economic and financial situations] models are useful for a time - sometimes extremely useful - and then fail dramatically.

If we take the Nordhaus model at face value, what it implies is that the Paris Agreement is founded on a political belief that less climate change is better even if it costs more. ......If willingness to pay reflected value, we would find that oxygen is worth much more to an American than to an Ethiopian..... concepts of 'optimal' climatic conditions have varied over time. They are invariably produced by dominant groups who cast their own original climate as being 'optimal' for human development, on the grounds that it produced such a wonderful civilisation as their own.

As SAGE scientist Neil Ferguson was quoted as saying in 2020, 'we're building simplified representations of reality. Models are not crystal balls.'.....but no model can 'decide' what to do until the relative weightings of different kinds of benefits and harms are specified..... But in order to come to a decision about action, we must decide on some set of values.... These political processes lie outside Model Land, but they are at least as important as the mathematics.
Although all models are wrong, many are useful..... One way to escape from Model Land is through the quantitative exit, but this can only be applied in a very limited set of circumstances. Almost all of the time, our escape from Model Land must use the qualitative exit: expert judgement...... And if models are very much engines of scientific knowledge and social decision-making rather than simple prediction tools, we also have to consider how they interact with politics and the ways in which we delegate some people to make decisions on behalf of others. The future is unknowable, but it is not ungraspable.
Easily worth five stars from me.
313 reviews, 17 followers
July 6, 2023
This is a decent primer into the world of modelling and forecasting and how it can lead us astray.

To unfairly lead with my one real critique, the book is decently written and engaging, but I found that, unfortunately, it fell a tiny bit short of what I would need to use it in an undergraduate class for one particular reason: it never really unpacks and reflects on the central term. Thompson lumps conceptual models used as representations of complex systems together with predictive models used to forecast future occurrences. And, while these obviously have an /incredible/ amount of overlap, because they're treated as interchangeable throughout the book, (a) the critiques land a little more softly than they could otherwise and (b) there's not quite enough guidance for the average beginner reader to help them make sense of the many different manifestations of models that permeate our world today.

That said, I think there's a lot of /great/ material in here. I love her notion of the "accountability gap" for the harms caused by models (p. 11) and the ways models are put into positions of excessive power when humans abdicate their decision-making responsibility (p. 94). She has a beautiful example of this where

...experts can pretend their models are policy-relevant when asking for funding and support, but disclaim responsibility - 'it's only a model' - when their recommendations turn out to be suboptimal. (p. 105)


I also appreciate her pointing out the ways that models themselves can influence future conditions making their own predictions more or less likely (p. 83). Thompson also has a gift at showing all the ways we produce models inside our own heads (e.g., discussion of conviction narrative theory on p. 78 and its implications for our personal foresighting). I also appreciate her perspectival work, such as arguing that we need to question the dominance of atmospheric physics in climate science (p. 152).

Her articulation of the impact of models on our own perspectives and paradigms is also so useful. "The process of generating a model changes the ways we think about a situation," she argues, "encourages rationalisation and storytelling, strengthens some concepts and weakens others. The question of whether a model generates accurate predictions is important where it can be evaluated, but it can in certain circumstances be secondary to the way that it is used for decision support" (p. 92). And, later, she says "Just as fiction has the power to change how we think, so this mathematical version of climate fiction exerts a strong narrative pull on our political and scientific institutions" (p. 161).

To be frank, there are just a bunch of other bangers:

"There is profit to be had from incorrectly characterizing risk, and in particular from underestimating it... Before the event, tail risks are unknown anyway if they can only be estimated from past data. After the event, there are other things to worry about." (p. 124)

"...it doesn't necessarily tell us very much if all the available models agree with each other. What we would need to do truly to generate confidence is to assure ourselves that no other equally plausible models could disagree." (p. 155)

(Re: Hine's review of swine flu) "Perhaps, more importantly, she concludes that the emphasis on modelling 'reduced the opportunity for a full contribution by other disciplines' such as clinical epidemiology and behavioural science, because the implementation and discussion of theoretical models crowded out the limited time and attention of policy-makers to the detriment of other sources of information" (p. 182-183).

"The decision to incorporate some variables and not others is ultimately a sociopolitical choice as well as a scientific choice, even if the model is made before there is any political interest in it. The decision to compartmentalize in certain ways above others reflects the priorities and assumptions of the group or group making a model, or sometimes those of the people who have collected the data that become the only possible input to a model" (p. 188).

Overall, it's a phenomenal book. But, the limitation in helping the novice reader make their way through the different ways 'model' is being used is - while an advantage for the advanced reader - a real limitation to its application for, say, an undergraduate class or an audience of decision-makers.
311 reviews, 4 followers
September 20, 2025
Really enjoyed this book; it was well written and informative. I wish I had it on Kindle rather than checking it out of the library, as there was a lot I wanted to highlight and revisit. The author talks a lot about models and uncertainty, and has chapters on the Great Recession and Covid to provide examples of modeling failures.

Model Land is not an objective mathematical reality, but a social idea. Models shape the way we think about possible futures, and shape the kinds of decisions/actions to influence those futures. Where are the exits from Model Land? We need to understand and explain what went into the model, including assumptions and value judgments, and what the definition of success is. When rare events happen is it because we just got unlucky, or is the model wrong (think 100 year floods or financial crises)? “Modelers must understand that trust itself has a mathematical place in their equations.” A way to escape is to assume you have the wrong probabilities and simply downgrade your confidence that the model is right. (The author says it is not a graceful exit but it does work.) Completely ignoring/refusing to believe models is another, unattractive escape. Mathematical models are not purely objective; they represent personal judgments about relative importance. “Models are dependent on the education, interests, priorities and capabilities of the model maker.” Many times escape is through expert judgement. Provide qualitative guidance that states some failure modes without defining probabilities. Models that are testable, improvable and ultimately reliable (ballistics, radioactive decay) are rarely in doubt. Escaping becomes a question of understanding the timescale or other circumstances on which our forecasts become less useful and eventually irrelevant. We can take action to reduce risk, assuming the model is not perfect at predicting the future.

In her final chapter Dr Thompson presents five principles for responsible modelling, along with questions to ask the modeler. 1. Define the model’s purpose. 2. Don’t say “I don’t know.” Give up the prospect of perfect knowledge, but recognize models can provide insights about the situation. 3. Make value judgments and explain why you made them. 4. Write about the real world. Get out of Model Land and own the results. Explain how the model is inadequate or misinformative. What important processes does it fail to capture? Translate model results for non-experts. 5. Use many models. What other models of this situation exist? How might someone with a different disciplinary background or personal/political interests create a different model?

Some random notes I took from the book: Quantification of uncertainty in complex systems (and their models) may be impossible. Pick the right “reference class”. Unmodelled phenomena may result in outcomes beyond the range of model outputs, and unquantifiable in principle as well as practice. “Experts should provide ‘uncertainty guidance’ to clarify where the model may be inadequate, and how quantitative outputs should be used in practice.” Sometimes we have to make decisions without sufficient info to do so with mathematical reliability. “In Model Land the computer has a clear and unambiguous aim: win the game. Outside Model Land the game itself is unclear and to some extent we are making the rules up as we play.” Establishing “values” in a model is a human judgment. The process of establishing political values lies outside Model Land, but they are “at least as important as the mathematics; generally much more important.”
11 reviews, 1 follower
November 7, 2023
Extremely long-winded. The 200+ pages lead to a very low-hanging-fruit conclusion, which is: "we need more diversity in the modeling community". Additionally, its arguments were all over the place. The author claims that the modeling community is heavily dominated by WEIRD (Western, Educated, Industrialised, Rich, Democratic) professionals, which is interesting given that: 1) It would be very difficult to find someone who would be ok with hiring a person with inadequate education to build, interpret, and communicate complex models/algorithms to policy and decision makers. 2) I am not sure if the author is applying this acronym to academia specifically, but Quant teams in most corporate companies are dominated by professionals educated from countries such as India and China, so the "Western" element in the acronym is debatable as well.

Diversity of viewpoints and diversity of backgrounds are repeatedly hammered as the solution to the potentially biased view points of academic elites, but near the end of the book on page 217 the author concludes "I have suggested that science would be both more effective and more trustworthy, and likely more trusted, if those experts were selected to maximize the diversity of perspectives rather than to optimize for a certain limited kind of expertise"...so we are back to experts again, but this time experts with diverse professional backgrounds but not with too much education as per the acronym?

Some arguments made by the author suggest a lack of experience outside of academia. The author claims that algorithms should focus more on "caring" about the individual rather than purely optimizing on a hard (oftentimes financial) metric, but companies are too short-sighted/budget-conscious to pursue this venture. Where this argument falls apart is that concepts such as "care" and "altruism" are highly abstract and thus cannot be represented by a single variable or feature. They are typically mathematical composites of a series of characteristics. For companies to capture this, they would need even MORE data to train algorithms with. In the author's mind, customers are enthusiastically throwing this private information at companies and it is the corporations that are at fault for ignoring it in pursuit of short-term gains. Following along this argument would leave you wondering how slow the author's personal computer must be from all the cookies willfully accumulated.

The book does offer some useful commentary on the issues with mathematical models (listed below, with a quick sketch of the first point after the list); however, they are not unique to this book.

1) Over-fitting models
2) Vulnerability to Black-Swan events
3) Bias in sample and feature selection
4) Models are used to forecast the future, but more importantly, the interpretation of their output can impact the future as well.
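
As a quick sketch of the first point (my own toy example, not drawn from the book), a flexible model can fit its training data almost perfectly while generalising worse than a simpler one:

```python
# The true relationship is linear with noise; a degree-9 polynomial chases
# the noise in 10 training points and typically does worse on fresh data.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.3, size=10)
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + rng.normal(0, 0.3, size=200)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```
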
28 reviews
April 2, 2024
This book provides an overview of how mathematical modelling can be used, and most importantly misused, to describe the world around us. Firstly, broad concepts which need to be considered during model construction are introduced, before the book provides some deep dives into the areas of climate modelling (the field of interest for the author), economic and market modelling, and forecasting of epidemiologic events.

The book’s central thesis is that we translate our real world into a simplified abstraction, which we call a model. Due to our simplification, we lose information, and part of the challenge is deciding which information is most important to retain. Once we have ‘modelled’ our world, we must then translate these results back into useful information that we can act upon in the real world, while recognising the limitations we have necessarily imposed on our model. This final action is reflected in the title of the book.

There are lots of useful concepts explored here, including perspective bias among those that create models, clearly defining the expected output from a model so that we can be sure of its intended purpose, determination of near-term versus long-term value, and addressing model transparency to improve public confidence in decisions made using these methods. Several of these topics are ones that I have come up against within my own profession, and it is very useful to see these being talked about and offered up for examination.

While the ideas submitted are strong, the overall presentation can feel a little bit weaker. Firstly, it feels like the writing attempts to simultaneously appeal to audiences with different levels of scientific and mathematical knowledge, producing some inconsistencies in the level of detail provided. This leads to a 200+ page book which ideally could either have been condensed for some audiences, or expanded into a more detailed work for others.

Additionally, the structure of some sections doesn’t flow effectively, which can make for a disjointed read. Some more effort to craft a throughline and create a clear narrative would have assisted with readability.

There is also a heavy reliance on analogies to explain more scientific or mathematical concepts. Again, I believe this might be a symptom of trying to appeal to two sets of audiences. For those who work within these fields, these analogies can obfuscate the point being made, but for people from other backgrounds they might be helpful.

Nevertheless, it is the ideas that shine through here, and these are important considerations which only become more vital as society places more faith in mathematical modelling to divine the future. I suspect that this will become a regular reference, although I will look to supplement with other works which focus in detail on some of the points raised here.
412 reviews · 16 followers
March 23, 2024
A thoughtful look at modelling by an experienced climate modeller.

What are models for? The most common answer would be "to predict the future behaviour of some system," but Thompson argues a far more subtle line: that the most important models often fail to be predictive in any real sense. Much of this is down to problems of validation, especially in climate models for which we have no experience of the world the models are trying to predict.

An even more subtle mistake is regarding all models as "cameras" that simply observe the world. That's true for the more abstract kinds of modelling, where one is trying to understand possible behaviours of systems in general without tying them to specific circumstances. But the models with which most people are familiar act more like "engines" that can perturb the very system they purport simply to observe, by being used as drivers for policy. Climate and epidemic models seek to warn as well as to predict and understand, but this exacerbates the problems of validation: if the model's predictions don't come to pass, perhaps that is because policy-makers took corrective action in response, or maybe because they didn't intervene effectively enough. This isn't a reason to give up on modelling altogether: how else are we to understand complex systems, and how else are we to respond rationally to them? But it does mean that the notion of "following the science" is problematic.

Thompson also wrestles with the problem of groupthink amongst modellers, who often share a common, overlapping background. I agree this is a problem, but the idea that we can easily increase diversity in the community seems flawed to me. Modellers share a scientific viewpoint and a belief in modelling, and no one who doesn't will ever be able to engage effectively with the models or their arguments. Perhaps it's enough that scientists are always advisors and never decision-makers, leaving politicians to deal with the integration of different choices and values – although that split isn't always appreciated by the public, and is often (as in the covid-19 pandemic) deliberately blurred to allow less-trusted politicians to draw credibility from more-trusted scientists and doctors.

Overall I think this is a lucid and valiant attempt to summarise and explore the benefits and limitations of models, and science in general, when it impacts directly on the wider world. It deserves to be widely read in the scientific community so that we can better understand our place in policies that we often unavoidably have to influence.

Profile Image for Manuel Del Río Rodríguez.
134 reviews · 3 followers
April 24, 2023
In Escape from Model Land, the author makes a case about the limitations of mathematical modeling (she herself being a professional modeler), along with some suggestions on how to deal with those limitations. The main idea is that models are not reality and have limited use as reliable instruments for predicting the future and for dealing with uncertainties (some of which can be quantified, some of which cannot). Besides, all models come with a baggage of ethical, social and political values that shape them.

The 'escape' from Model Land does not rely on abandoning models, which are valuable and have their uses, but on trying to avoid the issues they generally come with. Two main 'exits' from Model Land are explained: 1) the quantitative, in which we compare our model against new, out-of-sample data; 2) the qualitative, in which we make and rely on expert judgements about the quality of the modeling (easier said than done), also incorporating outside, 'real world' opinion. There should be a partnership between model outputs and human insight. Still, problems that can appear here are subjectivity, cultural bias and 'pass the parcel' accountability issues.

The book goes on to explain the limitations and constraints of models, their dual uses as performative and predictive tools, the lack of accountability they tend to show, as well as exploring them in the fields of finance, climate and epidemiology.

Overall, it is a very interesting and informative book that I recommend. My only complaint is that it shies away too much from being a little more 'mathy' and technical, as books for a general audience often do. It would have been very illustrative and useful to have some actual examples of mathematical modeling and of how a model can be mathematically constructed. The book is also pretty short (c. 200 pages), so it could easily have gone in one of two directions: flesh out the models more technically (for a higher page count), or be a little more synthetic and just emphasize the main arguments (in which case it would have fitted into a couple of articles and/or blog posts).

I took some notes / summaries of each chapter, which you can find in the online Google Doc where I record notes on some of my (denser) reads:

https://docs.google.com/document/d/1-...
40 reviews
August 1, 2024
More suitable for an essay or two than a full-length book, IMO.

Basically, the book argues that a) mathematical models should be used with caution and humility and b) their social and political context matters. This is a super valuable point! But the book kind of just says that over and over again. Ways that this could be remedied:

- More emphasis on specific examples and case studies. The book does contain a fair number of solid examples, but far more of it is vague, general rhetoric. This would be fine if the book were focused on building a theory or framework of some kind, but it's not.

- A clearer philosophical stance. For example, in the introduction, the book acknowledges that "model" is actually an extremely broad category, including everything from a set of PDEs to an informal, purely conceptual model ("an informal model of a casual jogger might be: 'when my heart rate goes up I am exercising more intensely'"). But the argumentation in the rest of the book seems oriented specifically to formal models, and seems to identify intuitive/implicit models ("expert judgment") as the alternative to these kinds of models. This is fine, but I wish that distinction were identified explicitly so that the tradeoffs were clearer. More generally, the book just doesn't seem to offer any particular view besides that formal models are highly context-sensitive.

- More emphasis on the alternatives. The book describes a bunch of pitfalls of formal models while offering hardly any evidence that the alternatives are any better. I know it's a common critique to say that something is pointing out problems without offering solutions, but in this case that is the biggest reason why this isn't quite a refined enough narrative to be a whole book. Like, one could surely build a case for using expert judgment instead of formal models under a variety of circumstances. There's plenty of research on the topic. Why doesn't this book do that?

The statements "you should downgrade your confidence in mathematical models" and "modeler diversity is important because mathematical models are informed by their social context" are valuable, but IMO this book doesn't do quite enough to justify spending 223 general-audience pages on them.
Profile Image for Ray.
369 reviews
September 5, 2023
I was excited about this book since it had so much potential. I could hear about a wide variety of models and how they're being used today to better our lives and how to improve on those models. Great refresher on the limitations of models, such as single-purpose models, initial conditions, subjective ideas inserted by modelers and their biases. Great discussion on the relationship of models to the public and how those models are used/described, which are major factors in changing behavior when it comes to climate change and COVID-19 models.

However, this quickly became a book with a fairly narrow perspective. I was hoping to hear about models from a wide range of industries, but the large majority of the discussion was about climate change, COVID-19 and financial models. The summary should've at least mentioned that. On top of that, there wasn't much in-depth discussion of the models themselves, probably because of their complex nature, but rather a discussion of their conceptual shortcomings and limitations. There was a missed opportunity to talk about the huge increase in data within the last few years and how that contributes to these models, because data is changing the modelling world very quickly.

It was certainly informative, but I'm not sure who the audience would be. People that work in data/models already know about the shortcomings of models, so I guess this would be a good reminder for them. People that are not in those roles would probably not be too interested, because it is fairly niche, as mentioned above, and uses a good amount of jargon. I understand why Thompson says "escape from Model Land" or "return to Model Land", but it just seems like a roundabout way of saying "comparing models with reality" without actually saying it.

Recommended for people working with models or those interested in these topics. Helpful for communicating models and results to others, especially the limitations of models.
47 reviews
October 13, 2024
A little unstructured and repetitive, as is the case for most books like this. Essentially the book is a caution against overreliance on models. Unfortunately, it doesn't offer readers much on the practical steps that modelers or experts can take.

This quote was pretty funny... and probably sums up the whole book.

Paul Wilmott and Emanuel Derman included in their Modeller's Hippocratic Oath the following: "I will remember that I didn't make the world, and it doesn't satisfy my equations"

The other idea is the importance of recognising that a model should be fit for purpose, and not just theoretically beautiful.

Finally, another good quote was about the analysis of residuals:


When you subtract one complex, many-dimensional dynamical system from another, you do not get simple random noise. If you did, you could model that noise statistically and gain an understanding of the average error in different circumstances. Instead, what you get when you subtract climate from climate model is a third complex, many-dimensional dynamical system, which has its own interesting structures, correlations, patterns and behaviours. As put by eminent meteorologist and climatologist Brian Hoskins, it is not noise, but music.
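
As a toy version of that point (my sketch, not from the book; the real claim is about climate, not this little system): subtract one chaotic trajectory from a slightly mis-parameterised copy of itself and the residual is a structured signal, not white noise.

```python
# Toy residual-structure demo (illustrative only): "reality" minus a slightly wrong "model".
import numpy as np

def lorenz_x(rho, n=20000, dt=0.005):
    """x-component of a Lorenz-63 trajectory via crude Euler integration."""
    sigma, beta = 10.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

residual = lorenz_x(rho=28.0) - lorenz_x(rho=28.5)   # "reality" minus "model"
lag1 = np.corrcoef(residual[:-1], residual[1:])[0, 1]
print(f"lag-1 autocorrelation of the residual: {lag1:.3f}")  # close to 1, nothing like white noise
```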

There was also a small point made about Nate Silver's Trump/Clinton forecasts. She was skeptical of Silver's defense that it was a probabilistic forecast and that 538 performed better than other sites. She is well aware that you can technically aggregate probabilistic forecasts of binary events and assess the accuracy of the forecasting, but she still expressed doubts that different forecasts are comparable, because they are all so idiosyncratic. It's definitely a thought worth considering. Superforecasters don't seem to see that as an issue.
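
For reference, the standard way to score probabilistic forecasts of binary events is a proper scoring rule such as the Brier score; a minimal sketch (the numbers below are hypothetical, just to show the mechanics):

```python
# Brier score: mean squared gap between forecast probability and the 0/1 outcome.
# Lower is better; always saying 0.5 scores exactly 0.25.
def brier_score(forecast_probs, outcomes):
    assert len(forecast_probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

forecasts = [0.71, 0.30, 0.90, 0.60, 0.15]   # e.g. P(candidate A wins) in five races
results = [1, 0, 1, 0, 0]                    # what actually happened
print(f"Brier score: {brier_score(forecasts, results):.3f}")
```

Whether scores like this make forecasts from differently built models genuinely comparable is exactly the doubt raised above.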
Profile Image for Jessica Dai.
150 reviews · 68 followers
December 26, 2023
I really wanted to like this, or maybe I had high expectations because I wanted to know about uncertainty [quantification] and how it should shape decision-making, but this really did not address it. (Though to be honest i'm not sure who the target audience is... the substance is too high level for technical people and the writing is way too dry for anyone who doesn't really, really want to know about mathematical modeling lol)

as the top reviewer points out it's "ideological" - I wouldn't mind this if it were well done (and I think between thompson and this reviewer, I probably agree way more with thompson on politics in general; and the standard way of writing about math/science is certainly also ideological) - but the problem is that it's done in kind of a ham-handed way where, like most of the rest of the book, the interesting bits don't get dug into and the rest gets smoothed over in sweeping generalities. I know authors don't always control blurbs but I probably should have known... "models reflect the biases, perspectives, and expectations of their creators" - where have i heard that one before lol???

you should read climate science crystal ball instead -- also by someone who builds and works with scientific models but that, I think, really engages with (a) constructive/prescriptive ways in which those models can be used and (b) alternative epistemologies and how to take action based on them
16 reviews
August 30, 2023
It is somewhat surprising that a book on mathematical models can contain no mathematics. Dr. Thompson communicates heavily with analogies, which themselves (along with the whole book) represent conceptual models of mathematical models. This, it may be claimed, allows her to equip the innumerate to criticize (and ultimately discredit, if they so choose) the mathematical models that they briefly come into contact with in their daily lives.

However, Dr. Thompson's metamodel of mathematical modeling is subject to all the limitations, foibles, and dangers that she lists for models in general. This is compounded by the fact that her conceptual models (that is, analogies) lack any mathematical basis or language. To see why this compounds the problem, consider that in the presence of numeracy, many of the mountainous perils of mathematical modeling become molehills. But no amount of mathematical education can redeem Dr. Thompson's metamodel from its failings.

This is most humorously demonstrated when, of the five things Dr. Thompson recommends (I suppose, in a perverse way, they are "outputs" of her metamodel), none of them seem to include the model consumer moving from innumeracy to numeracy. Indeed, they are simply equipped with more intelligent-sounding reasons to ignore things they never wanted to believe in the first place.

It is a pity, as Dr. Thompson and her metamodel seem to have been unable to effect their own escape from Model Land.
99 reviews · 2 followers
December 26, 2022
A refreshingly constructivist account of scientific knowledge and communication. This book is at its best when it is most abstract, and it really nicely describes some of the most significant issues with how we think about science today. My major complaint is that the practical chapters/examples were too limited. Covid and climate change are the obvious examples that have (popularly) motivated this kind of understanding of science as it relates to life, but it was too bad that she was less well versed in, or did not dive into, the next step of this issue: ML/algorithmic decision-makers that are already massively affecting our lives based on the kinds of meta-assumptions and embedded values that she talks about with disease and climate models. Her “pragmatic constructivist” (?) framing would absolutely be useful for thinking about these problems. It was also too bad that she didn't explicitly reference Latour, whose sociological approach to science she seemed to effectively expand beyond the laboratory to the larger political/social/scientific system (to be clear, I need to read more than a few odd articles by Latour; on the list), and Peirce, whose semiotic understanding of science (I'm like a broken record) is slightly differently flavored, but I think instructive for resolving some of the dialectics Thompson identified but basically just shrugged off.
42 reviews
May 7, 2025
5 Stars – A Brilliant Wake-Up Call for Model-Driven Decision Making

Escape from Model Land by Erica Thompson is a must-read for anyone who works with data, relies on models, or makes decisions based on “expert predictions”—which, in today’s world, is nearly all of us. Thompson deftly dissects our growing dependence on mathematical models while making a compelling case for humility, transparency, and critical thinking in the modeling process.

What sets this book apart is its accessibility and depth. Thompson doesn’t just explain how models work—she reveals why they can mislead, even when they’re technically sound. With sharp insights and a keen awareness of real-world impact (from climate change to economic forecasts), she shows how models reflect the values, assumptions, and blind spots of their creators.

This isn’t an anti-science book—it’s a thoughtful call for responsible modeling. Thompson invites readers to “escape” the perfect, idealized world of models and return to the messy, complex reality where decisions are made and lives are affected. Her writing is clear, elegant, and often surprising, making abstract ideas come alive.

Whether you’re a scientist, policymaker, analyst, or just someone trying to make sense of the numbers shaping our world, Escape from Model Land offers wisdom and guidance we desperately need.
52 reviews · 1 follower
May 22, 2023
Are Mathematical Models Legit?

As I was reading through the book I began to wonder when the author was going to stop complaining about mathematical models and present concrete alternative approaches, rather than vague hand-waving about getting all stakeholder opinions represented. The more I read, the more I got the feeling the complaints were constantly recycled to make a full-length book of open-ended complaints with no concrete, achievable solutions suggested. The main complaint was that mathematical models inherently reflect the prejudices, selfishness, and corruption of the model creators, and as such do not address all the realistic concerns associated with a model's use. The author's solution was either to jettison mathematical models entirely in favor of a committee of diverse human "experts" navel-gazing at whatever real-world data is available to produce a "consensus" of what the data is telling us, or to have that committee adjust model outputs based on a consensus of their expertise. My sense is that adding more diverse human prejudices, selfishness, and corruption to mathematical model output, or replacing it entirely, would not make for more reliable decisions… just slower ones.


Profile Image for Ergative Absolutive.
641 reviews · 17 followers
March 12, 2023
This book did an excellent job at what it intended to do. I think it crystallized a lot of the problems with practices and uses of statistical modeling and interpretation that I've thought about quite a bit--especially through the covid years--but not quite been able to put into words. Unfortunately, I doubt that the people who most need to read this book--that is, the people who build the models and see themselves as maximally rational makers of science, or the policy makers who (ostensibly) use those models to inform their decisions or (more usually) pick and choose the models that support their pre-determined decisions in the name of 'following the science'--are actually going to read it and benefit from it through changing their thinking.

More likely it's going to be read by people who already mostly agree with it, and who will like it and nod and say, 'Ah yes, that's what I've always felt but couldn't quite articulate.' (Which, in a way, is almost like politicians choosing the models to 'support' their decisions by finding which models give them the results they like.)

I could be wrong, of course. I hope I am. But I doubt it.
Profile Image for Sophie.
67 reviews
January 4, 2024
Really a delightful book, helping to organise the many reasons not to trust models into thoughts that can be vocalised, and a plea for scientists to grasp that they aren't offering objective truths. I enjoyed the various references to literature, but think the book could have benefited from evidencing more specific case studies. Nevertheless, an enjoyable read!

Somewhat of an aside: this book really reminded me of an interaction I had with a fellow budding mathematician not too long ago. We were debating whether or not a character in a TV show had a specific quality, and one of his arguments against it was that the character appeared in too few scenes for there to be enough variance to give me sufficient evidence that the quality was real. By that logic, did he have enough scenes to support his claim that I was wrong? Safe to say, I think he might have found a way to live in Model Land! One of many examples of us mathematicians taking our own models of the world a bit too seriously, and trying to apply the concepts used in models to seemingly everything.
287 reviews
November 6, 2025
Models serve as vital tools for understanding complex systems in various fields, such as hydrology and climate science, by simplifying reality to aid decision-making. However, overreliance on models risks obscuring their limitations, leading to misplaced confidence in their outputs. It's crucial to recognize the inherent assumptions and simplifications in any model, as they profoundly influence the results and their application in policy.

Thompson emphasizes the importance of diversity in modelling teams, as biases from their cultural and educational backgrounds can significantly shape the models' design and interpretations. The term 'WEIRD' highlights how many model-builders may overlook critical contexts relevant to diverse stakeholder groups, potentially leading to flawed or unrepresentative outcomes. Recognizing and mitigating these biases is essential for creating more inclusive and accurate models.

The book concludes with principles aimed at fostering responsible modelling practices, such as clearly defining the model's purpose, making assumptions explicit, and recognizing value judgments in the modelling process. Engaging stakeholders and incorporating various inputs alongside models can enhance decision-making and ensure that models serve as useful tools rather than definitive answers. Ultimately, a critical engagement with models encourages humility and adaptability in their application.
