Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions

American taxpayers spend $30 billion annually funding biomedical research. By some estimates, half of the results from these studies can't be replicated elsewhere—the science is simply wrong. Often, research institutes and academia emphasize publishing results over getting the right answers, incentivizing poor experimental design, improper methods, and sloppy statistics. Bad science doesn't just hold back medical progress, it can sign the equivalent of a death sentence. How are those with breast cancer helped when the cell on which 900 papers are based turns out not to be a breast cancer cell at all? How effective could a new treatment for ALS be when it failed to cure even the mice it was initially tested on? In Rigor Mortis, award-winning science journalist Richard F. Harris reveals these urgent issues with vivid anecdotes, personal stories, and interviews with the nation's top biomedical researchers. We need to fix our dysfunctional biomedical system—now.

281 pages, Kindle Edition

First published April 4, 2017

About the author

Richard F. Harris

1 book · 14 followers

Ratings & Reviews

Community Reviews

5 stars: 212 (25%)
4 stars: 400 (48%)
3 stars: 170 (20%)
2 stars: 32 (3%)
1 star: 6 (<1%)
Displaying 1 - 30 of 123 reviews
David Rubenstein
867 reviews · 2,788 followers
January 25, 2019
The title of this book, Rigor Mortis, refers not to the death of humans but to the death of rigor in science - specifically, medical science and biochemistry. The book goes into considerable detail about why so many research studies are not reproducible.

According to the book, the wasteful use of money to generate useless, incorrect, unreproducible research is a major contributor to the problem. The reasons are varied. One is that academia encourages publication of incremental, insignificant advances rather than significant increases in understanding. Quantity is encouraged, not quality.

Also, even after a publication has been retracted, it can be cited in the literature hundreds of times and even assumed to be correct. Researchers are sometimes intellectually lazy, unwilling to accept that a hypothesis is wrong even after it has been disproven.

Then, there are the great technical difficulties in doing some of this research. Sometimes, the results of an experiment can depend on how a test tube is cleaned, how briskly a chemical is stirred, or how similar or different the genetics are of a set of mice.

Sometimes, the lack of money can be an issue - for example, not being able to afford verification of the type of cell that has been purchased from a biochemical company, or using a sample of animals too small to yield statistical significance.

And, sometimes, experiments are simply designed poorly. The p-value test of statistical significance is often misused, and intellectually lazy researchers sometimes formulate their hypotheses after performing an experiment. This problem is reminiscent of a famous quote by Richard Feynman:
“You know, the most amazing thing happened to me tonight... I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!”
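The license-plate fallacy the quote describes is easy to show with a short simulation (a hypothetical sketch of my own, not from the book; the function name is invented): if a study measures twenty unrelated outcomes on pure noise and reports whichever one clears p < 0.05, a "significant" finding appears most of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def fishing_expedition(n_outcomes=20, n_subjects=30, trials=2000):
    """Fraction of pure-noise experiments in which at least one of
    `n_outcomes` unrelated measurements comes out 'significant'."""
    false_hits = 0
    for _ in range(trials):
        for _ in range(n_outcomes):
            # Two groups drawn from the SAME distribution: no real effect.
            a = rng.normal(size=n_subjects)
            b = rng.normal(size=n_subjects)
            if stats.ttest_ind(a, b).pvalue < 0.05:
                false_hits += 1
                break
    return false_hits / trials

print(fishing_expedition())  # roughly 0.64, i.e. about 1 - 0.95**20
```

With enough candidate hypotheses, something will always look remarkable after the fact, which is why formulating the hypothesis after the experiment is so dangerous.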
aPriL does feral sometimes
2,203 reviews · 542 followers
January 7, 2019
'Rigor Mortis' is written by Richard Harris, a three-time winner of the AAAS Science Journalism Award. The book is a general survey of sound research practices that are not being followed, and of common practices that are still followed despite proven shortcomings, by many scientists and laboratories conducting biomedical studies. It pulls together in one place a lot of disparate information that has been printed here and there, off and on, over the previous few decades. The book is written for the general science reader in accessible language, and the back matter includes an extensive Notes section with source material as well as an index. It is a professional and genteel read.

My review is not any of that. *Ahem*


It appears scientific research currently happening in biomedical laboratories, particularly academic ones, is failing or stalling because of a variety of pressures - described in more detail, with source information and illustrative stories, in the book - almost all of which, I think, have a lack of money as their root cause: a funding shortage created by institutionalized, purposely designed competition for scarce money. So there have been no new breakthrough drugs since the 1980s despite lots of announcements of so-called new discoveries. The whys of biomedicine 'vaporware' for the last thirty years are outlined very persuasively, and calmly, in this book.

Shortcuts are being taken by scientists in order to be the first to submit papers to prestigious journals, e.g., Cell, Nature, Science. When the initial basic-research discoveries fail to be duplicated in other laboratories, or when incorrect published results propagate through scientific laboratories all over the world, tremendous amounts of money and time are wasted in fruitless follow-up research by others attempting to develop a drug, serum, vaccine, or process from erroneous information about, for example, how a gene or set of genes works, how an enzyme functions, or how a new protein or cell type behaves.

Long before medical testing on humans is done, long before a medical product is sold over the counter or used to cure people in hospitals and clinics, scientists experiment on cells and animals in laboratories. The pressures to succeed are enormous today. Universities and private laboratories need money to support their existence, and scientists need publicity and celebrity to advance their careers and earn grants for more research. No longer is it considered a good to follow wherever curiosity leads. Looking for cures for cancers of the liver, breast, or ovaries, developing new vaccines, or finding cures for neurological diseases such as ALS is wholly dependent on basic laboratory research as a first step. But in the social pressure cooker of professional prestige and the public face of celebrity scientist-saviors, corruption has crept into some scientific laboratories in the pursuit of funding, grants, and money.

Yet production facilities further down the line gear up for mass production of approved biomedicines as the NIH and other watchdog organizations sign off on fast-track approval, especially if government scientists have scanned the developer's published output in respected journals. The medicine is then released for public use in hospitals and by doctors, all of whom assume the testing has been thoroughly done and the side effects thoroughly vetted. (This is not in the book, but from other reading I have done.)

As is described in the book, decades can be wasted by other laboratories trying to figure out the production or mainstreaming possibilities of how the new whatever can fix cancer or a neurological disease. Institutional resistance to changing course, admitting something is going wrong, or calling out fellow scientists for announcing an erroneous result also contributes to wasted effort and money. Worse, the journals that judged the original papers worthy of publication are reluctant to print retractions for any reason.

Chinese scientists are arguably becoming the worst offenders at publishing erroneous articles, but many prestigious universities are also failing in their duty to help humanity because of their malfeasance. Meanwhile, institutional and governmental watchdog departments are being gutted by, for example, Republican Party government budgets in America (not in the book; this comment is on me, acquired from reading other sources).

Problems with determining whether experimental results are correct are not primarily because people are intentionally screwing up - simple human nature causes many problems, such as seeing what you want to see and not seeing what is actually there, or exhaustion and lack of resources leading to shortcuts, such as 'cooking the books' of an experiment by excluding results that changed the expected hypothesized outcome, or changing the hypothesis to match the finished result. Of course, straight-up fraud is not unknown, either.

Relying on animals, especially designer mice, is endemic in experimentation, but it is commonly known by all scientists that what works in animals often does not work in people. There are hidden problems affecting the results of experiments, such as the particular lab equipment used, as well as odd one-time effects caused by unnoticed environmental elements, such as the temperature or lighting in the lab. These issues have been discovered only after much backtracking and confusion once another lab was unable to duplicate a result.

There are new techniques involving mathematical proofs, but many scientists are not proficient in statistics or logic; they use the recommended formulas and misread the results as showing a successful experiment. This is described in more detail in the book.

Checks-and-balances systems are a good thing, but the rules and regulations designed to keep science honest only work if they are followed. Having another laboratory duplicate results before submitting a paper for publishing is an excellent check on results.

In the end, who pays for scientific narcissism, institutionalized pride and lack of safeguards? You and me, gentle reader. Do not expect your loved one's incurable cancer, or Alzheimer's, or ALS, or liver disease to be cured soon. Despite this book and the information it has gathered together, despite the widespread knowledge among scientists of how science is going wrong in many laboratories, as well as the fact most scientists want to help people, bad science is not being corrected.

Do I seem angry? I am. I am an elder-ish person living in a senior park. In the thirteen years I have lived here, I have seen four people die of cancer, one person who had her polio return so she went to assisted living, several people who suffered for five years or so with dementia of various types, and a couple of folks who dropped from heart attacks and lung diseases. Before they succumbed entirely to their diseases, there were multiple visits by concerned adult children and firetrucks and ambulances. Very likely millions of dollars were spent cumulatively by patients on medications and treatments, some of which were useless or more harmful than the medical condition they were meant to cure, hold in abeyance or give palliative care. The author of this book is not angry, gentle reader. I am bringing it, though.

>: @




It is a good, informative book, gentle reader.

The chapters are:

Begley's Bombshell
It's Hard Even on the Good Days
A Bucket of Cold Water
Misled by Mice
Trusting the Untrustworthy
Jumping to Conclusions
Show Your Work
A Broken Culture
The Challenge of Precision Medicine
Inventing a Discipline
Jeanette
4,091 reviews · 839 followers
May 18, 2017
This is a difficult book to review, as many of the terms a review would use are not easily translated into elemental language for those who have little knowledge of cell categories and the other extremely specific particles and culture lines used in research - quite apart from the animal (usually mouse) models and other aspects of the research environment that carry no common connotation for those outside these technical fields and innovative drug-research facilities. But I'll try.

Suffice it to say that the book is a piece-by-piece, exact definition of what "rigor" is and how it has to (NEEDS TO) be found again. Not rigor in the sense of a conduit for job security, a cultural quest for so many publications per decade, or an avenue toward increased funding. Even studying a category of information in minutiae, experiment by scientific-method experiment, fails when some or all of the prior points of study are "accepted" consensus yet wrong (inconsistent, not constant, variable, not exact, etc.) - then any and all of the empirical proof is nil.

The chapter on ALS was heartbreaking. As was the muscular dystrophy story, the HIV-drug-prospect example that killed its test cases via liver cancer, and the ovarian cancer blood test that wasn't. But more than the mistakes, it is the number of experiments and the millions upon millions spent on a "wrong" premise with "wrong and altered" material. The experiment is useless from the get-go, regardless of being "published". It would be better to test AGAINST your own premise of improvement or change: failure is where science has progressed most in the past, but now the set-ups are nearly the opposite. As little as 11 to 16 percent of published, "successful" research can afterwards be duplicated or supported in near measure elsewhere in a second or third experiment - so 80-plus percent is fixed or skewed in results or format. One example was a "breakthrough" that used 4 to 8 mice of supposedly "exact" DNA sequence: far too few mice, not applicable to natural DNA differences - and maybe 4 other reasons the results will not be supported on repeat, or will be inconclusive as a true "scientific" experiment.
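The complaint about 4-to-8-mouse "breakthroughs" comes down to statistical power, which a quick simulation can illustrate (my own sketch, not from the book; the function name and numbers are illustrative): with 6 mice per group and a real effect of half a standard deviation, only a small fraction of experiments reach p < 0.05, so a single published "hit" is unlikely to repeat.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def replication_rate(n_per_group, effect_size, trials=5000):
    """Fraction of simulated experiments reaching p < 0.05 when the
    treatment truly shifts the mean by `effect_size` standard deviations."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if stats.ttest_ind(treated, control).pvalue < 0.05:
            hits += 1
    return hits / trials

print(replication_rate(6, 0.5))   # tiny cohorts: power roughly 0.1-0.15
print(replication_rate(60, 0.5))  # larger cohorts: power closer to 0.8
```

The flip side is just as bad: when an underpowered study does clear the significance bar, the effect size is almost certainly overestimated, which is one reason follow-up labs cannot duplicate the result.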

In reality, very little breakthrough science has occurred since the 1980s. Not just in medicine, either.

This is a time of echo chambers throughout the wider culture - not just in education, or politics, or the research lab. And in that last, full disclosure of the TRUTH has been entirely lost, not just by intent but through the sheer volume of factors that must be dependably exact. For instance, nearly every "same" experiment on the exact same group of cancer cells is not the "same": with such widespread use across research teams, the cancer cell lines - labeled with 5- and 8-digit designations (their labeled names) - have been contaminated. Or the experiment is "different" in temperature, surroundings, or even acoustics.

It was shocking and worrisome to me to read about the IVF cultures especially. And also because I had read the Henrietta Lacks HeLa-cells story and biography: the cell line is not at all what it was, nor will it be the same in 3 or 5 years as it is now.

What I learned is that there are HUGE amounts of wasted money on useless threads of "research" founded upon untrue priors. Plus all kinds of intricate details within labs, like the ones I worked in decades and decades ago in pharmacy. For instance, this one: you can't even use different lab assistants in some mouse experiments. WHY? Because mice react chemically very differently to male lab assistants than to female lab assistants. They are so terrified by a male pheromone that their body chemistry changes when a male handles them.

Also all kinds of other tidbits of specific cases where it went wrong and then millions were spent to become even wronger. Scientific method has been drowned in cultural and physical components that no longer give valid results which can be duplicated elsewhere. The stats are abysmal. In France, nearly every experiment is cut off quickly and funding stops if duplication of results fails or is inconsistent (though often, if the failure itself can be duplicated, it might be insightful). That's not the case in the USA, in either governmental or private drug-company research. The cases of TERT and of the "brown fat" find are both examples of how "true" is not "true" at all, but only another choir singing to its own echo chamber of "it must be because we want it to be" and "we believe it has to be".
Jim
Author · 7 books · 2,089 followers
January 19, 2019
I'm intrigued & aghast by this book. I thought the results of biomedical experiments I read in Science, Nature, or other journals were accurate. Maybe not. There's a huge percentage that rely on experiments that aren't repeatable &, for the most part, the current system doesn't care. Scientists, especially at the universities, have to publish. Literally, they're often (usually?) judged on the number of published papers, not on the quality. Worse, retractions often aren't published, so the papers are continually referenced by researchers who believe them to be true. For instance, one paper on vitamin E helping with some forms of cancer has long been debunked, but it's still being used & cited.

Money is a huge factor, especially how it is portioned out. It's a madhouse that encourages positive results at the cost of good science. Worse, there aren't good guidelines for experimentation nor are methods required to be shared. At first blush, not sharing methods doesn't seem like a big deal, but it is. This is cutting edge science that pushes the limits of tools & it's been found that different test tubes or mixing methods can make an experiment unrepeatable.

I never would have thought of cells changing so much due to their environment, but I've read enough on evolution that I should have. Everyone should have, & yet cell cultures often aren't tested. Henrietta Lacks's immortal cells (HeLa) have contaminated many cultures. Her cells are extremely robust & often show up as the sample for colon or breast cancer cells. Apparently contamination is incredibly easy & rampant.

I'm aghast at the sloppy experiments, although it is understandable in some cases. The bit about how the way they mixed solutions (shaken rather than stirred) completely changed test results is a good example. Another is the difference between top & bottom shelf mice. The top shelf mice are exposed to more light & noise, which stresses them more. That in turn can dramatically change how hardy they are. Other practices are less understandable. Not using blind sampling techniques to stop bias or tailoring of results is just wrong, but when money & jobs are on the line, even good scientists can do bad things.

All in all, this was eye-opening & shameful. We can & should do better than this. There are many working to correct the problem, but they're fighting against entrenched systems. My biggest takeaway from this book is that I'll never believe another study until I see at least a couple more that agree with it & even then I'll take it with a grain of salt until it has an established track record. Actually, that's pretty much the way I've always been, but I'm even more wary now.

I highly recommend this, but it might ruin your faith in medicine. It was a little more repetitive than I prefer, but it can get pretty technical at times so that wasn't always a bad thing.
Andy
2,082 reviews · 610 followers
December 14, 2022
This is a book about how the Culture of Corruption and Incompetence in America (see: Detroit: An American Autopsy) is also dominant in biomedical science. The huge investment in medical research yields relatively little in the way of improved health outcomes. The author exposes numerous reasons that this is the case. And the reasons are not about bad apples. They are about entrenched system problems flowing down from the leadership at NIH, academic medicine, scientific journals, etc. For example, there is outright fraud, but there's also undesirable but maybe not dishonest p-hacking, and then there's almost universal misunderstanding of what p-values are for in the first place. The wealth of examples makes the argument convincing. For the examples where I have some prior knowledge, what the author says is true, so I have not done extra fact-checking. I am not sure how clear this would all be for a non-scientist but I felt that it all stayed interesting despite the density of the material.
There is a major hole in the book though. If the underlying concern is to improve health and fight disease, then people should understand that biomedicine is not the whole story. Most of health is determined by non-biological and non-medical factors. Therefore research in public health, prevention, epidemiology and related fields would likely yield much bigger health gains than biomedical research. Yet almost all the spending on "health sciences" is for basic laboratory science and clinical drug research. I think this is fair to bring up because the author frames things in terms of why aren't we making more progress against cancer, etc. And I think most people would rather not get cancer in the first place than have to undergo surgery, radiation, chemotherapy or some experimental new drug.

The Origins of Human Disease by Thomas McKeown
Avery
Author · 6 books · 105 followers
April 4, 2017
(Quick review: Many biomedical researchers are already aware of the content of this book, but for academics in other fields, or people who plan to donate money to biomedical research, I highly recommend it.)

Biomedical science is falling sway to the law of diminishing returns. These are no longer the days when new cures pop up out of nowhere during quick tests. Complex new technologies have opened up millions of new possibilities for discovering agents of disease or possible treatments, while creating countless new opportunities for failure in the process. In the 21st century, it is exponentially harder to find new drugs than it was in the 20th, and increasingly, young researchers around the world are feeling the grind.

Luckily, science has all the tools of the Enlightenment at its disposal to expose mistaken research and weed out bad methods. There are regular conferences, internal reviews, retractions, and impeccable science journalism at the journals Nature and Science especially. The more exuberant apologists for science will tell us that unlike religious prophecy, science gains from failed predictions. So, is there really a problem?

Richard Harris, NPR science reporter, argues in this book that yes, diminishing returns is creating real problems for science. This book reads like a long NPR story, so it would probably make a great audiobook, except that I wouldn't recommend listening to it in the car: some of Harris's findings would probably make you slam on the brakes. Despite the best intentions of hundreds of whistleblowers, and an institutional recognition that things need to change, much of the medical research funded by tax money and grieving parents is… well… a word that Harris refuses to put down in print.

On a structural level, the stakes are very high. A researcher might spend a decade working based on false assumptions, or become widely known in his field for a lauded finding that might not be exactly true. And there is no prize for discovering that a result is false. Researchers may take months or years failing to reproduce a result, with the only reward being the ability to grumble about it at next year’s conference. Scientists are human and there are always little problems at the interpersonal level, but when those problems are well-known to everyone and seem unavoidable, they become part of the structure of science itself.

The result is that there are a number of sacred cows that regularly spawn bad science, but which both academics and publishers have refused to abandon, resulting in the 70-90% rate of inaccuracy among published studies. The inexcusable becomes normal: there’s the overconfidence in mouse studies. There’s the sloppy use of cell lines, which is no longer tolerated in industry studies. There’s the infamous “p = 0.05” standard, which is well known by biostatisticians to be too loose, but which would slow biomedical publication to a near halt if it were abandoned. Researchers are evaluated on quantity of publications, rather than quality, during job interviews. Data sharing in biomedical science lags far behind other fields, due to intense competition for funding. Worst of all, research universities do not offer classes in methodology where problems like these might be discussed. Researchers trying to evaluate recent literature and get new results are forced to learn “on their feet,” either in the laboratory or at conferences.

This is now recognized as a crisis throughout the field. In 2015 a discussion was opened as to how things might be improved. Unfortunately, besides closer attention to detail at the top journals, there is no consensus about what can be done. The hypercompetitive environment that promotes false results and sloppy standards relies on the same psychological drive that causes good researchers to seek out hidden methodological problems. The most frightening question is, in the long run, can these problems actually be fixed? At a structural level, the law of diminishing returns (called Eroom's Law in the book) means that research is going to get more and more expensive — the possibilities may still be exciting, but institutions will have to pour more and more cash into completing any given study, and the desire for positive results is going to grow, not shrink. The import of this is that all of us, doctors and laypeople alike, will need to be more and more skeptical of research findings as time goes on.

Although biomedical researchers may want to insist that this book is a compilation of challenges rather than fatal flaws, and that their research continues to save lives, any academic or journalist with a serious interest in the truth needs to read it, in order to understand how biological research in general operates today. When I, a humanities grad student, explained to my biomedical researcher friend the type of procedure that has been proposed by my colleagues for applying “cognitive science” to artistic behavior, he laughed out loud. When you know how modern science really works, and the vast number of pitfalls that might be hiding between the lines of any individual paper, the uses to which non-experts put your work can sound naive or even absurd. Harris is doing his duty as a journalist to put that variety of scientific intuition down in print.

Related books:
- Unhinged: The Trouble with Psychiatry - A Doctor's Revelations about a Profession in Crisis
- The Trouble with Physics: The Rise of String Theory, the Fall of a Science and What Comes Next
Potassium
804 reviews · 19 followers
March 30, 2017
I read an advanced print copy to prepare for an interview. This book hit a little close to home since I was, until recently, a biomedical research scientist. I thought Harris did a great job spelling out all the problems in the field. His prose is clean and elegant. Biomedical science is hard to write about for a general audience (so complicated) and I think Harris did a great job. My only complaint is that I wish he had gone more into making the case for why we should still fund science.
Brenda
1,516 reviews · 68 followers
April 8, 2017
Here we have a sharply effective look at the science and business behind any and all medication. It encompasses virtually everything: how we've had no real advancements since the 1980s, how far too big a portion of the scientific literature is based on incorrect information, and how there's a general lack of confirmation of the theories presented in peer-reviewed articles.

Seriously though, the thing that absolutely flabbergasted me was the lack of actual advancement in recent years. It's so hard to square that with all the other advancements--we have smartphones and talking AIs responding to stimuli and cars that can park themselves, but you're telling me we have had no real advancements to speak of in the realm of medication?

It's pathetic really, and Harris does a fabulous job of bringing it all to light. He shows the problems with each individual aspect of these medical journals and how we've had no advancement as a result of shoddy experimentation. It was appalling how many of the "confirmed" experiments weren't able to be replicated. Or how many times scientists cause their own downfall by not doing blind testing.

Essentially I saw this as a call to arms--we need to get back to doing real sciences, not just bullshitting our way through experiments to get more funding. This is one of those times that I hate capitalism--if our researchers had the ability to really focus on what they were doing instead of feeling pressured to have positive results, we'd probably already have cured infertility and cancer and Alzheimer's by now.
The Irregular Reader
422 reviews · 47 followers
April 12, 2017
It seems like every other week a new study hits the news: Red wine cures cancer, coffee is terrible for you, taking vitamins is crucial for good health, red wine might actually cause cancer, caffeine in small amount is good for you, vitamins are worthless. With this whirlpool of conflicting information coming rapid-fire into the public sphere, one could certainly forgive the average person if they stopped paying attention, or even started to doubt everything they hear from a scientific source.

In Rigor Mortis, former NPR science journalist Richard F. Harris seeks to illuminate the systemic problems which underlie this phenomenon. Especially in this political environment, such an undertaking is a double-edged sword. It would be too easy for someone to take the basic concept - that there are structural problems within the field of medical research - and leap wildly to the conclusion that science itself is deeply flawed. However, the current situation within the scientific community needs to be addressed. Improvement can only be achieved with honest admissions of fault, greater transparency, and dedication to change. In this regard, Harris’ book does the field more good than harm.

The current crisis has been labeled one of reproducibility. Flawed research, lack of standardized methods, and inadequate analysis, combined with the chaos of working within living systems, make it nigh impossible for one lab to successfully reproduce the results of another. The causes of this are multifaceted: lack of training in laboratory and statistical methods, the dog-eat-dog nature of research funding, and the push by universities to “publish, publish, publish” with more regard for quantity of work than quality. Right now, it pays far better to be first than to be right.

Harris’ book isn’t just a condemnation of the state of the field; he also provides concrete adjustments and changes that can be made to improve the quality of research being done, and shares the stories of those within the field who are working towards those ends. The emphasis here is that we should not throw the baby out with the bathwater. As more and more researchers begin to deal honestly with the flaws of their research and seek solutions, the benefits for medical research, and for doctors and patients, will be profound.

A copy of this book was provided by the publisher in exchange for an honest review.

Merged review:

It seems like every other week a new study hits the news: Red wine cures cancer, coffee is terrible for you, taking vitamins is crucial for good health, red wine might actually cause cancer, caffeine in small amount is good for you, vitamins are worthless. With this whirlpool of conflicting information coming rapid-fire into the public sphere, one could certainly forgive the average person if they stopped paying attention, or even started to doubt everything they hear from a scientific source.

In Rigor Mortis, former NPR science journalist Richard F. Harris seeks to illuminate the systemic problems which underlie this phenomenon. Especially in this political environment, such an undertaking is a double-edged sword. It would be too easy for someone to take the basic concept: that there are structural problems within the field of medical research, and leap wildly to the conclusion that science itself is deeply flawed. However, the current situation within the scientific community needs to be addressed. Improvement can only be achieved with honest admissions of fault, greater transparency, and dedication to change. In this regard, Harris’ book does the field more good than harm.

The current crisis has been labeled one of reproducibility. Flawed research, a lack of standardized methods, and inadequate analysis, combined with the chaos of working within living systems, make it nearly impossible for one lab to reproduce the results of another. The causes are multifaceted: lack of training in laboratory and statistical methods, the dog-eat-dog nature of research funding, and the pressure from universities to “publish, publish, publish,” with more regard for the quantity of work than its quality. Right now, it pays far better to be first than to be right.

Harris’ book isn’t just a condemnation of the state of the field; he provides concrete adjustments and changes that can be made to improve the quality of research being done, and shares the stories of those within the field who are working towards those ends. The emphasis here is that we should not throw the baby out with the bathwater. As more and more researchers begin to deal honestly with the flaws of their research and seek solutions, the benefits for medical research, and for doctors and patients, will be profound.

A copy of this book was provided by the publisher in exchange for an honest review.
Profile Image for Kellie Reynolds.
101 reviews8 followers
July 23, 2017
The author, Richard Harris, provides a terrifying look at the state of scientific literature. I read the scientific literature almost daily as part of my job and professional development. I am trained to look for flaws in data analyses and reporting. Some of the information in the book did not surprise me. However, I was not aware of the extent of the problem. Harris provides numerous examples of specific papers that are incorrect and examples of studies that evaluate the widespread problems in scientific literature. The problems are related to misidentified cell cultures, poor study design, poor data analysis, bias, selective reporting of study results, and sloppiness. Some of the studies still provide useful information, but others lead to dangerous conclusions.

The book provides examples of biomedical study results that could not be reproduced. The studies were published in highly esteemed journals and the results altered drug development, knowledge of basic science, or treatment of patients. Other scientists attempted to reproduce the results, without success. Unfortunately, once a paper is published in a respected journal, the results are deemed gospel. Other scientists act on the results. The author points out that biology is now so complex that it is no longer a descriptive science. Studies of biological systems require knowledge of math and statistics. It is easy to be fooled by intuition, by how one expects biological systems to perform. Rigorous statistics are essential to interpret study results. Harris describes the practice of HARKing: “hypothesizing after the results are known.” This problem may arise when scientists confuse exploratory research with confirmatory research.

Harris spends much of the book describing a major cause of irreproducible and incorrect study results: the culture in which scientists work. “Scientists often face a stark choice: they can do what’s best for medical advancement by adhering to the rigorous standards of science, or they can do what they perceive is necessary to maintain a career in the hypercompetitive environment of academic research.” What do they have to do to maintain a career in that environment? They must publish hundreds of articles in high impact journals. Harris is critical of the traditional measure of a “high impact journal”—the “impact factor.” As he states, it is a measure that was invented for commercial purposes. It is now used as a surrogate to suggest the quality of the research. Papers in journals with a higher impact factor are cited more often and presumed to have more significance. However, the articles may be cited more often because they are flashy and exciting, not because they are the most important. One scientist interviewed for the book said she prefers to publish her work in journals with lower profiles because they publish work that is more detailed and nuanced. Scientists feel pressured to publish in the high impact journals because hiring committees or grant review committees are not interested in scientists who have not published in high impact journals. Numerous scientists quoted in the book make scathing remarks about the use of the impact factor. The games and maneuvers that increase a journal’s impact factor are widely known and used. Still, it remains a measure of a journal’s worth because “it is an easy surrogate.” Scientists are sloppy and abuse their data to quickly get into higher impact journals.

Harris discusses several practices that may help the crisis in scientific research and publication. He describes potential improvements in the way errors are detected and disclosed. Improved education in research methodology and statistics is needed. Harris places much emphasis on the potential for data sharing (open data) to improve scientific rigor. The availability of full data sets that support results would provide accountability and allow other scientists to evaluate the strength of the studies. Various universities and non-profits have initiatives that focus on open data. There is a lot of talk and there is a lot of old baggage.

I was fascinated by this book. Some of the information did not surprise me. The fact that many scientists are fully aware of the problems and go along with the system is disturbing. The movement toward open data is gaining momentum. I am less encouraged by evidence of improved training and movement away from the impact factor. While there is some evidence of improvement, progress is slow.

I will read the scientific literature with a more discerning eye. Everyone who uses scientific literature should read this book.
Profile Image for Noah Goats.
Author 8 books32 followers
August 23, 2018
For years I have rolled my eyes at the constant stream of stories about scientific discoveries in fields like nutrition, medicine, and the social sciences. One nutrition discovery is trumped by the next, supposed breakthroughs in medical science never develop into anything useful, and many revelations about human nature are transparent nonsense from the start. In Rigor Mortis (while focusing on biomedical research) Richard Harris explains why my instinct to eyeroll is often correct.

It turns out that scientists are humans, and because they are humans they make faulty assumptions, use faulty methods, react defensively when told they are wrong, and have a hard time admitting that the traditions of their field are mistaken even when they obviously are. Human nature being what it is, they are sometimes more motivated by a desire to publish a paper that makes a splash than they are to find the truth, and if evidence threatens to torpedo their findings, they look for excuses to ignore it.

Like everyone else, scientists are reluctant to change their ways. And they don't want to question things like the usefulness of cancer research conducted with mice or the effectiveness of the antibodies they use in their studies. Harris reveals the shocking truth that a lot of the so-called “science” that gets written up in journals or reported on CNN is just plain wrong.

I am always annoyed when I see one of those pro science demonstrations on television. I agree with the demonstrators in principle. We should amply fund the sciences and, where applicable, make political and environmental decisions based on good science. What I don’t like is seeing signs that say, “I believe in science,” as if science is some kind of religion in which we are expected to have faith in the same way a Christian throws away skepticism and has faith in Jesus Christ. The beauty of science is that it doesn’t ask for blind belief; it wants us to ask questions.

Eye opening, thoughtful, and with helpful suggestions about how to fix the situation, Rigor Mortis is a good book.
Profile Image for CatReader.
1,038 reviews184 followers
March 17, 2024
As a physician, scientist, and past recipient of NIH funding, I think this book is crucial reading for scientists in general, especially trainees aiming for a biomedical research career (undergrads, grad students, and postdocs). While I am a strong proponent of NIH, NSF, and other discretionary government funding of biomedical research, it's also crucial to realize why so much published research doesn't hold up to scrutiny and time, to inoculate oneself against the common pitfalls that make this happen.

A story from my own grad school days (where the PI was awarded many NIH and NSF grants): my PI was too cheap to buy cell lines (~$200-$400 USD each at that time) from a central repository (the American Type Culture Collection, ATCC) that verifies their identity. Many/most of our cancer cell lines (including HeLa and many others mentioned in this book) were begged or borrowed from other laboratories, and passaged (propagated) practically indefinitely by our cell culture technician because she was too lazy to go back to the liquid nitrogen aliquots from early passages on a regular basis. It wasn't until I definitively demonstrated that the cancer cell line properties (morphology, toxicity profiles against the drugs we were testing them on, etc.) were markedly different between early and late passage cell lines that I was able to convince our technician to stop passaging cells indefinitely and to convince my PI to order new batches from ATCC. Most others in my lab didn't care and certainly didn't go back and repeat their relevant experiments on verified, early passage cell lines.
Profile Image for Meranda Masse.
39 reviews6 followers
February 2, 2018
This book made me question a lot of choices people have made within my field... But here's to hoping that I don't make those same mistakes! I really liked that this book made the issues within the field of biomedical research obvious to the reader. I also liked that one didn't need to have a strong background in biochemistry for most of the book. Some parts were a little hard to follow, but overall it was an eye-opening read.
Profile Image for Anna.
1,531 reviews31 followers
July 20, 2017
A really important book. It is sad and frustrating that there are so many fixable problems with the modern day research world. I hope that the reformers offering glimmers of hope throughout this book are able to gain ground and get things back on track, or we may be stuck for a long while.
Popsugar challenge 2017: a book with career advice
Profile Image for Robert.
228 reviews11 followers
September 10, 2018
If you read Bad Blood and thought Theranos was hopefully a rare case of sloppy medical science, this book has some very bad news for you. It’s also a great complement to two other of my recent reads, The Gene and The Bad Food Bible.
Profile Image for Michael Gibson.
120 reviews1 follower
October 3, 2022
Basically this is a book about how capitalism and fame have degraded the value of scientific papers and studies to a point where many of them are virtually useless in the information they provide.
In an age where quantity is valued far more than quality, it is no wonder that the general public is losing faith in the scientific community and in the information, recommendations, and solutions it comes forward with. This environment makes it easy for vast amounts of misinformation to make its way into the public mind, which we saw so profusely in the race to bring out vaccines for lessening the effects of Covid-19 and its variants over the past several years. Is it any wonder that so many people were hesitant to get the shots? People were even uncertain about the term “vaccine” and what it meant: did it mean you would be immune, be less likely to get the illness, or just not get as sick, thereby easing the burden on hospitals and emergency departments? Nobody seemed to know or understand anymore!

Add to this all the unsubstantiated claims coming from pseudoscience-based “reports” of the healing properties of various holistic “treatments” that have no supporting scientific data whatsoever! While some of these may actually have true healing properties, where are the believable studies that support the claims? Can any of the “science” be trusted??

Why the proliferation of all these types of “reports”?? It all comes down to chasing the almighty dollar. Whether it is universities chasing funding, scientists seeking ways to extend their tenure, or market-savvy entrepreneurs trying to take advantage of the latest trend, money is apparently more important than saving lives.

People will continue to lose faith in scientific reports and the medical community unless efforts and controls are put in place to confirm and validate that findings are true, accurate, and reproducible. Maybe it is time to bring the “science” back into “scientific research”…don’t you think??
205 reviews11 followers
June 22, 2017
Ulp. About the crisis of reproducibility in the medical field, which appears at least as bad as the crisis of reproducibility in behavioral psychology. I learned about “Eroom’s Law,” the opposite of Moore’s law, which holds that drug development has been slowing exponentially since 1950; if the trend holds, we’ll be done in 2040. Lack of rigor in biomedical research is an important culprit.

Even if mice are good models (which they often aren’t) it turns out that cage position can affect the outcome of an experiment, given mice’s distaste for bright lights and open spaces. Harris quotes a scientist: “As you move from the bottom of the rack to the top of the rack, the animals are more anxious, more stressed-out, and more immune suppressed.” Also, “Mice are so afraid of [human] males that it actually induces analgesia,” numbing pain and screwing up studies. So mouse study results can vary hugely from lab to lab. But the bigger problem may be testing in mice at all, or testing only in one strain of animal. If you tested a new drug on white women aged 35 who all lived in one town with identical homes, husbands, diets, thermostats, and grandfathers, “that would instantly be recognized as a terrible experiment, ‘but that’s exactly how we do mouse work.’”

Harris is only moderately optimistic about small-molecule innovations. He quotes a scientist who argues that “evolution has created so many redundant systems that targeting a single pathway in a complex network will rarely work…. ‘We have evolved seventeen different biological mechanisms to avoid starving to death. Drugging one of those mechanisms isn’t going to do anything!’”

Cell experiments are troubling too, even when they’re properly identified. “The very act of propagating cells in the laboratory changes them profoundly,” and atmospheric oxygen in particular is really important because a lot of regulatory factors that affect tumor growth are oxygen regulated. “In fact, cell lines derived from all sorts of cancers end up looking much more like one another than they do the original tumors from which they came… ‘Some people say that HeLa is a new species,’ [a scientist] told me. ‘… The chromosomes are all rearranged… [I]t has made all these changes to adapt’ to the environment where it now makes its home.” Precision medicine can’t be developed until we deal with the fact that even molecules in a living body change when surgeons cut off the blood supply to the tissue they’re going to remove.

Here are a couple of statistical twists I hadn’t thought about, too. If a result just clears the p = 0.05 threshold for significance, then there’s roughly a 50% chance that running the experiment again would give you a lower p-value, and roughly a 50% chance that you’d get a higher one and the result would therefore be deemed insignificant. To have a 95% chance that an experiment run a second time would still be statistically significant, a p-value of 0.005 would be required. This can often be achieved, if the phenomenon at issue is real, by increasing the sample size by 60%--expensive, but Harris argues pretty persuasively that it would be worth the costs. Another point: scientists too often confuse exploratory research with confirmatory research. Statistical tests used to confirm or disconfirm a hypothesis don’t work if you don’t have a hypothesis and are just fishing around for anything interesting or unexpected in the data. “It’s fine to report those findings as unexpected and exciting, but it’s just plain wrong to recast your results as a new hypothesis backed by evidence.”
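
That p = 0.05 coin-flip claim is easy to check with a quick simulation. This is my own minimal sketch, not from the book: it assumes the first experiment's observed effect happens to equal the true effect, and it uses a known-variance z-test for simplicity (the per-group sample size `n` is an arbitrary choice).

```python
import math
import random

random.seed(0)

# Suppose a first experiment landed exactly at the p = 0.05 threshold
# (z ~= 1.96), and suppose its observed effect equals the true effect.
# An exact replication then also has an expected z of 1.96, so it clears
# the significance threshold only about half the time.
n = 50                                    # samples per group (arbitrary)
true_effect = 1.96 * math.sqrt(2.0 / n)   # effect size for which E[z] = 1.96

def replication_is_significant():
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    z = diff / math.sqrt(2.0 / n)         # known-variance z-test
    return z > 1.96                       # one-sided, alpha = 0.05

trials = 20_000
rate = sum(replication_is_significant() for _ in range(trials)) / trials
print(f"replication success rate: {rate:.2f}")   # hovers around 0.50
```

Run it with different seeds and the rate stays near 0.50: a barely significant result is essentially a coin flip on replication.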

All is not lost. A federal law requiring scientists doing drug studies to declare endpoints in advance seems to have had significant effects: of 30 big studies done before the law, 57% showed a benefit. But after, only 8% confirmed the preannounced hypothesis.

Reproducibility is the key. Some responses to the crisis point out that failed attempts to reproduce certain results may be because the original lab did important things differently, but that’s part of the point: “if any tiny detail can derail an experiment, just how robust is the result? Nobody cares about an experiment that … requires conditions so exquisite that only the lab where it originated can repeat it.” Harris advocates (1) blinding (amazingly, not universal); (2) repeating basic experiments; (3) presenting all results rather than cherry-picking; (4) using positive and negative controls—running experiments that should succeed and fail, respectively, if the hypothesis is correct; (5) careful validation of the ingredients (which turns out to be a much bigger problem than I knew; for example, did you know that lots of cell lines labeled otherwise are actually HeLa, which is very good at taking over, and that between 18% and 36% of cell experiments used misidentified cell lines?); and (6) using the right statistical tests.
Profile Image for Jennifer Hu.
24 reviews6 followers
October 25, 2017
Despite the sensational title (I do appreciate the pun though), this is a calmly researched and explained story of biomedical research. Very accessible to non-scientists, but I dog-eared some references in the primary literature I want to follow up on.

One of my favorite people in my lab is the staff scientist who's been in biology for decades. It's so enlightening to talk to someone who's been paying attention long enough to connect the dots. As a grad student, it's hard for me to know whether the paper I'm reading is part of a larger story, or whether it's been overturned by other work. Richard Harris' long career as a science journalist has clearly equipped him with plenty of interesting cautionary tales of sensational findings that never panned out. His big-picture perspective is quite informative and also reflects his understanding of the culture of academic research. I particularly liked his observation that while competition for grants is good, hyper-competition (the current state of science funding in the US) creates perverse incentives to cut corners and ultimately damages the credibility and usefulness of science.

I would recommend this book to anyone working in science or statistics, as well as people wondering why we haven't cured [disease] despite the endless news stories and press releases claiming we're only a few years away.
Profile Image for Adrian Bubie.
80 reviews8 followers
October 16, 2020
Harris ultimately tackles the same issue and says the same thing in 10 different ways: scientists are sloppy because they're pushed too hard to publish, aren't careful, or just don't care. While it's an important and rarely talked about point (as he mentions multiple times) that bad experimental design and execution leads to bad data, which leads to all kinds of waste (money, time, hope), I feel this would have been better accomplished in a series of articles. Overall the book was just a little too long. Harris' writing is quite good, though, so if you are a fan of his from his NPR days, I would say still give this a shot.
Profile Image for Talla Bavand.
23 reviews
May 29, 2017
Informative and really interesting look at the research science field and its flaws. An honest look at the incentives for funding and the lack of thorough checking of experiments, results, papers, etc. Not boring for a book about the ins and outs of this field. I will say it perhaps shows a bit of bias in its opinion of the research science field, which is ironic given that bias is one of the key flaws this book identifies, haha!
Profile Image for Sarah Clement.
Author 3 books118 followers
January 26, 2020
This book does an excellent job of outlining how science, though still the best we've got for understanding and studying the world, can easily lose its way when the incentives aren't right. As an academic, I thought Harris did a good job of explaining the pressures scientists face, and the scientists he interviewed present a fair and accurate picture of what happens, particularly in the biological sciences. The biological sciences produce some of the highest numbers of PhD grads, and as such, the competition here is intense. I also like that Harris went beyond the typical discussions about p hacking and replicability to discuss some of the less frequently highlighted issues in science. I can see how the book might be a bit confusing if you don't know the cell lines, as some reviewers have commented, but I am not a biologist and I thought Harris explained them well. Ultimately, the reason I give this 4 stars instead of 5 is just that I would have liked more, either more about how this problem exists in other disciplines or more that we could do about it. But that is still a testament to how good the book is.
Profile Image for Mitch Vanyo.
34 reviews
May 6, 2022
This book ended up in my hands through the U of M's "blind date with a book" program, and I was glad it did (though I'm not expecting a second date any time soon). As an outsider to the biomedical field but a proponent of scientific thinking, I was shocked to hear about some of the shortcuts and "sloppiness" that are present in this area of research, as well as the systems and incentives that keep it all going. The writing style was fast and accessible, but it seemed to come at the cost of a more detailed discussion, which I would have appreciated, although that may have been due to my own reading pace.
Profile Image for Siobhan Blackwell.
19 reviews
January 22, 2020
We are introduced to the topic somewhat slowly, but presumably at a good pace for those with limited knowledge of research or academic processes. It picks up after a chapter or two and manages to convey the complexities of academic politics succinctly. At points it is too journalistic in style, but that likewise makes it an easy read, which isn't necessarily a disadvantage.
Profile Image for Max Nova.
421 reviews246 followers
May 14, 2017
"Rigor Mortis" explores how perverse incentives and a "broken" scientific culture are fueling the reproducibility crisis in modern biomedical research. Published in April, Harris's jeremiad was a perfect fit for my 2017 reading theme on "The Integrity of Western Science" and an excellent companion to Goldacre's "Bad Science".

Harris argues that competition for funding, politicized peer-review, and pressure to be the first to publish have enabled sloppiness and dubious practices like "p-hacking" to flourish. He investigates how these incentives have created a lack of transparency in research and how they have damaged the self-correcting nature of scientific research. As one of his interviewees points out, "it’s unfortunately in nobody’s interest to call attention to errors or misconduct".

Diving into the technical details, Harris points out how cell line contamination, non-transferability of animal models to human systems, and statistical blundering (the batch effect!) often render vast swathes of biomedical research unreliable. He also explains how much bad science is driven by conflating "exploratory" and "confirmatory" research. You can't run an "exploratory" experiment, mine the results for correlations, and then publish it as a "confirmatory" result. Or rather, you can - and apparently it is done all the time - but it is bad science. This was a "big idea" for me and is one that I suspect the general public (and even most scientists) are not aware of.
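
The exploratory-versus-confirmatory trap is easy to demonstrate with a toy screen. This is my own illustration, not from the book: generate a purely random "outcome" for 50 hypothetical patients, correlate 1,000 pure-noise "biomarkers" against it, and roughly 5% will clear p < 0.05 by chance alone; those are exactly the "findings" that get recast as confirmed hypotheses.

```python
import math
import random

random.seed(1)

n = 50      # hypothetical patients
m = 1000    # candidate "biomarkers", all pure noise

outcome = [random.gauss(0, 1) for _ in range(n)]

def pearson_r(x, y):
    # Plain Pearson correlation coefficient, no libraries needed.
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

hits = 0
for _ in range(m):
    marker = [random.gauss(0, 1) for _ in range(n)]
    r = pearson_r(marker, outcome)
    z = math.atanh(r) * math.sqrt(n - 3)   # Fisher transform: ~N(0,1) under the null
    if abs(z) > 1.96:                      # "significant" at p < 0.05, two-sided
        hits += 1

print(hits)   # roughly 5% of 1000, i.e. around 50 spurious "discoveries"
```

Mining those ~50 hits out of a dataset is fine as exploration; presenting any one of them as a confirmed result is the bad science Harris describes.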

One of the major questions I'm trying to answer in my year of reading about "The Integrity of Western Science" is "How much confidence do we need to have in the 'science' before we start making policy decisions based on it?" As another one of Harris's interviewees notes, "it takes years for a field to self-correct" - this complicates the situation even further. How long do we need to wait before we can have a high level of confidence that a particular finding is real? Harris offers no answers here - he just hammers home how difficult of a question this actually is.

As he discusses the inability to impose standards of rigor and transparency on research, Harris briefly touches on how most biomedical science is funded (The National Institutes of Health) and includes the odd and unexplained quote, “The NIH is terrified of offending institutions.” This strikes me as completely absurd. The NIH is funding the research. Don't they know the golden rule? "He who has the gold, rules." Why shouldn't the NIH mandate some easy wins like an experimental registry and a best-practices checklist for each publication? The institutions (read: universities) can either play ball or... find a more lenient funding source. Would love to read more about this.

As far as concrete ideas for cleaning up science, Harris doesn't offer many new ideas. There were three that caught my eye though. The first is that researchers should report their level of confidence in the results of every study they publish. The second is the idea of a one-time scientific "Jubilee" where researchers can retract any of their old research with no penalty. It's tough to see how these first two ideas could work culturally, but they're intriguing nonetheless. The final idea probably has the best chance of working because it has worked before. Harris discusses the 1975 Asilomar conference where leading genetics scientists developed guidelines to ensure the safety of recombinant DNA research. This seems to be an instance of relatively successful self-regulation of the scientific community, and it's nice to think there might be some hope of the community internally cleaning up its act. I'm not holding my breath though.

One final side-effect of reading this book was being introduced to the institutions leading the effort to clean up science:

* Center for Open Science - https://cos.io/
* Science Exchange - The Reproducibility Project: Cancer Biology -
http://validation.scienceexchange.com...
* Retraction Watch - http://retractionwatch.com/
* Meta-Research Innovation Center at Stanford - https://metrics.stanford.edu/

Overall, Harris's book is a great summary of current issues in biomedical research and it comes at a time when the general public is finally becoming aware of some of these problems.

Full review and highlights at http://books.max-nova.com/rigor-mortis/
Profile Image for Trish.
439 reviews24 followers
November 16, 2017
"Unconscious bias among scientists arises every step of the way." (p. 15)

This reminded me of the 11.10.17 presentation Dr. Ryan did about the scientific method, and the fact that there are "opportunities for creativity" at every step of the process. Just as we can ask questions about the ingenuity our scientists exhibited at each step in the process, we can also ask questions about how they employed rigor at each step of the process.

"Failure and ignorance propel science forward." (p. 39)

"Funding was stretched so thin that scientists said they didn't get as much as they needed to do their studies. So they made difficult choices." (p. 57) "There aren't many sources of funding for rigor." (p. 64)

In Chapter 3, Harris outlines how individual donors and patient advocacy groups are playing a larger role in determining what research gets done (and by whom and how). This seems to indicate that communication of research and its potential value (particularly tied to giving opportunities and outright "asks") will continue to grow in importance...and that the funding system will be vulnerable to hype.

"It often starts out in all innocence, when scientists confuse exploratory research with confirmatory research. ... It's fine to report those findings as unexpected and exciting, but it's just plain wrong to recast your results as a new hypothesis backed by evidence. ... Yet press releases, news coverage, and the scientists themselves often muddy this important distinction." (p. 140-141)

A significant gating question we could ask: "Is this study exploratory or confirmatory?"

"'If we created more of a fault-free system for admitting mistakes it would change the world,' said Sean Morrison, a Howard Hughes Medical Institute investigator at the University of Texas Southwestern Medical Center." (p. 183)
Read Kathryn Schulz's "Being Wrong" for further exploration of this concept.

Cultural issues--including reduced state funding for universities, too many scientists competing for too few dollars and too many post-docs competing for too few faculty positions, as well as the drive to publish quantity rather than quality-- combine to drive "the natural selection of bad science." (p. 188-189)

"Labor economist Paula Stephan at Georgia State University likens it to a shopping mall: The university owns the building and charges rent; the scientists have become the tenants, spending their grant money on rent as well as research assistants and materials." (p. 190)

"Psychiatrist Christiaan Vinkers and his colleagues at the University Medical Center in Utrecht, Holland, have documented a sharp rise in hype in medical journals. They found a dramatic increase in the use of 'positive words' in the opening section of papers, 'particularly the words robust, novel, innovative, and unprecedented, which increased in relative frequency up to 15,000% between 1974 and 2014.'" (p. 190-191)

"Everyone has a hype detector." -- Diana

"His medical school judges the scientists there on the public impact of their work and cares less about curiosity-driven studies." (p. 226)
This validates our push to prioritize impact over journal impact factor. This may spark changes in what we cover and how -- should we cover a newly published study, or should we cover how work that was published 10 years ago is now being used or how it has been built upon?

Diana asks -- what are the limitations of this study? What else has been done?
Profile Image for Jenine Kinne.
45 reviews
December 12, 2018
Clever title and even better book. A journey through one of the more pressing issues facing science and the reasons behind it...the death of rigor, indeed.