
The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice

Why psychology is in peril as a scientific discipline--and how to save it



Psychological science has made extraordinary discoveries about the human mind, but can we trust everything its practitioners are telling us? In recent years, it has become increasingly apparent that a lot of research in psychology is based on weak evidence, questionable practices, and sometimes even fraud. The Seven Deadly Sins of Psychology diagnoses the ills besetting the discipline today and proposes sensible, practical solutions to ensure that it remains a legitimate and reliable science in the years ahead. In this unflinchingly candid manifesto, Chris Chambers shows how practitioners are vulnerable to powerful biases that undercut the scientific method, how they routinely torture data until it produces outcomes that can be published in prestigious journals, and how studies are much less reliable than advertised. Left unchecked, these and other problems threaten the very future of psychology as a science--but help is here.

296 pages, Paperback

Published July 16, 2019


About the author

Chris Chambers

1 book · 3 followers

Ratings & Reviews


Community Reviews

5 stars
141 (52%)
4 stars
103 (38%)
3 stars
22 (8%)
2 stars
4 (1%)
1 star
0 (0%)
Steven R. Kraaijeveld
561 reviews · 1,923 followers
March 3, 2018
"If we continue as we are then psychology will diminish as a reputable science and could very well disappear. If we ignore the warning signs now, then in a hundred years or less, psychology may be regarded as one in a long line of quaint scholarly indulgences, much as we now regard alchemy or phrenology. … Of course, few sciences are likely to withstand the judgment of history, but it is by our research practices rather than our discoveries that psychology will be judged most harshly. And that judgment will be this: like so many other 'soft' sciences, we found ourselves trapped within a culture where the appearance of science was seen as an appropriate replacement for the practice of science." (ix)
The discipline of psychology and the practice of psychological experimentation have had—and are still facing—some serious problems. Chambers reveals them with great clarity and offers detailed recommendations for improvement to boot. The seven sins are as follows:

1) The Sin of Bias (e.g., failure to replicate findings; publication bias towards positive and novel results)
2) The Sin of Hidden Flexibility (e.g., manipulating data through p-hacking or HARKing)
3) The Sin of Unreliability (e.g., disregard for the replication of findings; studies widely lacking statistical power)
4) The Sin of Data Hoarding (e.g., refusal to share data or make them openly accessible)
5) The Sin of Corruptibility (e.g., fraud and manipulation of data)
6) The Sin of Internment (e.g., journals kept exclusive and hidden from the public behind high subscription costs)
7) The Sin of Bean Counting (e.g., using meaningless metrics for the impact of journals; using grant numbers as a metric for researchers' status)

Pre-registration of studies, open access as the norm in publishing, a focus on the quality of research rather than on metrics like impact factors, adoption of Bayesian hypothesis testing, and the public storage and review of raw data are some of the most important recommendations.
Emil O. W. Kirkegaard
188 reviews · 401 followers
January 6, 2019
This is a pretty good and brief introduction to the recent history of the replication crisis and why it arose. It goes hard into the Registered Reports solution, which I don't think is that workable (too strict), but that's alright. Definitely recommended for curious lay people. The math is not difficult.

He discusses the bias towards positive and novel results a lot, which is great, but there is no mention of the rather extreme political bias. This is arguably more important, because the positive and novelty biases do not distort the field in any particular direction, but political bias does. This matters greatly because psychology serves as a basis for policy, as it does.
Emma Veitch
26 reviews
May 29, 2017
This is a bit of a speed review as I've not got the time to do it the justice it deserves. In my opinion... a really important book and very well written. Clear structure and argument throughout, and lots of referencing and examples to back up the main assertions. The themes Chambers draws on are familiar to me from other fields of science (particularly, e.g., clinical trials and epidemiology), relating to the ways in which the culture of doing science undermines its integrity, particularly in relation to analytical and reporting "sins": p-hacking, analytical flexibility, bias, data hoarding, etc. There were parts that were stronger and parts weaker; I felt the chapter on open access was not as compelling as the others, which is slightly ironic given I've worked in open access publishing for over a decade! It just didn't seem to lend the thesis the same degree of strength as the other areas.
I did have some picky criticisms: parts of the stats sections are just too technical and go over the head of (this) naive reader, the part on "how to do fraud" is a bit silly (do we really want to encourage anyone to do these things?), and the "remedies" section at the end was too long, I thought. But overall it was a great read and, I felt, a unique contribution, because the phenomena Chambers describes have mostly acquired prominence in other fields (e.g., through the work of John Ioannidis, An-Wen Chan, and those who have encouraged clinical trial registration and results reporting), and it is nice to see these principles applied to psychology (where there is potentially an even greater problem).
As a small aside, I would have liked to see a small section tying these things together - acknowledging that the same themes apply across scientific disciplines and that many of the solutions will be common to others. Actually it might be interesting to see a comparison of the extent of these problems across different disciplines.
81 reviews · 19 followers
September 2, 2020
Discusses and analyzes various problems afflicting academic psychological research, including fraud, extremely questionable research practices, and an overemphasis on cute or surprising findings. At times, the reader may conclude that the discipline is both intellectually and morally bankrupt. The author, himself a neuroscientist and journal editor, is more optimistic and makes several specific proposals to improve matters. Several of these are currently being deployed.

For a book on such an academic topic, the prose is quite engaging and mostly free of jargon. The text has ample endnotes, providing the interested reader with plenty of entries to the more technical literature.

Recommended to advanced undergraduates and graduate students in psychology and other social and behavioral sciences, as well as to anyone interested in the conduct, and misconduct, of psychological science in academia.
A Crawley
47 reviews · 3 followers
July 22, 2022
A MUST read for psychologists and researchers in social science. It should be taught in university courses and to every postgraduate in psychology. A book that is transforming science globally, forever. We need more people like Chris, sacrificing their emotional needs and comfort to challenge the status quo, facing insults and mistreatment. He nails it. He will be remembered in history as a game changer.
1 review
June 22, 2017
The "Seven Sins" is concerned with the validity of psychological research. Can we at all, or to what degree, be certain about the conclusions reached in psychological research? Recently, replication efforts have cast doubt on our confidence in psychological research (1). In a similar vein, a recent paper states that in many research areas, researchers mostly report "successes", in the sense that they report that their studies confirm their hypotheses, with psychology leading in the proportion of supported hypotheses (2). Too good to be true? In light of all this unease, Chambers' book addresses some of the (possible) roots of the problem of the (un)reliability of psychological science. Specifically, Chambers names seven "sins" that the psychological research community appears to be guilty of: confirmation bias, data tuning ("hidden flexibility"), disregard of direct replications (and related problems), failure to share data ("data hoarding"), fraud, lack of open access publishing, and fixation on impact factors.

Chambers is not alone in speaking out about some dirty little (or not so little) secrets and tricks of the trade. The discomfort with the status quo is gaining momentum (3, 4, 5, 6); see also the work of psychologists such as J. Wicherts, F. Schönbrodt, D. Bishop, J. Simmons, S. Schwarzkopf, R. Morey, or B. Nosek, to name just a few. For example, the German psychological association (DGPs) recently opened up (more) towards open data (7). However, a substantial number of prominent psychologists oppose this more open approach towards higher validity and legitimacy (8). Thus, Chambers' book hits a nerve with many psychologists. True, a lot is at stake (9, 10, 11), and a train wreck may be unfolding. Chambers' book knits together the most important aspects of the replicability (or reproducibility) debate; the first "umbrella book" on the topic, as far as I know. Personally, I feel that one point would merit more scrutiny: the unchallenged assumption that psychological constructs are metric (12, 13, 14). Measurement is the very bedrock of any empirical science. Without precise measurement, it appears unlikely that any theory will advance. Still, psychologists turn a deaf ear to this issue, sadly. Just assuming that my sum score possesses a metric level of measurement is not enough (15).

The book is well written, pleasurable to read, suitable for a number of couch evenings (as in my case). Although methodologically sound, as far as I can say, no special statistical knowledge is needed to follow and benefit from the whole exposition.

The last chapter is devoted to solutions ("remedies"); arguably, this is the most important chapter in the book. Again, Chambers succeeds in pulling together the most important trends, concrete ideas, and more general, far-reaching avenues. The most important measures, to him, are a) preregistration of studies, b) judging journals by their replication rate and strengthening the whole replication effort as such, c) open science in general (see the Openness Initiative and the TOP guidelines), and d) novel ways of conceiving the job of journals. Well, maybe he does not focus so much on the last part, but I find that last point quite sensible. One could argue that publishers such as Elsevier have managed to suck way too much money out of the system, money that is ultimately paid by the taxpayers and by the research community. Basically, scientific journals do two things: host manuscripts and steer peer review. Remember that journals do not do the peer review; it is provided for free by researchers. As hosting is very cheap nowadays, and peer review is provided with little input from the publishers, why not come up with new, more cost-efficient, and more reliable ways of publishing? One may think that money is not of primary concern for science; truth is. However, science, like most societal endeavors, is based entirely on the trust and confidence of the wider public. Wasting that trust destroys the funding base. Hence, science cannot afford to waste money, not at all. Among the ideas for updating publishing and journal infrastructure is the use of open archives such as arXiv or osf.io as repositories for manuscripts. Peer review can then be conducted on these non-paywalled manuscripts (a type of post-publication peer review), for instance organized by universities (5). "Overlay journals" may pick and choose papers from these repositories, organize peer review, and make sure their peer review and the resulting paper are properly indexed (Google Scholar etc.).

To sum up, the book taps into what is perhaps the most pressing concern in psychological research right now. It succeeds in pulling together the threads that together make up the fabric of the unease in the zeitgeist of contemporary academic psychology. I feel that a lot is at stake. If we as a community fail to secure the legitimacy of academic psychology, the discipline may end up in a way similar to phrenology: once hyped, but then seen by some as pseudoscience, a view that gained popularity and is now commonplace. Let's work together for a reliable science. Chambers' book contributes in that regard.

1 Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. http://doi.org/10.1126/science.aac4716

2 Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. http://doi.org/10.1007/s11192-011-0494-7

3 Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. http://doi.org/10.1371/journal.pone.0...

4 Nuzzo, R. (2015). How scientists fool themselves – and how they can stop. Nature, 526(7572), 182–185. http://doi.org/10.1038/526182a

5 Brembs, B., Button, K., & Munafò, M. (2013). Deep impact: unintended consequences of journal rank. Frontiers in Human Neuroscience, 7. http://doi.org/10.3389/fnhum.2013.00291

6 Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., … Zwaan, R. A. (2016). The Peer Reviewers’ Openness Initiative: incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547. http://doi.org/10.1098/rsos.150547

7 Schönbrodt, F., Gollwitzer, M., & Abele-Brehm, A. (2017). Der Umgang mit Forschungsdaten im Fach Psychologie: Konkretisierung der DFG-Leitlinien. Psychologische Rundschau, 68(1), 20–25. http://doi.org/10.1026/0033-3042/a000341

8 Longo, D. L., & Drazen, J. M. (2016). Data Sharing. New England Journal of Medicine, 374(3), 276–277. http://doi.org/10.1056/NEJMe1516564

9 LeBel, E. P. (2017). Even With Nuance, Social Psychology Faces its Most Major Crisis in History. Retrieved from https://proveyourselfwrong.wordpress.....

10 Motyl, M., Demos, A. P., Carsel, T. S., Hanson, B. E., Melton, Z. J., Mueller, A. B., … Wong, K. M. (2017). The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse? Journal of Personality and Social Psychology, 113(1), 34.

11 Ledgerwood, A. (n.d.). Everything is F*cking Nuanced: The Syllabus (Blog Post). Retrieved from http://incurablynuanced.blogspot.de/2...

12 Michell, J. (2005). The logic of measurement: A realist overview. Measurement, 38(4), 285–294. http://doi.org/10.1016/j.measurement....

13 Michell, J. (1997). Quantitative science and the definition of measurement in psychology. British Journal of Psychology, 88(3), 355–383.

14 Heene, M. (2013). Additive conjoint measurement and the resistance toward falsifiability in psychology. Frontiers in Psychology, 4.

15 Sauer, S. (2016). Why metric scale level cannot be taken for granted (Blog Post). http://doi.org/http://doi.org/10.5281...
Jurij Fedorov
588 reviews · 84 followers
April 14, 2020
This is a must-read for anyone doing science or studying at university. For social science students it is the ultimate must-read in the first year of their education.

Pro:

Man, where has this book been? When I studied for my master's degree in psychology, the texts were never like this. They never really taught us about bad psychology results or about liars and cheaters in science. We were taught that social science is basically like any other science and that truth wins out in the end, as scientists are doing their best to find out what works and what doesn't. Maybe truth will win out, but it's a long road there, as prestige comes from great positive results, not some factual truth.

This is a must-read for anyone interested in social science. But it's only about cheaters trying to get ahead; there are also other kinds of prestige to gain. For example, we know that effects like raising IQ and multiple intelligences are not found in any study. But if you could disprove general intelligence, it would bring you a lot of prestige, as it's one of those enemies of liberal thinking. Especially psychologists and educators who dream about influencing people really love to read about the manipulation of people.

Con:

This is mainly written for scientists. It's fine for people like me who study things without conducting statistical experiments. But in the end it's written for people who love data, stats, math, and doing advanced studies while also helping to run academic journals. Chapter 8 is entirely focused on running an academic journal, which for me just is not engaging. I want to read about what happened, not about stats related to editing a paper or advice on how one could think about these things. I already know how to think; that's my whole thing. I just need info to think about.

Conclusion:

It's a great book that I will keep recommending to anyone in social science for years to come, until it's replaced by a better or more engaging book. But things like chapter 8 and the various complicated stats tables also make some of it too complicated for lay readers. I can imagine seeing yet another black-and-white chart and just getting frustrated. It's not colorful or pretty, and among the vivid human examples, it can be dry. Chapter 8 reveals that this is a book of two minds: it never really settles on one single level of explanation, and it does feel preachy at times.

It basically rests on five in-depth anecdotes from social science. If it had about five more such anecdotes instead of the dry suggestion stuff, it would be more focused and much better for it. As it stands, it ultimately feels a bit unconvincing, simply because if you remove two anecdotes from the book, the whole argument falls apart, as it will rest on basically nothing. So there is some selective data gathering going on, and it feels like there really are no more good examples to give. No one in China or South Korea cheated in social science? No one in the USA? That's just not true. There are more examples to give, and they really ought to replace all the basic and long-winded suggestions in the book.
23 reviews · 1 follower
April 19, 2020
The future of psychological science is largely determined by how we manage this renaissance in psychological research. Chambers makes this shift easy to comprehend whilst simultaneously avoiding oversimplifying the issues at hand. A highly recommended read for all psychology students and currently practicing clinicians and researchers alike 🙌🏼
Matt Berkowitz
92 reviews · 63 followers
March 18, 2024
At only 7 years old, this book is still highly relevant in showcasing the inherent problems of psychological science, which are rife across other areas of science as well. To me, none of these issues were new, so the book really caters more to the educated reader who has not read much about these problems.

Chambers starts by discussing the entrenched confirmation bias in how psychology journals publish articles, namely a strong preference for novel and positive findings, while eschewing negative findings and replication efforts (whether successful or not). This biases the literature heavily towards false positives, which can arise in numerous ways: through fluke/chance events due to under-powered studies, through HARKing (hypothesizing after the results are known), and through the failure to publish negative findings (an instance of the file drawer effect, aka publication bias). These problems are now well known among practitioners, but there is still a grossly insufficient effort to combat them on the whole. One distinction that Chambers helpfully makes is that between methodological replication and conceptual replication: whereas the former attempts to replicate a prior experiment using methods as close to the original as possible, the latter attempts to replicate the findings using different methods, thereby assuming the original findings are real discoveries.

Chapter 2 is all about p-hacking: the use of unsound research practices to inflate the chances of getting a statistically significant finding. Chambers runs down the myriad ways in which p-hacking can occur, often unintentionally, through exploiting the many "researcher degrees of freedom" available when running a study. This phenomenon is observable in meta-science papers that show clusters of p-values just below the typical alpha cut-off of 0.05 (the statistical significance threshold). This clustering has apparently also increased between 1965 and 2005 (Leggett et al., 2013). Chambers concludes the chapter by very quickly touching on methods to combat these many questionable research practices: study pre-registration to combat publication bias, p-curves to combat p-hacking, disclosure statements to limit researcher degrees of freedom, data sharing to improve transparency, and others. He goes much more deeply into these solutions in the final chapter.
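As an aside, the optional-stopping tactic described above (peek at the data, add participants until p < 0.05) is easy to demonstrate with a small simulation. This is my own illustrative sketch, not code from the book; the sample sizes and step size are arbitrary choices:

```python
import math
import random

def two_sample_p(a, b):
    """Two-sided p-value for a two-sample z-test (known unit variance)."""
    na, nb = len(a), len(b)
    z = abs(sum(a) / na - sum(b) / nb) / math.sqrt(1 / na + 1 / nb)
    return math.erfc(z / math.sqrt(2))

def run_study(optional_stopping, n_start=20, n_max=100, step=10):
    """One null study: both groups drawn from N(0, 1), so any hit is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n_start)]
    b = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if two_sample_p(a, b) < 0.05:
            return True                      # declared "significant"
        if not optional_stopping or len(a) >= n_max:
            return False
        a += [random.gauss(0, 1) for _ in range(step)]  # peek, then collect more
        b += [random.gauss(0, 1) for _ in range(step)]

random.seed(1)
trials = 4000
honest = sum(run_study(False) for _ in range(trials)) / trials
hacked = sum(run_study(True) for _ in range(trials)) / trials
print(f"false-positive rate, fixed n:           {honest:.1%}")  # near the nominal 5%
print(f"false-positive rate, optional stopping: {hacked:.1%}")  # well above 5%
```

Even though there is no true effect, repeatedly testing as the sample grows roughly triples the false-positive rate, which is the core of the "hidden flexibility" sin.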

Chapter 3 is about unreliability, which seemed like an extension of the previous two chapters. Chambers demonstrates the hostility that replication efforts face in many parts of the culture of academic psychology, e.g., when 3 different attempts with larger samples failed to replicate John Bargh's research on priming, Bargh doubled down, accusing the scientists of being incompetent or ill-informed. The sheer lack of replication, the hostility towards it, and the lack of incentives to attempt replications do not bode well for the reliability of psychological findings. Chambers then goes deeper into how underpowered studies, lack of disclosure about methods, and statistical misunderstandings all likewise undermine reliability. As an example, Chambers cites a study (Nieuwenhuis et al., 2011) finding that, in top neuroscience journals, the prevalence of a particular statistical misunderstanding—assuming that a statistically significant and a statistically non-significant effect are themselves significantly different—was 50%!
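To make that misunderstanding concrete, here is a worked example of my own (the numbers are invented for illustration): an effect is "significant" in group A and "not significant" in group B, yet the difference between the two effects is nowhere near significant, so concluding the groups differ is unwarranted.

```python
import math

def p_two_sided(est, se):
    """Two-sided p-value for an estimate with standard error se (normal approximation)."""
    z = abs(est) / se
    return math.erfc(z / math.sqrt(2))

# Hypothetical effect estimates, same standard error in both groups
effect_a, effect_b, se = 1.0, 0.5, 0.5

p_a = p_two_sided(effect_a, se)
p_b = p_two_sided(effect_b, se)
# The correct question: is the DIFFERENCE between the effects significant?
p_diff = p_two_sided(effect_a - effect_b, math.sqrt(se**2 + se**2))

print(f"group A: p = {p_a:.3f}")     # 0.046 -> "significant"
print(f"group B: p = {p_b:.3f}")     # 0.317 -> "not significant"
print(f"A vs B:  p = {p_diff:.3f}")  # 0.480 -> the difference is NOT significant
```

The error Nieuwenhuis and colleagues found is stopping after the first two lines and never computing the third.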

Chapter 4 is about data sharing—or the lack of it. Though practices have improved, cultural and institutional norms are still clearly not ideal. Most journals still do not enforce data sharing/code sharing as a condition of publication, though some do. It's pretty easy to see why failure to share such information harms replicability and reliability. Yet, many researchers still eschew this ethical responsibility. Towards the end, Chambers produces a list of tongue-in-cheek responses that surely accurately characterize many researchers' attitudes, e.g., "I'm not sharing my data because I can't be bothered to organize the files in a way that makes sense to anyone else—in fact, they probably won't even make sense to me when I look back at them in six months' time." (p. 102)

Chapter 5 dives deep into outright fraud: what distinguishes fraud from mere scientific misconduct, and misconduct from poor research practices. We're told the now-infamous story of Dutch social psychologist Diederik Stapel, who had over 50 papers retracted after he was discovered to have routinely falsified data, something he has since admitted and apologized for. Chambers insinuates that the rate of true fraud/misconduct surely exceeds that which has been discovered. This discussion feeds into chapter 6's discussion of open-access journals: journals that charge authors rather than readers, making all content available to anyone for free. While this solves some problems, it has contributed to others, such as the litany of predatory open-access journals.

Chapter 7 looks at using scientific principles to assess science itself (meta-science). Chambers looks at the main metrics used to quantify quality, namely the journal impact factor (JIF). As Chambers demonstrates, these metrics have become a distorted signal of quality work, invoking Goodhart's Law that "when a measure becomes a target, it ceases to be a good measure." As supporting evidence, Chambers notes how the JIF is a measure of the average citations across all articles in a journal, but that the distribution of citations is highly skewed, with most articles getting no citations and a small percentage getting a lot. Moreover, journals can apparently lobby to inflate the JIF by getting certain articles disqualified from counting in the denominator. Chambers also cites evidence (Brembs et al., 2013) that JIF is unrelated to statistical power, an indicator of study quality (though one could argue that it ought to be a primary indicator of quality). He then concludes that "Despite all the evidence that JIF is more-or-less worthless, the psychological community has become ensnared in a groupthink that lends it value." This is probably better framed as a collective action problem (which Chambers does allude to later): since everyone regards everyone else as considering JIF an important marker of reputability, it would cost an individual scientist to ignore it by publishing in journals with lower (or no) JIFs.
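The skew point is worth seeing with numbers. In this toy example of my own (the citation counts are invented, not real journal data), a couple of blockbuster papers drag the mean far above what a typical article in the journal actually receives:

```python
import statistics

# Hypothetical citation counts for one journal's 20 articles in a year:
# most gather few or no citations, two are blockbusters.
citations = [0, 0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 4, 5, 6, 8, 10, 55, 120]

jif_like = statistics.mean(citations)    # what an impact factor reflects
typical = statistics.median(citations)   # what a typical article receives

print(f"mean (JIF-like):  {jif_like:.2f}")  # 11.15
print(f"median (typical): {typical:.1f}")   # 2.0
```

A JIF-style mean of about 11 advertises a journal where half the articles earned 2 citations or fewer, which is exactly why the average is a poor proxy for the quality of any individual paper.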

Chambers goes on to further rebuke using grant awards and authorship rank as meaningful indicators of scientific quality. The overall picture is pretty clear: psychology—and many other fields of science, no doubt—use misleading indicators to propel scientific publication, incentivizing the wrong kinds of behaviours and disincentivizing careful, high-quality output.

The final chapter goes into solutions. Preregistering study protocols—via Registered Reports—can be expected (and has been shown) to reduce publication bias for hypothesis-driven science, yet it has often been met with avoidance and even active pushback. If null results populate some journals more than others, won't that make them less cited and thus less competitive in the journal market? One difficulty in transforming publishing norms is that it often requires buy-in from most, or at least a large chunk, of journals at once, so that none expects a first-mover disadvantage. It is hard to honestly argue against this solution, and Chambers rebuts many common objections to pre-registration.

Overall, this was a thorough, comprehensive, and meticulous look at the many problems with scientific publishing in psychology with the ultimate aim of improving its rigor and reliability. As a statistician, I was very satisfied with Chambers’ treatment of the many statistical errors that plague the field and his lucid explanations of how to improve statistical practices. Very much recommended.
Raffaello Palandri
Author of 11 books · 13 followers
August 8, 2023
Book of the Day – The 7 deadly sins of Psychology
Today’s Book of the Day is THE 7 DEADLY SINS OF PSYCHOLOGY, written by Chris Chambers in 2017 and published by Princeton University Press.

Chris Chambers is a professor of cognitive neuroscience in the School of Psychology at Cardiff University and a contributor to the Guardian science blog network. His research focuses on how the human brain supports cognition and he is also interested in the representation of science in the media.

I have chosen this book because I find this an extremely accurate and thought-provoking book about some of the most prevalent pitfalls that can hinder the progress, reliability, and validity of psychological research.

Chambers navigates the landscape of psychological science, offering readers a critical exploration of the field's shortcomings while simultaneously advocating for a much-needed, more rigorous and transparent approach to research.

The book's title is a nod to the seven deadly sins of Christian theology and immediately makes readers aware that some practices used in research can lead to completely biased or misguided conclusions.

The seven sins are as follows:

Bias – favouring the results that confirm one’s existing beliefs while at the same time ignoring or devaluing evidence that doesn’t
Hidden Flexibility – manipulating data through p-hacking (collecting and/or selecting data or statistical analyses until nonsignificant results become significant) or HARKing (Hypothesizing After the Results are Known)
Unreliability – disregard for the replication of findings, leaving studies lacking statistical power
Data Hoarding – the refusal to share data or make them openly available for checks and further analysis
Corruptibility – the outright manipulation of data to make the result more favourable to one's hypothesis
Internment – restricting access to results, often through the high subscription costs of scientific journals
Bean Counting – using meaningless metrics just to highlight the desired outcomes.
One of the book's notable strengths is Chambers' ability to explain, in an accessible manner, statistical and methodological concepts that many readers would find complex. The author uses real-world examples and anecdotes that help demystify these concepts, allowing both seasoned researchers and those new to the field to grasp the underlying issues at hand.

Chambers takes a pragmatic and solution-oriented approach to addressing these issues. He does not just want to highlight the problems in research; he also proposes several practical strategies and reforms that can help mitigate the impact of these sins. He calls for increased transparency, preregistration, and open data as paths toward enhanced research rigour and reproducibility. By emphasizing the importance of ethical conduct, intellectual humility, and a commitment to advancing knowledge over personal or professional gain, Chambers encourages researchers to embrace a more honest and accountable approach to their work.

While the book may primarily seem to be a critique of the field, it does not lose sight of the genuine contributions and advancements made in psychology.

In conclusion, The 7 Deadly Sins of Psychology is a timely and thought-provoking examination of the challenges that confront contemporary psychological research. Chris Chambers provides readers with a comprehensive analysis of the pitfalls that researchers often encounter and practical strategies for promoting transparency, rigour, and ethical conduct.

This book is a valuable resource for psychologists, researchers, students, and anyone interested in the pursuit of sound scientific inquiry.
Lawrence Grandpre
120 reviews · 45 followers
May 26, 2021
This book is designed more for folks in the field, but it is pretty accessible to those outside it. I wish he had more specific examples, but you get some idea of how the replication crisis came about, that it is indeed real, and some of the practices folks engage in which perpetuate this reality. The math is pretty accessible, minus a short segment on statistical power and Bayesian methodology, but even there you can get the gist of what is going on. I wish he had some concrete examples of how to think about the bevy of ways this relates to applied psychology and psychology in policymaking, but that is work for other books. It's not quite that psychologists are just lying, but that if they were, we would not/could not know based upon current practices. What the book isn't talking about is how many of these methodological practices stem from deep-seated assumptions about knowledge and power which seem to be endemic to certain Eurocentric ways of viewing the world, where post-Enlightenment science has become a quasi-religious practice to express man's power over the world and other men, with an obsession with prestige, dominance, and hierarchy. Other books like Marimba Ani's Yurugu have hit on this point, and it would be interesting to see the findings of this text as a reflection of that work, especially given some of the deeply violent technologies and practices of control which have been inflicted on oppressed bodies in the name of what now appears to be largely shoddy "science" done in the field of psychology.
Profile Image for Patrick Hurley.
65 reviews2 followers
July 18, 2021
This book is absolutely fantastic for an academic audience. As an accounting academic who dabbles in psychology I was already familiar with a number of the arguments set forth by the author. However, the depth and richness of detail backing up each claim was tremendous and left me flipping through pages. The whole time I was thinking about how to instantiate these changes within accounting research, whether there are extreme fraudsters like Stapel and perhaps Smeesters (Jim Hunton was certainly exposed as one in accounting who fabricated a large number of high-profile papers), and what pushback one might expect when pushing for these changes in accounting. This should be a must read for any academic who does experimental work. If I teach a doctoral seminar in the future, this will absolutely be required reading.

There are many problems in academia. To be able to understand and support the solutions, you have to understand the scope and gravity of the problems. This book gives you all of the above in a clear and concise way. 5/5, totally recommend with all the enthusiasm in the world.
Profile Image for Remi Gau.
24 reviews4 followers
August 26, 2018
I have (finally) finished reading Chris Chambers's book on why everything is fucked in the land of psychological research.
I highly recommend it: especially if you have no idea what the replication crisis is, but even if you find it easy to figure out the odds that a significant result will replicate if you use the same sample size as the original study (*).
Chambers goes through the main reasons behind the current state of affairs: psychological biases, analytical flexibility, reliance on poor statistics and too-small samples, data hoarding and the lack of openness in publishing, and the over-reliance on impact factors and other bean-counting practices.
The last chapter offers some solutions, focusing mostly on registered reports, which force science to be judged on the methods and not the results, and which draw a clear line between confirmatory and exploratory research.

* The answer is 50%.
Profile Image for Sam.
3,454 reviews265 followers
August 5, 2021
This is an important book for all the sciences, not just psychology, as it shows where the research world can go wrong and lose its way and purpose, and how to get it back, even when it seems too late to fix. Chambers runs through the seven deadly sins of bias, hidden flexibility, unreliability, data hoarding, corruptibility, internment, and bean counting, and how these have affected research in psychology and, ultimately, the wider world. He refers to different sciences to show how such issues are addressed there, and how different attitudes are both a cause and a consequence of better science. He rounds it all off with a chapter entitled Redemption, which is a call to all those in the sector to push for change, and for the right change. On the whole, this was a good read and fairly accessible, although some of the stats issues did take a bit of re-reading to get my head around!
Profile Image for Emīls Sietiņš.
94 reviews9 followers
February 27, 2019
"There is nothing noble in being superior to your fellow person - true nobility is being superior to your former self." (p.216)

I highly recommend this book to anyone who is thinking of becoming a researcher in the field of social sciences or thinking of getting their PhD in psychology. As I am writing up my first ever manuscript which is based on my senior thesis in social psychology, this book gave me lots of good advice and helped me see how, in science, we must value data and the pursuit of truth much higher than the prestige and wealth of researchers.
Profile Image for Adil.
104 reviews19 followers
May 9, 2021
I taught a university course based on this book in Spring 2021. The book served us really well. It's highly engaging and not too technical (but not watered down either). Everyone in empirical research would benefit from reading this, not just researchers in psychology. The book will need an update soon because science is evolving at a rapid pace, but this is not a major problem as of 2021 (it depends a bit on how up to date the reader is with the relevant developments). Chambers did a great service to psychology and to science by bringing all this together.
4 reviews
November 14, 2018
Highly recommend this book for any researchers, or anyone interested in this area. I will be starting my PhD in psychology soon, so it is good to be aware of these prominent issues now, and of the solutions that can make my future research more transparent. The writing style is also very engaging, which made the book hard to put down. This usually only happens with fiction for me!!
Profile Image for Görkem Saylam.
37 reviews1 follower
January 17, 2024
The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice | Chris Chambers
Scoring rubric:
1: baseline
2: creative contextualization, for "reforming the psychology practice scientifically" as a groundbreaking theme
2: creative conceptualization, for correcting these sins by embracing open science
Total: 5 points out of 5
Author 4 books14 followers
May 30, 2018
Great overview of what's wrong in psychological science (and to a certain extent in science as a whole) and what can be done against it. Must read for scientists, science communicators and science journalists.
91 reviews
May 11, 2020
Very informative book about how research results in general can easily be skewed by the researchers, and how data can easily be manipulated. A book that should be read by anyone who conducts research at any level.
Profile Image for Bárbara Nazaré.
13 reviews13 followers
July 12, 2017
A must-read for anyone who is interested in psychology research and who strives for its scientific status. Pertinent problems, along with imperative solutions, are clearly identified.
Profile Image for Michal.
Author 1 book1 follower
August 12, 2019
One star down for making me depressed.
Profile Image for Celly.
4 reviews
August 24, 2020
At the bare minimum, this should be part of every Psychology degree.
Profile Image for Ellie Berry.
4 reviews8 followers
November 29, 2020
Fair play - Chambers should be hailed as one of the leading members of a revolutionary movement.
Profile Image for E.
53 reviews
December 15, 2020
very informative. practice open science or else
Profile Image for Hellen.
299 reviews33 followers
December 6, 2021
Very thorough, pedagogical and constructive, although not as accessible for people not already studying/working in behavioral sciences as the book description led me to believe.
Profile Image for Cal Davie.
237 reviews15 followers
October 4, 2022
Psychology can be a shitshow. This honest book outlines the problems and solutions in a clear and concise manner, and gives hope to the field which is still in disarray.