
Science, Policy, and the Value-Free Ideal

The role of science in policymaking has gained unprecedented stature in the United States, raising questions about the place of science and scientific expertise in the democratic process. Some scientists have been given considerable epistemic authority in shaping policy on issues of great moral and cultural significance, and the politicizing of these issues has become highly contentious. Since World War II, most philosophers of science have endorsed the ideal that science should be “value-free.” In Science, Policy, and the Value-Free Ideal, Heather E. Douglas argues that such an ideal is neither adequate nor desirable for science. She contends that the moral responsibilities of scientists require the consideration of values even at the heart of science. She argues for a new ideal in which values serve an essential function throughout scientific inquiry, but where the role values play is constrained at key points, thus protecting the integrity and objectivity of science. In this vein, Douglas outlines a system for the application of values to guide scientists through points of uncertainty fraught with moral valence. Following a philosophical analysis of the historical background of science advising and the value-free ideal, Douglas defines how values should, and should not, function in science. She discusses the distinctive direct and indirect roles for values in reasoning, and outlines seven senses of objectivity, showing how each can be employed to determine the reliability of scientific claims. Douglas then uses these philosophical insights to clarify the distinction between junk science and sound science to be used in policymaking. In conclusion, she calls for greater openness about the values utilized in policymaking, and for more public participation in the policymaking process, suggesting various models for the effective use of both the public and experts in key risk assessments.

256 pages, Paperback

First published January 1, 2009


About the author

Heather E. Douglas

3 books · 7 followers

Ratings & Reviews



Community Reviews

5 stars: 26 (32%)
4 stars: 27 (34%)
3 stars: 21 (26%)
2 stars: 4 (5%)
1 star: 1 (1%)
Displaying 1 - 6 of 6 reviews
Matthew Brown
130 reviews · 34 followers
May 31, 2012
I've read this book several times since my initial rating, and I'm upping it to 5 stars because it really deserves it. The more I work with it, the more I think Heather's is one of the better and more important books in philosophy of science of the last 10 years. The inductive risk argument has, I think, overtaken the underdetermination argument in terms of the debate about values in science, and that's all due to Heather. I'd call this a must-read for anyone working on general epistemology of science or on interaction of science and policy.

A few things to complain about, though. The chapter on the history of science policy is too dry and laundry-list based. Its main aim seems to be getting a bunch of information down in one place, and while this is admittedly useful in terms of reference, it is maybe just barely worth reading in the context of the book. It could have been made more interesting, done more analytically, or integrated more tightly into the book as a whole. Plus some really interesting and important parts of the story are left out, e.g., the NSF programs on "Interdisciplinary Research Relevant to Problems of Our Society" (IRRPOS) and "Research Applied to National Needs" (RANN).

Another worry I have has to do with the historical story, especially the "Whatever happened to Rudner" question. Here's an excerpt from a blog Q&A Heather did for my graduate class in Fall '09:

Q: Are there not more compelling arguments for the value-free ideal in light of Rudner’s arguments? As it stands, we had a hard time seeing the philosophical motivations for the eventual acceptance of the ideal in the mid-20th C.

Heather Douglas: When I first began looking at this literature, I was really surprised at how weak the arguments for the value-free ideal were. Now, it might be that I missed something in the historical body of work from that period, and so I would love to hear about key aspects of arguments I just overlooked. There could also be arguments made for the value-free ideal that were not articulated at the time – perhaps about the need for similar standards across scientists to assist with the unity of science. Of course, Kuhn 1977 would make that sort of approach problematic. I have a hard time figuring out any purely philosophical motivations, so I would be open to the excavation of them.


I'm still not happy, but I don't have an alternative answer, either!

That's small potatoes, though. It's a fantastic book!

---
Acquired 8/24/09 - desk copy, w00t!

This will be the main text for most of October in my grad seminar. I'm especially interested in the history of the Science Advisor. Also keen to get the whole story of C. West Churchman, Richard Rudner, etc. (which I've skimmed around in a bit already).
Sharad Pandian
434 reviews · 166 followers
March 25, 2020
4 stars for being ingenious and eclectic, even though the systematic philosophical account of science (like any of its kind) is fairly useless as it stands.

Douglas attacks the value-free ideal, which holds that science needs to be autonomous from society to function. She draws on Hempel's notion of inductive risk to get a foot in the door - science works through induction, induction can never establish anything with total certainty, this means that scientists always have to weigh not just the consequences of the theory when it's right, but also the consequences for being wrong.

Although she mentions considering positive and negative consequences of the theory being correct (citing work on "forbidden knowledge" by Deborah Johnson and Kitcher), she sneakily moves on to only consider the consequences of the theory being wrong.

She argues that although values - which she divides into the ethical, social, and cognitive (not to be confused with the epistemic, which mark reliability) - cannot substitute for evidence directly, they should always help determine how much evidence is enough to accept a claim:

Two clear roles for values in reasoning appear here, one legitimate and one not. The values can act as reasons in themselves to accept a claim, providing direct motivation for the adoption of a theory. Or, the values can act to weigh the importance of uncertainty about the claim, helping to decide what should count as sufficient evidence for the claim. In the first direct role, the values act much the same way as evidence normally does, providing warrant or reasons to accept a claim. In the second, indirect role, the values do not compete with or supplant evidence, but rather determine the importance of the inductive gaps left by the evidence. (96)

Values are not the same kind of thing as evidence, and thus should not play the role of providing warrant for a claim. Yet we can and do have legitimate motives for shifting the level of what counts as sufficient warrant for an empirical claim. (97)

In her mind, value-based evaluation takes on more prominence the less evidence there is. So the more evidence accumulates, the less role values will play in reasoning:

More evidence usually makes the values less important in this indirect role, as uncertainty reduces. Where uncertainty remains, the values help the scientist decide whether the uncertainty is acceptable, either by weighing the consequences of an erroneous choice or by estimating the likelihood that an erroneous choice would linger undetected. (96)

maintaining a distinction in the kinds of values to be used in science is far less important than maintaining a distinction in the roles those values play. (98)

For her, it has to be scientists making decisions, because

only the scientist can fully appreciate the potential implications of the work, and, equally important, the potential errors and uncertainties in the work. And it is precisely these potential sources of error, and the consequences that could result from them, that someone must think about. The scientists are usually the most qualified to do so. (73-4)

As for what kinds of risks they need to be sensitive to, she initially seems to suggest it is the ones organically determined by scientific communities:

Scientists should be held to a similar standard of foresight, but indexed to the scientific community rather than the general public. Because scientists work in such communities, in near constant communication and competition with other scientists, what is foreseeable and what is not can be readily determined. (83)

But later, in the last chapter, she argues for increased input from communities, mentioning for example analytic-deliberative processes in which analyst scientists engage in dialogue with stakeholders and representatives of the public, for a deeper appreciation of the values at stake (159-67).

The problem, it seems to me, is that Douglas offers little guidance in describing how social/ethical consequences are meant to be taken into account to decide where the level of acceptable evidence should be set. Because she mentions Galileo somewhere, consider: what if the Church decided that any risk to its authority risked the eternal soul of man, and that therefore the level of evidence needed to overturn any of its canonical claims should be practically infinite?

The point is that, like so many of her analytic philosophy colleagues, Douglas takes for granted a world suffused with scientific infrastructure and its basic reliability, and is simply trying to rearrange some popular public notions in palatable ways. So when she considers expert disagreement, she claims that unless there's faulty reasoning or unethical behavior, the disagreement is probably due to disagreements about value-informed acceptability thresholds (just like her theory!). She fails to consider that cognition isn't naturally oriented to some transcendental truth, and that coordination is an achievement, not a given (think of the work of the last 50 years of sociologists of science).

This is perhaps most starkly visible in a mini-discussion on "objectivity." Pointing to how objectivity is ascribed to many different things - "objective knowledge, objective methods, objective people, objective observations, and objective criteria" (115) - she suggests that a common feature of all this is the assertion of intersubjective trustworthiness:

To say a researcher, a procedure, or a finding is objective is to say that each of these things is trustworthy in a most potent form (see Fine 1998, 17–19). The trust is not just for oneself; one also thinks others should trust the objective entity too. (116)

She then offers "seven bases for such a trust" (116), or seven ways of being objective,

human-world interactions
1. manipulable (from Hacking)
2. convergent (same result through multiple avenues)

individual thought processes
3. detached (not using values directly)
4. value-neutral ("taking a position that is balanced or neutral with respect to a spectrum of values" (123), or being "reflectively centrist" (124))

social processes
5. procedural ("same outcome is always produced, regardless of who is performing the process" (125))
6. concordant ("if some set of competent observers all concur on the particular observation" (125))
7. interactive (a group coming to consensus after discussion)

The obvious question is...why call this "objectivity"? And if these are all about declaring that something is trustworthy, is "objectivity" not fundamentally social? Are these not also all contestable and, even when they occur cleanly, contingent achievements?

She comes tantalizingly close to a more subtle treatment when, in her discussion of procedural objectivity, she mentions:

Theodore Porter’s historical work traces the development of this sense of objectivity in the past two centuries (Porter 1992, 1995). In his examination of objectivity in accounting, he shows how the focus on rules, particularly inflexible and evenhanded ones, lent credence to the field of accounting (1992, 635–36). In Trust in Numbers (1995), Porter expands the areas of examination, looking at the role of rule-bound quantification techniques across engineering, accounting, and other bureaucratic functions. Quantification through rules (as opposed to expert judgment) allows for both an extension of power across traditional boundaries and a basis for trust in those with power. Procedural objectivity thus serves a crucial function in the management of modern public life. (125)

Maybe an ahistorical deification of general, plausible-seeming epistemic principles is the wrong way to go, then? Maybe the serious work would be in working out how different norms are made and function in different contexts, instead of trying to come up with a general theory of this sort that simply aggregates slightly-refined intuitions? To come so close, and yet remain so far.

(One thing that annoyed me massively was when she began her book by dismissing the SSK school (among others) as arguing (paraphrasing) "against the authority but not the autonomy of science." If instead she had read them carefully and engaged with their complex case studies, she could have actually asked some tough questions about science, authority, and autonomy - including cases where non-scientists got it right before scientists did. But we have no time for anything that doesn't instinctively shill for science's authority here.)

Still, there's such a mishmash of stuff here - histories of American science policy advisors, good case studies of medical issues, summaries of government risk assessment-management strategies, and a robust multi-disciplinary work - that I quite enjoyed reading it. If you're going to do impractical, procrustean philosophy, might as well flame out like this.
3 reviews · 3 followers
January 13, 2022
Really excellent! A really potent attack on the value-free ideal (which is a bit of a misnomer, since no one really denies the importance of epistemic values in science). Chapter 6 on Objectivity in Science is worth reading closely. It does a great job of demystifying what we mean by the objectivity of scientific claims.
Áine
30 reviews
October 5, 2023
Tbh she lost me in some of it just because of the way she writes… like a lot of her points are saved by the analogies and concluding sentences she adds. The way she approaches and unpacks the implications of her opinions and views is a tedious read and I find myself asking if it was worth it.
259 reviews · 2 followers
September 26, 2020
This one was dry and too philosophical/detailed/wordy for my particular interest. A few new ways to consider values and objectivity within science.
