The Foundations of Scientific Inference

Not since Ernest Nagel's 1939 monograph on the theory of probability has there been a comprehensive elementary survey of the philosophical problems of probability and induction. This is an authoritative and up-to-date treatment of the subject, and yet it is relatively brief and nontechnical.

Hume's skeptical arguments regarding the justification of induction are taken as a point of departure, and a variety of traditional and contemporary ways of dealing with this problem are considered. The author then sets forth his own criteria of adequacy for interpretations of probability. Utilizing these criteria, he analyzes contemporary theories of probability, as well as the older classical and subjective interpretations.

168 pages, Paperback

First published January 1, 1967


About the author

Wesley C. Salmon

22 books, 6 followers
Full name: Wesley Charles Salmon.

Community Reviews

5 stars: 11 (55%)
4 stars: 4 (20%)
3 stars: 4 (20%)
2 stars: 0 (0%)
1 star: 1 (5%)
Brian Powell
204 reviews, 36 followers
April 3, 2018
Salmon's book covers the broad problem of scientific inference, from induction and its challenges, to potential solutions, to alternative methods of inference like deductivism. This is not a book on induction per se, but given its central role in scientific inference, it sets the tone and directs the course of Salmon's investigations.

According to empiricism, knowledge must be founded on evidence. David Hume's problem of induction concerns how knowledge is actually derived from evidence. It is a logical problem about the relationship between evidence and conclusion, in particular, whether our attempts at extrapolating knowledge of the observed to the unobserved are logically justified. Logically, inductive arguments are essentially deductions with missing premises, and this is the problem: how can we be sure of conclusions built on such shaky ground?

In Chapter 2, Salmon considers a variety of solutions to the problem of induction: some attempt a direct counter, while others consider altogether different routes to scientific inference. One such approach, the hypothetico-deductive method, is investigated as an induction-free process of inference. The idea is that predictions are deduced from hypotheses, which are then tested against empirical evidence. The process is apparently fully deductive, but there is one initial, glaring problem: suppose we find that the collected data confirm the prediction; are we to conclude that the hypothesis is correct? Not without committing the unforgivable logical crime called "affirming the consequent". The fallacy arises because there could be other hypotheses, different from the one being tested, which also happen to support the given set of empirical data. Simply put, we cannot deduce a unique hypothesis from data: "While we are concerned with the status of the general hypothesis -- whether we should accept or reject it -- the hypothesis must be treated as a conclusion to be supported by evidence, not as a premise lending support to other conclusions. The inference from observational evidence to hypothesis is certainly not deductive." (p. 19) Further, hypotheses don't just fall from the sky. The construction of a hypothesis might be a creative, non-logical affair, but it certainly isn't deductive: deduction is non-ampliative, and a non-ampliative process could never yield an informative, generalized hypothesis. As Salmon puts it, "A scientific theory that merely summarized what had already been observed would not deserve to be called a theory. If scientific inference were not ampliative, science would be useless for prediction, postdiction, and explanation." (p. 20)
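
To make the underdetermination concrete, here is a minimal Python sketch (the hypotheses and the prediction are invented toys of mine, not examples from the book): several distinct hypotheses all entail the same observation, so observing it cannot deductively single out any one of them.

    # Toy illustration of affirming the consequent: many hypotheses, one prediction.
    def entails(hypothesis, prediction):
        """A hypothesis 'entails' every prediction it lists (toy entailment)."""
        return prediction in hypothesis["predictions"]

    hypotheses = [
        {"name": "H1: all ravens are black", "predictions": ["the next raven is black"]},
        {"name": "H2: all local birds are black", "predictions": ["the next raven is black"]},
        {"name": "H3: someone paints every raven", "predictions": ["the next raven is black"]},
    ]

    observed = "the next raven is black"
    # Every hypothesis survives: the data confirm the prediction but fix no hypothesis.
    print([h["name"] for h in hypotheses if entails(h, observed)])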

Next, we come to Prof. Popper's deductivism, another attempt to do inference without induction. Popper maintains that the only logical method available to scientific inference is falsification: it is possible to falsify a general statement by observing only a single contradictory instance or event, whereas we have just noted the impossibility of confirming a general statement by observing particular instances or events (as noted above, a body of empirical evidence cannot single out a unique hypothesis). Falsification therefore seems to sidestep the deductive fallacy committed in confirming hypotheses. A hypothesis that survives attempted falsification can be corroborated, gaining favor over competing hypotheses by being more falsifiable (which is another way of saying that the hypothesis has more content). The important point is that Salmon doesn't buy it: there is more than just deduction in Popper's program. After all, we are learning something in the corroboration of hypotheses, and we know deduction to be non-ampliative. In fact, the process of corroboration is a "nondemonstrative form of inference. It is a way of providing for the acceptance of hypotheses even though the content of these hypotheses go beyond basic statements (particular statements of observed fact). Modus tollens without corroboration is empty; modus tollens with corroboration is induction." (p. 26) Elsewhere, "To maintain that the truth of a deduced prediction supports a hypothesis is straightforwardly inductive." (p. 109) Or, at least something akin to it.
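
The deductive asymmetry Popper exploits is easy to exhibit in code. A small sketch, with a made-up universal claim of my own: a single counterexample refutes the claim via modus tollens, while any finite run of confirming instances leaves it unproven.

    # Falsification: one counterexample kills a universal claim; confirmations never prove it.
    def falsified(universal_claim, observations):
        return any(not universal_claim(x) for x in observations)

    def all_swans_white(swan):
        return swan == "white"

    print(falsified(all_swans_white, ["white"] * 1000))              # False: unrefuted, not proven
    print(falsified(all_swans_white, ["white"] * 1000 + ["black"]))  # True: one black swan suffices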

Returning to induction, Salmon explores whether there is any hope of establishing a uniformity of nature on which to base inductive inference. He concludes similarly to Skyrms: "The problem is how to ferret out the genuine uniformities: coincidence vs genuine causal regularity" (p. 42). Compounding the problem is that the uniformity of nature is itself an empirical question: we cannot use induction to infer global uniformity (since induction is what we are trying to justify through this very uniformity), and it doesn't appear (to me) that we can get very far using Popper's deductivism either, since induction requires that the uniformity hold universally, not merely that it not be falsified by a local observation. Without an a priori statement of the uniformity of nature (as with, for example, Kant's synthetic a priori propositions), we appear stuck, no matter how strongly experience suggests such regularities.

What about a probabilistic interpretation of induction? After all, inductive conclusions are not supposed to hold with certainty, and strong induction is defined by its conclusions holding true with high probability. Chapters 4 and 5 spend a lot of time discussing probability, investigating and mulling the various interpretations of what it means for something to be probable. The two most popular conceptions of probability, frequency of occurrence and degree of rational belief based on available evidence, both fall short of supplying a version of probability adequate to resolving the problem of induction (all versions end up somehow needing to assume uniformity or invoke induction in circular ways). But that's OK -- by this time Salmon has set the stage for an improved, probabilistic hypothetico-deductive method that incorporates some of Popper's concerns. This is what remains after the smoke settles on the bloody, corpse-littered battlefield of scientific inference: a hopeful procedure built from the exploded shrapnel of the other brave but failed attempts at a solution.
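
As a quick illustration of the frequency conception (a standard textbook simulation, with parameters of my own choosing): probability is identified with the limit of the relative frequency, but any finite run of observations is deductively silent about that limit, which is where Salmon's circularity worry bites.

    # Relative frequency drifting toward the long-run value 0.3.
    import random

    random.seed(0)
    successes = 0
    for n in range(1, 10_001):
        successes += random.random() < 0.3   # event with long-run frequency 0.3
        if n in (10, 100, 1_000, 10_000):
            print(n, successes / n)          # finite frequencies; the limit is never observed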

The idea is to use Bayes' theorem, which furnishes the probability of the hypothesis given the evidence. Bayes' theorem relates this quantity to the probability of the evidence given the hypothesis: these are the two quantities of interest to the standard hypothetico-deductive method, but there's more. Two additional vital ingredients appear: the prior probability of the hypothesis and the probability that the evidence would obtain even if the hypothesis were false, i.e., under a different hypothesis. This latter quantity is essential for avoiding the logical trap of affirming the consequent that we mentioned earlier. The prior probability favors plausible hypotheses: surely subjective, but here is where we can bring prior knowledge to bear on the question of the plausibility of the hypothesis. Nowadays, a hypothesis that cited evil spirits as the cause of a new kind of migraine would be roundly considered less plausible than one based on neurobiology, and rightly so. The prior is key to singling out the plausible hypotheses from the infinitude of riffraff conjectures, saving us the impossible task of testing all of them. Meanwhile, the third ingredient -- the probability of the evidence under a different hypothesis -- is what Popper has in mind when he talks about falsifiability: if this quantity is large, the hypothesis under consideration is not strongly falsifiable and is therefore only weakly corroborated by the data. Bayes' theorem appears to salvage the hypothetico-deductive method: "it provides a coherent schema in terms of which we can understand the roles of confirmation, falsification, corroboration, and plausibility." (p. 120)
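
A minimal numerical sketch of this schema (the numbers are invented for illustration; this is my rendering of the idea, not a calculation from the book). Writing P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)], the second term in the denominator -- the probability of the evidence under rival hypotheses -- is exactly what blocks affirming the consequent.

    # Bayes' theorem with a prior and an explicit alternative hypothesis.
    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        evidence = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / evidence

    # Evidence nearly as probable under rivals: confirmation barely moves the prior.
    print(posterior(0.5, 0.9, 0.85))   # ~0.51
    # Evidence improbable under rivals (a "risky" prediction): strong corroboration.
    print(posterior(0.5, 0.9, 0.05))   # ~0.95

The second case is Popper's point recast in Bayesian terms: the riskier the prediction, the more its success counts for the hypothesis.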

As a Bayesian, I find ending as Salmon does with Bayes' theorem extremely satisfying. A relation so simple and powerful, it is the red bow on the garbage bag of failed inference methods.
