'The Design Inference: Eliminating Chance through Small Probabilities' is not for everybody: it helps to have a mathematical background, particularly in set theory (with its mathematical symbols) and probability theory. At times the book is difficult to follow because of its mathematical formality, which the author does try to explain, though his explanations only become clear to the average reader toward the end of the book. Certain parts might therefore prove frustrating to the ordinary reader; indeed, the author himself recommends that nontechnical readers try “skipping the technical portions” (p. xiii) they find difficult to understand. I suggest skimming over such parts, particularly in chapters 3 and 4.
The book starts by pointing out that there are three ways things can happen: (1) naturally (following the laws of physics, chemistry, and biology), (2) by chance, and (3) by design. If something happens all the time (a ball let go of always falls to the ground, for example), the probability is high (= 1): it is following the laws of nature, in this case the law of gravity. If, blindfolded, someone picks the one white ball from among it and 9 black balls, we can still attribute it to chance, because it entails an intermediate probability of 1/10 = 0.1 (a chance of 1 in 10). Again, throwing a double six with a pair of dice is not such a big deal, since the chance is 1 in 36 (probability 1/36 = 0.0278); and picking the ace of spades from an ordinary (bridge/poker) pack of cards could still be a matter of luck, since the chance is 1 in 52 (probability 1/52 = 0.0192). But if, blindfolded, one picks the single white ball from a hundred balls (probability 0.01), a thousand (probability 0.001), or a million (probability 10^-6), the rest all black, at some point one starts to suspect design. In other words, to conclude design we require a small probability of something happening (the smaller the better); but where is the cut-off point?
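These figures are easy to verify. Here is a short Python sketch (my own illustration, not from the book) that computes them as exact fractions:

```python
from fractions import Fraction

# Chance of each outcome discussed above, as an exact fraction.
examples = [
    ("white ball among 10", Fraction(1, 10)),   # one white, nine black
    ("double six",          Fraction(1, 36)),   # one of 36 equally likely pairs
    ("ace of spades",       Fraction(1, 52)),   # one card out of 52
]

for name, p in examples:
    print(f"{name}: p = {p} = {float(p):.4f}")
```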
According to the author, “Probability is a relation between events and experimental arrangements:” the 'ideal likelihood' of something happening “relative to certain background information” (a 'chance hypothesis'; pp. 70, 73). A probability/likelihood is assigned a number (a fraction) between 0 and 1, which is not always as easy to determine as in the examples above, since it may require a thorough knowledge of mathematics and statistics.
The author proceeds with, “Complexity theory measures the difficulty of a problem … complexity measures are measures of difficulty” (pp. 92–93). For example, suppose a safe has a combination lock marked with a hundred numbers (00 to 99), for which 5 turns in alternating directions are required to open it. Given only one opportunity, the probability of opening the lock is (1/100)^5 = 10^-10, while a corresponding complexity measure might be the inverse, 10^10. A complexity measure is thus a number ranging from zero to infinity. The author then defines complexity as “-log2 of the probability,” and justifies it by stating, “The most convenient way for communication theorists to measure information is in bits [zeros and ones] (this accounts for the base 2 in the logarithm).” For instance, he adds, “the ASCII code … uses strings comprising eight 0s and 1s to represent the characters on a typewriter” (p. 117). This gives a one-to-one correspondence between probabilities (running from 1 down to 0) and complexities (running from 0 up to ∞). (So, in the safe example, the complexity is -log2(10^-10) ≈ 33.22 bits.)
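The definition is easy to check numerically. This small Python sketch (mine, not the book's; the function name is my own) reproduces the 33.22-bit figure for the safe and the 8 bits of one ASCII character:

```python
import math

def complexity_bits(p: float) -> float:
    """Complexity as the author defines it: -log2 of the probability."""
    return -math.log2(p)

p_safe = (1 / 100) ** 5              # five turns, 100 positions each: 10**-10
print(complexity_bits(p_safe))       # ≈ 33.22 bits
print(complexity_bits(1 / 256))      # one 8-bit ASCII character: exactly 8.0 bits
```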
Naturally, we would strongly suspect design if someone managed to flip a coin 100 times and obtain heads every time; the coin is most probably not fair (both sides are heads, say). Yet if we flip a fair coin 100 times and record the sequence, although everyone agrees it is a random sequence, it will be practically impossible to repeat. The odds against getting the same sequence again are 2^100 to 1, or about 1.3x10^30 to 1: exactly the same as the odds against getting 100 heads in a row with a fair coin. (Lotteries are not won twice by the same numbers; if that happens, it arouses suspicion.) So what differentiates chance from design?
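A quick sketch (again my own) makes the symmetry plain: any particular 100-flip sequence, all heads or 'random'-looking alike, has the same minuscule probability:

```python
# Probability of reproducing any one fixed 100-flip sequence of a fair coin:
p = 0.5 ** 100
print(p)           # ≈ 7.9e-31, whether the target is all heads or a "random" record
print(2 ** 100)    # 1267650600228229401496703205376, about 1.27 x 10**30
```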
Imagine someone standing a hundred-odd yards from a forest in the middle of winter (the trees have no leaves) who shoots an arrow at the trees. The arrow misses the first two rows of trees and lands in the third row, say. One can safely say it was a random shot. Now, if one draws a bull's-eye circle on the tree around where the arrow landed, we know it is a 'fabrication.' However, if one draws the bull's-eye circle prior to the shot, and the arrow lands in its center, we know we are dealing with a master archer. In the latter case the bull's-eye circle is a 'specification,' as opposed to a fabrication. In other words, to differentiate design from chance, besides a small probability we need a specification.
Still, a specification (pattern) can be discovered after the fact; that is what detectives must do: find the circumstantial evidence, reconstruct the crime, and prove the perpetrator guilty beyond reasonable doubt. Moreover, the pattern is not always discernible (detachable) at first blush, as in cryptography, say. Take, for example, the sequence 0100011011000001010011100101110111, which seems random enough; but look at the same sequence separated by commas: 0,1,00,01,10,11,000,001,010,011,100,101,110,111. It is simply all the one-, two-, and three-digit binary strings in ascending order, though that is not so easy to see at first blush.
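The pattern can be reconstructed mechanically; this little Python sketch (my illustration) generates the binary strings and concatenates them into the 'random-looking' sequence:

```python
from itertools import product

# All binary strings of lengths 1 to 3, in ascending order: 0, 1, 00, 01, ..., 111
blocks = ["".join(bits)
          for length in range(1, 4)
          for bits in product("01", repeat=length)]

print(",".join(blocks))   # 0,1,00,01,10,11,000,001,010,011,100,101,110,111
print("".join(blocks))    # 0100011011000001010011100101110111
```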
Consequently, the author states his 'Law of Small Probability': “specified events of small probability do not occur by chance” (p. 5). Naturally, this raises the question: how small is a small probability? There are two kinds of small probability: (1) on a local scale and (2) on a cosmic scale. A scientific experiment, for instance, defines a 'rejection region' (a prediction) in advance; if the observed outcome falls within it, the chance hypothesis is statistically excluded.
On the other hand, a result that satisfies the 'null hypothesis' or mathematical expectation (the theoretical probability calculation) exactly can be too good to be true; and although ordinary statistics has no way of rejecting it, the design inference considers it fudged if it is matched too closely. For example, if a hundred-coin-flip sequence consists of fifty repetitions of heads followed by tails, we know it is probably a faked sequence, because it follows the expected theoretical result too closely.
Moreover, how small is small enough depends on the opportunities available. If a robber tries to open a combination safe, it matters how many attempts he gets: if he is allowed only one try, it is one thing; if he has all night, it is another. In the Canadian 'Lotto 6/49' a player must choose six of the first 49 natural numbers. The chance of winning this lottery is 1 in 13,983,816 (probability 7.15x10^-8), which is extremely small; but if 14 million tickets are purchased, someone is quite likely to win, since the expected number of winning tickets is then about one. Indeed, there is a winner roughly every three draws.
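The lottery arithmetic can be checked directly. In this sketch (mine, and assuming the 14 million tickets are chosen independently at random, which real buyers do not quite do) the chance that at least one ticket wins comes out to roughly 0.63:

```python
from math import comb

n = comb(49, 6)        # 13,983,816 possible tickets
p = 1 / n              # ≈ 7.15e-08 chance per ticket
tickets = 14_000_000

# Chance that at least one of 14 million independent random tickets wins:
p_someone = 1 - (1 - p) ** tickets
print(n, p, p_someone)   # p_someone ≈ 0.63
```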
The author therefore introduces the concept of 'probabilistic resources.' This, I think, is the highlight of the book: his determination of the 'cosmic' probabilistic resources as 10^150 (p. 209), which anchors his Law of Small Probability. He assumes “a noninflationary big-bang cosmology” (p. 214) and derives the figure as the product of the number of elementary particles in the universe (10^80), the number of seconds the universe has been (or will be) in existence (10^25 seconds; actually, 20 billion years is closer to 10^18 seconds), and the number of physical states possible in one second (10^45), which is the inverse of the 'Planck time.' Physics contends that time cannot be split indefinitely finely: everything that occurs in the universe happens in what resemble movie frames of time, each 10^-45 of a second long (known as the Planck time). So long as the probability of any specified event is strictly less than half of 10^-150, we may safely infer design; however, there remains a remote possibility that we are wrong: keep in mind that any probability different from 0 is still only a probability.
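The derivation itself is just a product of powers of ten, as this sketch (using the book's figures as reported above) confirms:

```python
particles      = 10 ** 80   # elementary particles in the universe
seconds        = 10 ** 25   # the book's generous bound on the universe's lifetime
states_per_sec = 10 ** 45   # physical states per second (inverse of the Planck time)

resources = particles * seconds * states_per_sec
print(resources == 10 ** 150)   # True: the cosmic probabilistic resources
print(0.5 * 10 ** -150)         # the resulting design bound, half of 10**-150
```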
The author also debunks 'inflationary theory' (pp. 214–17) and the 'anthropic principle' (pp. 60–62, 182, 185), both of which imply a multi-universe (or 'multiverse'), as well as 'un-collapsed quantum probabilities' (p. 210n), as scientists' futile attempts at increasing the probabilistic resources (i.e., clutching at straws). For instance, he challenges the multiverse hypotheses as positing “universes that are by definition causally inaccessible to us” (p. 62), so that “the status of possible universes other than our own remains a matter of controversy” (p. 182). If anything, he adds, they are irrelevant to what happens in our universe (p. 214). And regarding quantum probabilities, he states, “A specification is a determinate state. Measurement renders a quantum superposition determinate by producing a pure state, but once it does so we are no longer dealing with quantum superposition” (p. 210n). He concludes, “This reversal of commonsense logic where we fixate on chance, and then madly rush to invent probabilistic resources necessary to preserve chance is the great pipedream of late twentieth-century cosmology and biology” (p. 215).
Some of the more advanced mathematical or statistical equations and formulas (geometric progressions, for instance) are simply given by the author with hardly any explanation or justification (e.g., pp. 67, 84, 176, 188, 203); and some mathematical symbols are too sketchily defined for the average reader. The critical reader who is not well versed in these fields will naturally remain somewhat unconvinced by the final result. Footnotes showing step-by-step calculations, derivations, or explanations would have helped convince such meticulous readers.
Finally, the author concedes that some rare, disconnected coincidences do not imply design and should be left unexplained: we do not have to invoke supernatural powers. But not every coincidence is sterile: he recommends testing 'interesting' events, those exhibiting intelligence and/or mutual dependence, by running them rigorously through his filter, thus eliminating doubt.
I think the book is somewhat disappointing in a couple of respects. First, the author refrains from inferring intelligent agency once design is arrived at through his 'Generic Chance Elimination Argument,' although he does concede there is a close connection between the two (pp. xii, 8–9, 19, 36, 60, 62–66, 226–227); indeed, he points out, intelligent agency can also purposely mimic chance (p. 36). Intelligence, he adds, entails a learning ability and manifests itself in a deliberate choice of a course of action.
Second, even though the author refers to the chance-versus-design question for the origin of life several times (pp. iii, xii, 2, 55–62, 102–103, 182–183), he does not give his final opinion on the matter: he refrains from challenging mainstream evolutionary accounts of the origin of life (Richard Dawkins's, for example), perhaps to keep his book as 'scientific' as possible. Indeed, the author exonerates Dawkins because the latter claims that Darwinian natural selection (survival of the fittest) provides a gradual, cumulative route to an outcome that, taken in a single step, would have a very small probability. For example, if one flips 100 coins one at a time, setting each coin aside once it shows heads and then moving on to the next, one can obtain 100 heads in short order: needless to say, this scenario is a far cry from flipping 100 heads in a row with no tails in between. However, in his book 'The Blind Watchmaker,' Dawkins assumes a viable replicator arising by chance alone before natural selection takes over, and the odds against that happening are still astronomically high. Understandably, the author did not want to go down that route; and, admittedly, it is somewhat beyond the scope of his book.
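The contrast between the cumulative and the all-at-once scenario is easy to simulate. Here is a minimal Python sketch (my own, assuming a fair coin; the function name is mine):

```python
import random

def flips_to_all_heads_cumulative(n=100, seed=0):
    """Cumulative route: flip each coin until it shows heads, set it
    aside, and move on to the next. Return the total number of flips."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        while True:
            total += 1
            if rng.random() < 0.5:   # heads
                break
    return total

print(flips_to_all_heads_cumulative())   # typically around 200 flips in all
# Single-step chance of 100 heads in one all-or-nothing trial:
print(0.5 ** 100)                        # about 8 x 10**-31, a far cry indeed
```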
In conclusion, 'The Design Inference' is a great, ground-breaking book, especially for those who are keenly and honestly interested in the origin of life and of our universe.