186 pages, Unknown Binding
First published May 1, 2013
People concerned about peaks (a perfectionist value) will want to know whether the very best aspects of our civilization will surpass the best of what has gone before. People concerned about variety will want to know whether people in the next period will be flourishing in new and different ways. People concerned about troughs will want to know whether the last period involved unprecedented levels of suffering, thinking that it will matter more if no record-breaking suffering happens in the next period. People who care about shape will want to know whether things have been getting better over time, that is, whether we are on an upward trajectory. And people who care about averages and cross-period equality will want to know quite a bit about how things went in the distant past.
In which case is colonization more important, The Last Colony or The Very Last Colony? According to Period Independence, colonization is equally important in each case. Intuitively, though, it is more plausible to claim that colonization is much more important in The Last Colony. The thought seems to be that since 100 trillion years is so much more than 1 billion years, additional flourishing has less value if our descendants have already survived for so long. If we accept a Capped Model, we can, and probably should, try to accommodate the judgment that colonization is less important in the case of The Very Last Colony.
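One way to see the Capped Model's verdict is with a toy concave value function (the functional form and the constants below are my own illustrative choices, not anything proposed in the text): because total value approaches a cap, doubling a civilization's span adds a great deal when only 1 billion years have elapsed, and almost nothing after 100 trillion.

```python
import math

CAP = 1.0  # normalized upper bound on total value (illustrative)

def capped_value(years):
    """Toy capped value function: rises toward CAP as duration grows.

    The timescale constant k is an arbitrary illustrative choice.
    """
    k = 1e9
    return CAP * (1 - math.exp(-years / k))

# Value gained by doubling the civilization's span in each scenario:
# The Last Colony: descendants have flourished for 1 billion years.
# The Very Last Colony: descendants have flourished for 100 trillion years.
gain_last = capped_value(2e9) - capped_value(1e9)
gain_very_last = capped_value(2e14) - capped_value(1e14)

# The same doubling is worth far more in The Last Colony, matching the
# intuitive judgment the Capped Model is meant to accommodate.
assert gain_last > gain_very_last
```

The particular curve does no philosophical work; any strictly increasing function bounded above would deliver the same qualitative comparison.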
Views in the unbounded category include: total views in population ethics, and variants on those views (such as critical-level views); additively separable theories of self-interest, such as classical hedonism; and some theories of diminishing marginal value of population (such as average utilitarianism, Hurka's "variable value view" (1983), Ng's Theory X' (1989), and Sider's GV principle (1991)).
Fanaticism implies a violation of the continuity axiom of expected utility theory on its face, whereas you have to do a bit of work to see that this is true for a reckless agent (we'll do that work in the next subsection). The continuity axiom says that for any three outcomes A, B, and C, where A is preferred to B and B is preferred to C, there is some probability p > 0 such that getting A with probability p and C with probability 1 − p is exactly as good as getting B for sure. Fanatical agents violate this rule, since they treat any positive chance of certain infinitely good outcomes as better than any finitely good outcome for sure. For example, let A be going to heaven and being happy forever, let B be the best life any mortal has ever lived, and let C be a normal human life. For any probability p > 0, no matter how small, a fanatical agent prefers (A with probability p and C with probability 1 − p) to getting B for sure.
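The violation can be made concrete in a short sketch, using IEEE floating-point infinity as a stand-in for infinite value (the numeric stand-ins for B and C are arbitrary choices of mine):

```python
A = float('inf')   # going to heaven forever: infinitely good
B = 1_000_000.0    # best mortal life ever lived: finitely good
C = 100.0          # a normal human life: finitely good

def gamble_value(p):
    """Expected value of: A with probability p, C with probability 1 - p."""
    return p * A + (1 - p) * C

# Continuity demands some p > 0 at which the gamble is exactly as good
# as B for sure. But for every positive p, however tiny, the gamble's
# expected value is infinite and so strictly beats B:
for p in (0.5, 1e-6, 1e-300):
    assert gamble_value(p) > B
```

No finite choice of B changes the verdict, which is precisely why no p > 0 can equalize the gamble with B and why continuity fails for the fanatical agent.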
For almost any two things we might do, each has some probability of producing an infinitely valuable outcome. Therefore, for almost any two things we might do, both are equally good in terms of infinite considerations. [...] On a purely intuitive level, going to heaven for sure is obviously better than going to heaven with probability one in a million.
Infinite Research vs. Utopia: Our descendants reach the limits of technological progress and become highly confident (with probability 1 − 10^−N, for some really huge N) that achieving an infinite amount of good is impossible. They must decide how some vast amount of resources should be allocated between two projects: creating an extremely good (though only finitely good) utopia, or researching possible methods of achieving an infinitely good outcome.
An additively separable approach is mathematically inevitable given the following assumptions:
1. Expected Utility Assumption (for moral theories): The theories in which one has non-zero credence rank prospects using utility functions.
2. Expected Utility Assumption (for decision under moral uncertainty): One’s ranking of prospects (relative to one’s moral uncertainty) satisfies expected utility theory.
3. Pareto Assumption: The ranking of prospects (relative to one’s moral uncertainty) prefers prospects that are better according to some theories and worse according to none.
The result follows directly given a simple reinterpretation of Harsanyi’s Aggregation Theorem (Harsanyi 1955), and I provide a proof in the appendix.
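Spelled out, the reinterpretation swaps Harsanyi's individuals for moral theories; a standard statement of the resulting linear form (notation mine) is:

```latex
% Let T_1, \dots, T_n be the theories in which one has non-zero credence,
% each ranking prospects by a utility function u_i (Assumption 1).
% If the overall ranking U satisfies expected utility theory (Assumption 2)
% and the Pareto condition with respect to the theories (Assumption 3),
% then U must be a non-negatively weighted sum of the theories' utilities:
U(x) = \sum_{i=1}^{n} w_i \, u_i(x), \qquad w_i \ge 0 \text{ for all } i
```

which is exactly the additively separable form at issue.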
In summary, in order to follow a rational policy, we must be willing to pass up arbitrarily great gains, even at small risks (be timid), be willing to risk everything at arbitrarily long odds for the sake of enormous potential gains (be reckless), or rank our prospects in a non-transitive way.
just pick a ridiculously large finite number—get a team of smart mathematicians together for a month and see what the biggest number they come up with is—and set the upper bound of our utility function there. [...] We can just pick this upper bound large enough that, provided we live in a Small or Medium Universe, we can mostly ignore infinite considerations and just do what would be best with respect to finite considerations.
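The proposal can be sketched numerically (the particular cap and universe values below are arbitrary stand-ins of mine): as long as every achievable finite outcome falls below the cap, the capped utility function agrees with the uncapped one, so ordinary finite comparisons go through unchanged.

```python
CAP = 10 ** 100  # a ridiculously large finite upper bound (arbitrary stand-in)

def capped_utility(value):
    """Clip value at the cap; below the cap, utility just is the value."""
    return min(value, CAP)

# In a Small or Medium Universe, achievable outcomes sit far below the cap,
# so capping changes nothing about how finite options compare:
small_universe_outcomes = [10 ** 20, 10 ** 30, 10 ** 40]
for v in small_universe_outcomes:
    assert capped_utility(v) == v

# Only value beyond the cap gets discounted:
assert capped_utility(10 ** 200) == CAP
```

The design point is that the cap only ever binds in scenarios the proposal already sets aside, which is what licenses ignoring infinite considerations in ordinary deliberation.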
The specific methodological pluralist approach I favor can be summarized as follows:
When comparing finite outcomes, use the approach I developed in the first half of the dissertation (Additionality, Period Independence, Temporal Neutrality, and expected utility theory). Assume, in general, that whatever is best for shaping the far future is best with respect to infinite considerations. If this assumption seems to be mistaken and you must compare infinite considerations and finite considerations, follow a timid approach.
The natural complaint about this approach is that it is inconsistent, and it is. Hopefully, that means that it is possible, in principle, to do better. But it doesn't mean that we can do better in any practically meaningful sense, and it therefore isn't a good objection to methodological pluralism. A few examples illustrate this.

Temkin (2012, p. 504) points out that Niels Bohr's model of the atom was known to be internally inconsistent, but was the dominant model for more than a decade because it had more predictive and explanatory power than any of the alternatives. There's a similar story for Cantorian set theory. Cantor's approach dominated the mathematical study of set theory at the end of the 19th century, and it continued to do so after Russell, Zermelo, and Cantor himself had shown, between 1899 and 1903, that the theory was inconsistent. Zermelo developed the first axiomatic approach to set theory in 1908, but mathematicians did not stop using set theory in the interim. It seems clear that, in the absence of an alternative that was good enough along other dimensions, such as predictive and explanatory power, it was eminently reasonable for these physicists and mathematicians to continue to use the inconsistent theories that they had. The reason this didn't lead to disaster is that people using inconsistent theories can be careful to avoid reasoning their way into nonsense, even if an unsophisticated automated reasoning machine could not.

For a third, well-worn example, we can consider the fact that quantum mechanics and general relativity are inconsistent with each other, but physicists routinely use both in the contexts where they are confident that the theories work. For a final example, imagine that we discover a difficult-to-resolve inconsistency in the American legal code; I'm sure there is one. We would not conclude, on this basis, that any other consistent and basically decent legal code was superior to the American legal code.
Instead, we would (rightly) continue to rely on the American legal code as it stood until the legal code was altered in a way that removed the inconsistency without too great a cost. The lesson here is that while inconsistency may be the final word when it comes to truth, it is not the final word when it comes to practice.
I'll close by summarizing the course of investigation in this dissertation. Because we have now covered a great deal of material and developed a language for talking about the problems at hand, this summary differs somewhat from the one I presented in the introduction.