Pretty good. Learned quite a bunch. My credence in Utilitarianism has increased quite a lot. It seems like Consequentialism could be justified on purely logical grounds, to do with the supervenience of the normative or ethical on the descriptive, together with an extension of that supervenience principle to something like continuity or impartiality or proportionality.
I tend to have a richer, thicker understanding of wellbeing or the good or the good life than self-described Utilitarians tend to have, one focussed on flourishing rather than on welfare or the absence of suffering.
It seems to me that the gain in wellbeing or flourishing from moving from the typical human life to the flourishing life could be far greater than the gain from moving from a life of suffering to the typical human life.
But the authors' argument in the last chapter for weighting suffering more heavily than happiness seems somewhat plausible: it is implausible that one year of absolute suffering is worth undergoing in exchange for one year of absolute happiness, yet this becomes less and less implausible the more years of happiness are on offer, which suggests an asymmetric trade-off, a difference in weight, between the two. Still, I suspect that intuition might just be due to ignorance or incomplete knowledge of what absolute happiness or flourishing is truly like.
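The asymmetry can be put in a toy formula (my own gloss, not the authors'): give suffering a multiplicative weight $w > 1$ relative to happiness, so that the value of $n$ years of absolute happiness bought with one year of absolute suffering is

```latex
V = n \cdot h - w \cdot s, \qquad w > 1,
```

where $h$ and $s$ are the (equal) per-year intensities of absolute happiness and absolute suffering. With $h = s$, the trade is worth it only when $n > w$. That is why adding more and more years of happiness makes the deal less and less implausible: varying $n$ until intuition flips is, in effect, a way of estimating $w$.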
I also find abhorrent (or, less emotionally, implausible) their view that the experience machine objection doesn't work because of certain replies against intuition (1: I should reread that section; 2: the authors make many, many critiques of intuition throughout, which I empathise with on one hand, yet on the other it seems queer that they are shooting at the very ground they stand on). They also claim that the average view in population ethics has implications just as counter-intuitive as those of the total view (which has the repugnant conclusion!!). It seems to me they didn't do justice to the average view, and that no matter what critiques you give of intuition or of, say, status quo bias, the fundamental idea behind the experience machine objection (that perceiving and living in Reality makes for a better life than living in the experience machine) still stands. So I don't find hedonism as plausible as the authors seem to. I tend towards desire-satisfaction and perfectionist theories of the good and of Utilitarianism more than hedonism. But I am finding Hewitt Rawlette's and perhaps Spinoza's hedonisms quite attractive, though not if the experience machine or Plato's cave demolishes them.
I also found their replies to the demandingness objection, and their distinction between right/wrong and praise/blame (or exoteric right/wrong), to be quite good and sound. Whether right/wrong is metaphysically grounded in maximising wellbeing is obviously a different matter from whether something should be praised or blamed, or from whether it should be presented to the public as right or wrong. They also made a point twice (reread, Mark) about how the contingent, non-essential circumstances of present-day humans on Earth shouldn't bear too heavily on the necessary matters of ethics and morality. The first time it's worded is especially convincing. Reread that part.
Seems to me that any plausible moral theory ought to give the utilitarian conclusion that, in a scenario with an exhaustive and exclusive choice between killing everyone or X, we should always go for X. Some theories have ridiculously low bars for X. Suppose, say, X were a violation of someone's bodily rights: cutting off a strand of one person's hair, or his arm. Any theory which says X is wrong even when the only alternative is the death of everyone encounters a reductio, I think. One can raise that X to, say, the murder or rape or torture of some proper subset of everyone, and it seems there would still be a reductio. So it seems that aggregating people and their wellbeing, and thus a certain flavour of Utilitarianism, is required for any moral theory to avoid that reductio and thereby have some share in plausibility. Aggregation and impartiality are two great hallmarks of Utilitarianism, it seems.
I still think a form of "general contractualism" or Utilitarianism might be the true moral theory. It holds that an action is wrong insofar as it minimises wellbeing, or insofar as it is prohibited by the set of principles which minimise the WEIGHT of involved people's REASONS for it (likewise, right...maximises...required/permitted). A reason would be a vector (recall the maths) where each element measures a different morally relevant factor, such as life, welfare, pleasure, justice, etc. There would then be a complex way of weighing reasons against one another, to account for how, say, increasing the pleasures of billions at the cost of the death or torture of one is not moral, torturing billions to save the life of one is also not moral, and yet torturing one to save the lives of billions is moral; complex maths is required to make sense of this Calculus of Reasons. Not sure yet how well this meshes with my Spinozistic Utilitarianism. It seems to mesh when we take a rich conception of wellbeing.
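One way the weighing might go can be sketched in code. This is a toy construction of my own, not anything from the book: factors are grouped into lexically ordered tiers; gains in a lower tier can never outweigh a loss in a higher tier, but a higher-tier gain can outweigh a lower-tier loss up to a hypothetical exchange-rate cap K. The tier names and the value of K are pure assumptions for illustration.

```python
# Toy "Calculus of Reasons" sketch (my construction, hypothetical throughout).
# A reason vector is a dict mapping a morally relevant factor to a
# non-negative magnitude (e.g. number of people affected at unit intensity).

TIERS = ["life", "bodily_integrity", "pleasure"]  # index 0 = weightiest tier

K = 10**6  # hypothetical cap: one unit of a tier outweighs at most K units of the next tier down


def highest_tier(vec):
    """Index of the weightiest factor with nonzero magnitude, or None."""
    for i, factor in enumerate(TIERS):
        if vec.get(factor, 0) > 0:
            return i
    return None


def outweighs(gains, losses):
    """True if the gains vector outweighs the losses vector."""
    g, l = highest_tier(gains), highest_tier(losses)
    if l is None:
        return True   # nothing is lost
    if g is None or g > l:
        return False  # lower-tier gains never outweigh a higher-tier loss
    if g == l:
        return gains[TIERS[g]] > losses[TIERS[l]]  # same tier: plain aggregation
    # strictly higher-tier gain: trade-off allowed, but capped by K
    return losses[TIERS[l]] < K * gains[TIERS[g]]


# The three cases from the text:
print(outweighs({"pleasure": 10**9}, {"bodily_integrity": 1}))  # False: pleasing billions does not justify torturing one
print(outweighs({"life": 1}, {"bodily_integrity": 10**9}))      # False: torturing billions does not justify saving one life
print(outweighs({"life": 10**9}, {"bodily_integrity": 1}))      # True: torturing one to save billions is permitted
```

The cap K is doing the "complex maths" here in the crudest possible way; a serious version would need intensity-weighted magnitudes and probably non-linear aggregation within tiers, but the sketch at least reproduces the asymmetric pattern of verdicts described above.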