davidpearce wrote: Thus a negative utilitarian can support creating a posthuman civilisation animated by gradients of intelligent bliss where all your dreams come true.
The problem is that such a world would be vastly larger than ours and would still contain some pleasure-pain tradeoff. There is always some probability that suffering occurs, whether as an error or because it is locally reintroduced in a simulation. If one actually had to choose between destroying the world painfully now and creating a posthuman civilization that colonizes the Virgo Supercluster, destroying the world would cause less suffering. No matter how good one thinks one's bioengineering and civilization-planning skills are, there is almost certainly some error margin, and the sheer quantity of sentience in the latter case means that margin translates into more total suffering.
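To make that concrete with a back-of-the-envelope calculation (all of the magnitudes below are my own illustrative assumptions, not figures from Pearce or anyone else):

```python
# Toy expected-suffering comparison with made-up numbers: even a tiny
# per-mind failure rate dominates once the population is astronomically large.

destruction_suffering = 8e9    # assumed: ~8 billion painful deaths now
supercluster_minds = 1e30      # assumed posthuman population size
error_rate = 1e-12             # assumed per-mind chance of serious suffering

expected_posthuman_suffering = supercluster_minds * error_rate
print(expected_posthuman_suffering)                          # 1e+18
print(expected_posthuman_suffering > destruction_suffering)  # True
```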
Since we're doing thought experiments, suppose a magic genie offers me superexponential growth in my bliss at the price of exponential growth in your agony and despair. If I'm a classical utilitarian, then I am ethically bound to accept the genie's offer.
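A toy welfare ledger shows why (the functional forms here are assumptions of mine, chosen only to contrast the two growth rates):

```python
# Sketch of the classical-utilitarian sum for the genie's offer.

def bliss(t):
    return 2 ** (2 ** t)   # my bliss: superexponential in t

def agony(t):
    return 2 ** t          # your agony: exponential in t

for t in range(1, 6):
    print(t, bliss(t) - agony(t))
# The aggregate total is positive and exploding, so a utilitarian who
# simply sums welfare across persons is committed to accepting.
```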
Note that this relies on the same personal-identity references that should be excluded if one really believes personal identity to be ethically irrelevant (not that anyone actually makes all their practical choices as if they believed that). Note also that it is hard for humans to intuit exponential or superexponential growth in the intensity of experiences.
If we reframe the thought experiment, it could look like this: I can choose to trigger a process with some finite parameter n, which does the following:
It creates 2^n seconds of pain for me, then it creates 2^(n^n) seconds of pleasure of the same intensity for me.
Do I want to take this offer? Yes, with n as high as possible. Do I want you to take this offer away from me against my will? No, that would indeed be callous.
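A quick check of just how lopsided the trade is (a minimal sketch of the arithmetic, reading the pleasure term as 2^(n^n) as above):

```python
# Python's arbitrary-precision integers make the asymmetry visible
# even for tiny n.

for n in range(1, 5):
    pain = 2 ** n              # seconds of pain
    pleasure = 2 ** (n ** n)   # seconds of equally intense pleasure
    print(n, pain, pleasure // pain)  # n=3 already gives ~16.8 million : 1
```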