Utilitronium shockwave


Poll: If you had to answer the genie now, would you ask for a utilitronium shockwave?

Yes: 22 votes (71%)
No: 3 votes (10%)
Not sure: 6 votes (19%)

Total votes: 31


Post by Brian Tomasik » Sun Dec 11, 2011 6:18 am

From "The Singularity and Machine Ethics":
Suppose an unstoppably powerful genie appears to you and announces that it will return in fifty years. Upon its return, you will be required to supply it with a set of consistent moral principles which it will then enforce with great precision throughout the universe. For example, if you supply the genie with hedonistic utilitarianism, it will maximize pleasure by harvesting all available resources and using them to tile the universe with identical copies of the smallest possible mind, each copy of which will experience an endless loop of the most pleasurable experience possible.
(We've discussed this several times on Felicifia.)

How many people support vs. oppose a utilitronium shockwave? On a rational level? On a visceral level? You might say you would delay it until we learn more, in case there's something better that we haven't yet discovered. But what would you do if you had to give the genie your answer now?

I strongly support a utilitronium shockwave on a visceral level. This has been true ever since I heard about the idea 6 years ago.


Post by Gedusa » Sun Dec 11, 2011 1:15 pm

That's another paper for my "to-read" list...

I'm currently the sole "NO" here (Edit: Still the sole NO. The majority is against me. Hmmm). Some people will know my reasoning, but for those of you who don't:

I'm not a realist about metaethics. If you put a gun to my head and forced me to narrow that down, I'd say I was an emotivist, with some caveats. I have lots of random intuitions about what I should be doing, and I try to bring those to some sort of reflective equilibrium (I think that's one of the areas in which I depart from normal emotivists). My intuitions do not collapse down into: "prevent suffering, cause happiness". They probably don't even collapse into: "maximize the satisfaction of preferences". They don't even collapse down into perfect altruism - no matter how I try to force them.
Hence, I remain largely (>50%) selfish. I don't care as much about animals as I do humans. I care about people close to me more than I care about strangers. Of course, I still care, and I can do math - so I still want to stop wild animal suffering etc. And of course I walk a fine line between what's in accordance with my ethics and what's just needed for me to be psychologically healthy...

But anyway, that's a long-winded way of saying: "I'm not a utilitarian in the pure sense, therefore I don't endorse utilitronium shockwaves, as there are configurations of the universe I would regard as having higher value than that."

So no. I don't endorse it on a visceral or rational level. If presented with this genie, I would work really hard (or pay other people to work really hard) on a way of getting my intuitions into a coherent state - e.g. by getting an Oracle AI to take my brain state and cohere it or something. If the genie said I had ten minutes to answer - I'd probably reel off something about the current preferences of all beings which have preferences, with weighting toward the strength of those preferences and completely banning torture and some other stuff - though that would probably go horribly wrong.

Alan: I seem to recall you're an emotivist as well? I struggle to understand how humans, acting only on their own intuitions, can end up endorsing utilitronium shockwaves :) Can you give me a run-down of the factors you think led your intuitions in this direction? I kinda get how realists about metaethics might like it, but - bleh!
World domination is such an ugly phrase. I prefer to call it world optimization


Post by Hedonic Treader » Sun Dec 11, 2011 2:11 pm

I voted yes, given that the answer is supposed to be immediate: a utilitronium shockwave is relatively well-defined, and it is a significantly better outcome than any we can realistically expect from the actual future(s) that follow after you read this.

However, if given the option to ask for details and elaborate on them, I would replace the focus on the simplest minds possible with a focus on sentient complexity, ideally allowing for sapience and the experience of self-determination, while maintaining the "spreading exponentially", "free from (involuntary) suffering" and "high hedonistic quality" aspects.

If pressed for priorities in a trade-off, increasing success probabilities in preventing large-scale suffering comes first, then allowing for pleasure, then complexity of experiential modes, then sapience and self-determination.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Post by Brian Tomasik » Sun Dec 11, 2011 10:27 pm

Thanks for the votes. :)
Hedonic Treader wrote: If pressed for priorities in a trade-off, increasing success probabilities in preventing large-scale suffering comes first, then allowing for pleasure, then complexity of experiential modes, then sapience and self-determination.
Cool. I'm with you on the first two but indifferent on the second two. I don't care about complexity or sapience, except to the extent that they're instrumentally useful for reducing suffering and creating happiness.
Gedusa wrote: I'm not a realist about metaethics. If you put a gun to my head and forced me to narrow that down, I'd say I was an emotivist, with some caveats. I have lots of random intuitions about what I should be doing
Yep, same here. :) However, it usually happens that those feelings align in the direction of "preventing lots of suffering with some much-lower priority for creating happiness."
Gedusa wrote: They don't even collapse down into perfect altruism - no matter how I try to force them.
You're not alone. I'm far from a perfect altruist due to ordinary human weakness. However, if I could create an AI-Alan to replace myself, I probably would make it a perfect altruist.
Gedusa wrote: Alan: I seem to recall you're an emotivist as well? I struggle to understand how humans, acting only on their own intuitions, can end up endorsing utilitronium shockwaves :) Can you give me a run-down of what you think the factors that lead your intuitions in this direction were? I kinda get how realists about metaethics might like it, but - bleh!
:)

I think part of the divergence comes from how we imagine utilitronium. Think about some of your favorite experiences -- say, seeing your best friend after being away for two years. What makes that moment feel so good? The subjective experience of goodness is created by certain brain circuits firing in a particular fashion. What's important is not the actual presence of your friend but the way you feel about it. For example, I could in principle rewire your brain so that you would find your friend repulsive and painful.

Utilitronium could be seen as a limit of the process of making simulated happiness more and more efficient. For instance, you'd probably like the thought of friends in a virtual-reality environment meeting and enjoying each other's company. But the simulation of their environment isn't really so important, so why not just simulate their bodies and brains? Well, their foot bones and lung movements aren't so important either, so maybe just simulate their minds. But their audio processing and smell perception aren't so important, so why not just simulate the portions of their brains that create the feeling of enjoyment? And then multiply those circuits across the universe. Somewhere along that chain, maybe you decline to go further?


Post by Gedusa » Mon Dec 12, 2011 12:54 am

@ Alan:
Hmm. Creating an AI to replace me with whatever values I wanted is an interesting thought experiment about what I actually value. I'll have to think about it some more. I think it would still be selfish though. I think we differ in this respect about which values we reject: you would reject the selfish ones based on your collapsed-down altruistic values. I wouldn't.

I think I see more clearly where you're coming from now but I don't fully agree (not that that matters to two emotivists!)

I value the physical presence of the friend in the sense that I value them being there virtually and in the physical world equally, but I don't value things as much (if at all) if you just stimulated the brain parts involved in feeling happy at the friend's presence. I think I can collapse this down to: I don't like the idea of wireheading/being in the experience machine. So my intuitions are that if my actions aren't affecting anything in reality then the actions are less worthwhile (though still worthwhile, I like dreams). Hmm. Saying that it's worthwhile at all leaves me open to simulating my mind/environment if there is a certain differential between the utility of the real world and the simulated one.

But yeah, there is a point along that chain where I'd decide to stop. Probably the point between simulating their whole minds and simulating just the bits which create pleasure. That screams "fake" and "not-human", which I instantly assign lower utility to. But surprisingly, progression along the chain also gives the whole thing steadily less value. Also: universes that simple aren't in accordance with my values. There has to be a certain amount of complexity of the type humans/I like for the universe to count as Fun and Worthwhile.

And I do have a vulnerability here. I don't assign utilitronium zero value. And there is less resource cost to utilitronium than to more complex things which I value more. So it's plausible that I could desire a utilitronium universe as there would be more total value due to much lower resource costs. It would depend on the relative resource costs and what value I eventually assigned various outcomes.


Post by DanielLC » Mon Dec 12, 2011 1:37 am

Exactly how much do you have to specify? I suspect that minds have to be different to count as separate minds, but I don't know how different. I also don't fully understand what happiness is. If all I have to say is "hedonistic utilitarianism", I'd go for it, but otherwise I'm not so sure.

On a visceral level, I can accept it if I just focus on how happy it will be. If I just imagine chunks of utilitronium it doesn't seem so nice, but that's not really what's going on.
Consequentialism: The belief that doing the right thing makes the world a better place.


Post by Brian Tomasik » Mon Dec 12, 2011 4:46 am

Gedusa wrote: I think I see more clearly where you're coming from now but I don't fully agree (not that that matters to two emotivists!)
:)
Gedusa wrote: I value the physical presence of the friend in the sense that I value them being there virtually and in the physical world equally, but I don't value things as much (if at all) if you just stimulated the brain parts involved in feeling happy at the friend's presence.
Interesting. So you still might favor a bare-bones simulation of their surrounding environment in order to make the experience "real" but to reduce the computing load as much as possible?
Gedusa wrote: Also: Universes that simple aren't in accordance with my values. There has to be a certain amount of complexity of the type humans/I like for the universe to count as Fun and Worthwhile.
I see -- cool. Seems to be a relatively common intuition.
DanielLC wrote: Exactly how much do you have to specify? I suspect that minds have to be different to count as separate minds
I don't share the sentiment, but again, it's one I've heard a few times before.


Post by Gedusa » Mon Dec 12, 2011 3:23 pm

Brian Tomasik wrote: Interesting. So you still might favor a bare-bones simulation of their surrounding environment in order to make the experience "real" but to reduce the computing load as much as possible?
Yes.
Brian Tomasik wrote: Seems to be a relatively common intuition.
Yeah I think it's fairly human-typical... Possibly that's one of the reasons I'm more willing to endorse the human CEV (with tonnes of caveats) than you are.

And I share Daniel's sentiment on different minds. I'm not even sure that (to my values) the whole of the universe being tiled with utilitronium wouldn't be the same as just one speck of dust being utilitronium.


Post by Brian Tomasik » Tue Dec 13, 2011 9:13 am

Gedusa wrote: Yeah I think it's fairly human-typical... Possibly that's one of the reasons I'm more willing to endorse the human CEV (with tonnes of caveats) than you are.
Exactly. Oddballs like me have more to worry about.


Post by Arepo » Wed Dec 21, 2011 7:51 pm

Lean towards 'yes' (although viscerally a no), but I'm sticking with 'don't know' for now because I don't think the question is - or easily could be - well defined. I don't know about the motivations or abilities of the genie (a creature notorious in popular mythology for twisting wishes to something the wisher neither expected nor wanted), or even quite what I would be asking him for. It would be something like 'maximise happiness', but I'd rather like to have a good definition of the word 'happiness' before I made such a commitment.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.


Post by DanielLC » Wed Dec 21, 2011 9:10 pm



Post by Arepo » Wed Dec 21, 2011 11:26 pm

If the genie is defined so blandly as to make the universe perfect, then I guess I say aye, but that seems like an uninteresting question, basically just equivalent to confirming that I'm broadly utilitarian.

Even then, I don't believe in norms, so if I'm right, the input 'what I should wish for' would probably generate a system error.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.


Post by rehoot » Mon Jan 09, 2012 10:58 pm

I voted *no* for many reasons. I personally think that overpopulation is the greatest problem facing society, and I see overpopulation as a negative influence on me personally and on the well-being of other species. I have no reason to believe that constructing a specialized "mind" that likes living in a state of gross overpopulation and lack of diversity is a worthy cause--it necessarily calls for the extinction of ALL current species in the entire universe. For the "created minds" to continue to be "happy," they would need to be created to be ignorant of the past beauty and diversity of the planet or they must be programmed like robots to lack any semblance of free will or critical thought--perhaps to the extent that they live in a fantasy world.

Who would fix the toilets or sustain the infrastructure to support the artificial fantasies of the pleasure zombies? Perhaps a class of slaves? The one wish wouldn't afford self-replicating robot infrastructure to care for everybody...


Post by DanielLC » Tue Jan 10, 2012 12:53 am

rehoot wrote: perhaps to the extent that they live in a fantasy world.
What do you mean by "fantasy world"?


Post by Brian Tomasik » Tue Jan 10, 2012 4:29 am

I don't think utilitronium minds would have much need for toilets. :) But it's true they might require some robots to maintain the infrastructure and do repairs. These robots would be non-sentient, so they would no more be "slaves" than is a Roomba.

The minds could have knowledge of the past if the designers wanted it that way. It's just that the minds would prefer to have the universe be filled with what seems to us dull utilitronium; they could feel good about the loss of diversity. Of course, in practice, the additional computation to support detailed knowledge about the world might be unnecessary overhead -- probably better to have the minds just feel good rather than feel good about anything in particular.


Post by Pablo Stafforini » Wed Feb 29, 2012 5:52 pm

I answered 'Yes'. My only caveat would be this. Such a shockwave would be immediate and (presumably) irreversible. Yet if we are not completely certain that hedonistic utilitarianism (in either its classical or negative varieties) is the correct moral theory, we might want to allow ourselves some time for thinking carefully about this question, rather than rushing to a decision which cannot be undone. However, without having devoted much time to the issue, I'm inclined to believe that in light of present risks of extinction, it is preferable to opt for the shockwave immediately than to run the risk of annihilation by delaying the decision. (Interestingly, one reason for working on existential risk reduction is that we need more time to think whether our long-term survival is or isn't morally desirable.)


Post by Brian Tomasik » Thu Mar 01, 2012 8:19 am

Pablo Stafforini wrote: Yet if we are not completely certain that hedonistic utilitarianism (in either its classical or negative varieties) is the correct moral theory, we might want to allow ourselves some time for thinking carefully about this question
Yeah, but being an emotivist, I don't think there is a "correct" moral theory that humanity will necessarily move toward as it becomes smarter. It's even plausible to me that humanity will eventually move away from the things we now hold dear. LadyMorgana and I discussed this topic on another thread.


Post by DanielLC » Thu Mar 01, 2012 5:31 pm

Brian Tomasik wrote: It's even plausible to me that humanity will eventually move away from the things we now hold dear.
Isn't that true of any direction at all? Or do you mean that our future values will be more different from our current values than our current world is?


Post by Pablo Stafforini » Thu Mar 01, 2012 7:52 pm

Alan Dawrst wrote: Yeah, but being an emotivist, I don't think there is a "correct" moral theory that humanity will necessarily move toward as it becomes smarter. It's even plausible to me that humanity will eventually move away from the things we now hold dear. LadyMorgana and I discussed this topic on another thread.
I thought you were going to say that. :-) As a matter of fact, my metaethical views have changed in the past, moving away from moral realism and closer to moral nihilism. This change had the effect of eroding my altruistic concern for a while, but the effect appears to have been short-lived, and currently I'm as disposed to reduce suffering and promote happiness as I have ever been (even if I no longer believe that I am under a moral requirement to act in these ways).

Please note however that, as an emotivist, you do think there is a "correct" moral theory, at least under some interpretations of that term. You believe, for instance, that it is wrong to torture animals for fun (in ways that do not produce greater benefits), even if others happen to believe otherwise. The correct moral theory, on your view, is that which expresses your emotional dispositions (suitably weighted, etc.). Moreover, I believe you have written in the past that you were uncertain about various moral questions, such as the weight that you should attach to the relief of suffering versus the promotion of happiness, and the degree to which you care about brain processes that bear but a very distant resemblance to the processes that occur in prototypical instances of suffering. To the degree that further reflection might allow you to clarify your views on these and other matters, you too might want to allow yourself some time before you decide to trigger a utilitronium shockwave.


Post by Brian Tomasik » Sat Mar 03, 2012 8:58 am

DanielLC wrote: Or do you mean that our future values will be more different from our current values than our current world is?
I wasn't very clear. By "we hold dear," I meant "we utilitarians."
Pablo Stafforini wrote: but the effect appears to have been short-lived, and currently I'm as disposed to reduce suffering and promote happiness as I have ever been
Awesome!
Pablo Stafforini wrote: Please note however that, as an emotivist, you do think there is a "correct" moral theory, at least under some interpretations of that term. You believe, for instance, that it is wrong to torture animals for fun (in ways that do not produce greater benefits), even if others happen to believe otherwise.
Sure. As you say, it depends how you define "correct." I do still want things to be the way I want things to be!
Pablo Stafforini wrote: To the degree that further reflection might allow you to clarify your views on these and other matters, you too might want to allow yourself some time before you decide to trigger a utilitronium shockwave.
Perhaps, although I'm fairly certain about utilitronium, and these points of moral uncertainty don't have much effect on that question. (BTW, great job remembering my past statements about these things. :))
