Utilitronium shockwave

Will we transcend our human bodies? Extend our lives? Create superhuman artificial intelligence? Mitigate existential risks? etc.

If you had to answer the genie now, would you ask for a utilitronium shockwave?

Yes — 22 votes (71%)
No — 3 votes (10%)
Not sure — 6 votes (19%)

Total votes: 31

Jonatas
Posts: 4
Joined: Wed Jul 21, 2010 9:35 pm

Re: Utilitronium shockwave

Post by Jonatas » Sat Mar 03, 2012 11:55 am

I voted yes, though with certain differences.

I'm not sure the best strategy would be a shockwave, and the name "utilitronium" suggests a substance, which is likely misleading, though the concept is sound. I see it as more likely that we will jump from planet to planet with spaceships and expand in local "islands" of limited size, taking energy and materials from the planets and similar bodies, and possibly from nearby stars.

The "utilitronium" will likely consist of rather complex societies, possibly including redundant instances of unlimited intelligence, minds experiencing exquisite combinations of intense good feelings (value production), and insentient working, support and defense machinery.

The minds may be fairly complex rather than the smallest possible.

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Utilitronium shockwave

Post by DanielLC » Sat Mar 03, 2012 8:36 pm

Jonatas wrote:taking energy and materials from the planets and similar bodies and possibly from nearby stars.
I'd expect you'd largely take materials from planets and energy from stars.
Jonatas wrote:The minds may be fairly complex rather than the smallest possible.
No matter how complex they are, they'll still just look like ordinary computronium to us.
Consequentialism: The belief that doing the right thing makes the world a better place.

Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Post by Hedonic Treader » Thu Jun 14, 2012 5:59 pm

Jonatas wrote:I'm not sure the best strategy would be a shockwave, and the name utilitronium resembles a substance, which is likely misleading, though the concept is sound. I see it as more likely that we will rather jump from planet to planet with spaceships and expand in local "islands" limited in size, taking energy and materials from the planets and similar bodies and possibly from nearby stars.
Well, it would be an interstellar colonization wave that spreads as fast as it practically can in practically all directions in which there are reachable resources. "Shockwave" may be misleading, because it sounds destructive and uncomplex, but in a cosmological context, it would be a very fast wave-like transition of how star systems and other cosmic objects are internally organized.
Jonatas wrote:The "utilitronium" will likely consist of rather complex societies, possibly including redundant instances of unlimited intelligence, minds experiencing exquisite combinations of intense good feelings (value production), and insentient working, support and defense machinery.
If you don't have outside competition and the replication algorithm is non-mutating, you don't need defense machinery. Nor would you have a strong need for complex societies, unless you need to convince the originators, who might insist on such complexity (see below).

If there is outside competition and/or the replication algorithm isn't perfectly non-mutating, you'd better prepare for a new era of competitive darwinism and find ways to integrate hedonistic utilitarian values into it. In this thought experiment, it sounds like the genie has this figured out, so it could just efficiently create happy minds that have no other purpose than being happy.
Jonatas wrote:The minds may be fairly complex rather than the smallest possible.
Let's be more precise here. I agree that the descriptor "smallest" is misleading and does not logically flow from assuming hedonistic utilitarianism. There are at least three reasons for that:

1) The smallest mind possible might be so alien to us that what we interpret as its pleasure might not be similar enough to our pleasure to actually count in a satisfying way. Unless we have a thorough formalism as to what qualifies as pleasure, and thoroughly trust that formalism, we should assume some level of epistemic uncertainty regarding minds that are significantly unlike our own.

2) "Small" of course can't literally mean physically small; it must mean "most efficient at creating pleasure per unit of resource input". That has some relation to physical smallness, but it's not the same descriptor.

3) When people hear "small minds", they associate it with low social status. This is the same reason why people say things like, "I'd rather be a miserable Socrates than a happy pig", even though it's not clear they value happiness much less than intellectual insight when they make actual choices about how to spend their time. It's also possible that "small minds" sound vulnerable to the outside world, or impoverished in terms of experience, even though the thought experiment implies they would have the best experiences possible.

If you actually want to convince people to build a system, you'd probably go for human-like, complex, noble, free, individualistic but also social, self-determined etc. etc. The slightest slip in association, and people will accuse you of intentionally building a dystopia. If utilitarians in the future ever actually find themselves in a situation where they can launch a non-mutating system that generates certain patterns on a large scale, chances are that they will have to convince a majority of non-utilitarians that these patterns are the best use of resources. Those other decision-makers will value different things and often be selfish, so utilitarians would do well to formalize ways to integrate these values into practical hedonism. I'd expect this to be a lot harder than it sounds.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Post by peterhurford » Sat Jul 14, 2012 6:08 am

I don't place much value on extremely heightened pleasure (like "wireheading") as the thing the utilitarian intends to maximize, so my kind of utilitronium would be simulations of utopias where people live fulfilling lives (without being wireheaded). (Though I also share the intuition that I wouldn't want to be placed in an experience machine, so I'm not sure how that compares.)

However, and even more relevantly, I personally care nothing for bringing entities into existence for the sake of those entities, so I wouldn't want to bring utilitronium into existence, regardless of how happy it is. The fact that it doesn't exist means that it isn't "harmed" by non-existence. (This doesn't mean I don't care about the future, though: since people are going to exist inevitably, I want them to come into existence with happy lives.)

Some of this intuition comes from seeing entities not as "utility receptacles" brought into existence as a means of maximizing total utility, but rather valuing each entity in itself, and then approaching calculations for how to help existing (and inevitably to-be-existing) entities impartially, which yields my brand of utilitarianism.

Thus, I join the "no"s.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.

Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Post by Brian Tomasik » Sat Jul 14, 2012 10:48 am

Very interesting, Peter. We disagree on the experience machine, and we disagree on whether organisms are just receptacles for utility.

As far as wireheading itself, the experience needn't be some crude form of pleasure. It could be rich, stimulating lives like the one you're living now, or better. But I take it this wouldn't much change your opinion.

Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Post by Hedonic Treader » Sat Jul 14, 2012 12:24 pm

Also please consider the logical implications of your view. It sounds noble to say sentient beings or persons aren't just "receptacles" but that you want to help the individuals. After all, who likes objectification of people? It sounds like callous immorality.

But the prior-existence condition is both wasteful and runs into identity problems.

It is wasteful because it would waste desirable experiences. If someone prevented me from gaining a large amount of free pleasure, all else equal, I would consider this a serious offense. Similarly, if someone could reliably create a hedonium shockwave, all else equal, and arbitrarily decides not to, I would consider this the worst ethical mistake ever made.

Now you could argue that these two examples are not equivalent because I am pre-existing while the minds in the hedonium shockwave are not. But I consider my future selves to not be identical to me either. They are (at least slightly) different, time-local entities. If you were to kill me, then the prior-existence condition for my future selves would not be fulfilled - my future selves don't exist inevitably in a world in which you can kill me. That doesn't mean you can't cause harm by killing me, even if it is painless - it would deprive my future selves, who are not identical to me, of pleasure.

David Benatar also argues that one cannot harm a person by not creating them, but one can harm a person greatly by creating them in a state of suffering. Since all human lives contain at least some suffering, he concludes, any new person is harmed by being brought into existence, even if that person considers his or her own life to be very much worth living. The resulting strong version of antinatalism, while logically consistent and taken seriously by some, is usually rejected as an absurdity by most commenters among the general public, many of whom do not feel they were harmed by having been brought into existence and are quite glad to be alive.

peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Post by peterhurford » Fri Jul 20, 2012 9:13 am

I changed my vote on this question, partly reversing my opinion from six days ago. Two weeks ago, I was a preference person-affecting utilitarian. Then, about a week ago, I switched to a hedonistic-ish person-affecting utilitarian. Now, I have switched away from the person-affecting view to total utilitarianism.

Given that total utilitarians would endorse utilitronium shockwaves, I have modified to strongly support utilitronium shockwaves (as actually the utilitarian-best possible thing we could do). I want to clarify here what I changed and why.

~

Preference Satisfaction Just Confuses Me

While not relevant to my change in vote, I did want to highlight my turn away from preference satisfaction utilitarianism. It can be explained in four confusions that all make much more sense with a hedonistic(ish) happiness(ish)-maximizing analysis.

1.) How do I make sense of preferences that I have, but don't want (don't meta-prefer) other people to help me satisfy? I want to quench my thirst all by myself, thank you. For some preferences, it would make me happier if (and I would meta-prefer that) you helped me, but for others it would make me sadder (and I would meta-prefer you refrain).

2.) How do I make sense of the idea of creating a preference for the sake of fulfilling it? Would it be a moral benefit to make you thirsty just to offer you a drink? Doing so would satisfy your preference, but wouldn't make you any happier.

3.) How do I make sense of preference satisfying population ethics? What good would it be, if at all, for someone to be born, who wouldn't be born otherwise? What good would it be, if at all, to launch a utilitronium shockwave? Is this just creating beings to fulfill their preferences?

4.) Is the best life one in which all our preferences are satisfied? Would an ideal lifeform be one with just a single preference that is satisfied? What about a life form with 1000 preferences, all of which are satisfied?

~

Hedonistic-ish :: Haven't Changed on Wireheading

So sign me up for maximizing "happiness", but don't let this happiness be seen as the kind you can get from wireheading. This isn't a failure to imagine that wireheading would be intensely pleasurable, but a recognition that a lot of the things I derive happiness from are not just my personal mental states. Yes, happiness takes place entirely within my mental states, but I think it's the wrong level of abstraction to suggest that I only care about my mental states.

However, I take utilitronium to be basically a pocket simulated utopia that contains all the things that make for a good life according to Fun Theory -- authentic high challenge, opportunities for meaningful impact on genuinely important problems, novelty, autonomy, etc. Some intensely pleasurable and non-addictive designer drugs could improve the mood, but shouldn't be the focus of my utopia.

Sure, you could knock out my boredom/novelty-seeking and just repeat my most favorite experience over and over ad nauseam. I'd definitely enjoy it. It would probably bring me a lot of happiness. But I desperately wouldn't want you to do that to me; I prefer the life I have, with novelty and boredom. Wireheading may be fun, but it's not Fun. ...for me, anyway.

This is why I currently put the "ish" on hedonistic-ish -- I'm not sure to what degree current hedonistic utilitarians would want to avoid wireheading or experience machines, seeing this as not the kind of "happiness" that matters.

~

Simulations Good, Experience Machines Bad (For Me)

Along the same lines, please don't sign me up for an experience machine, even if I'd never find out about it. I have a very strong my-native-world bias for where I live and act. I wouldn't care if my current world were revealed to have been, all this time, a simulation, as long as it's the native world I grew up in. This is the world that has the family and people I care about, and the suffering strangers whom I want to lift up or see lifted up via utilitarianism and other methods.

You could put me in an experience machine, sure. I think such simulations would be "equally real", the people would still matter, these people could genuinely suffer (to the point where creating an experience machine with net suffering would be a utilitarian evil), and there is genuine meaning and opportunities for utopia here. But I want to be with the people of my native universe (even if I wouldn't notice a difference).

When it comes to utilitarianism, I want to be trans-world (I'd eliminate as much suffering as I can to the best of my ability, no matter which experience machine the suffering is located in), and when it comes to my self-interested non-utilitarian projects, I want them to unfold in my native world. (Though if my native world wanted to voluntarily enter a utopian simulation along with me, this would be ideal... Thus, the Matrix wouldn't be as bad, though I wouldn't want it run by evil robots or done involuntarily.)

Other people may not share this meta-preference bias of mine, but I don't think such a bias is irrational or odd. For these other people, sure, experience machine them (or wirehead them) if that's what they'd wish and what would make them happy. But don't do it to me, because it wouldn't make me happy. Remember, this is why I'm hedonistic-ish.

~

Anti-Natalism? :: Of What Good Is A Possible Life?

So this clarifies my position on hedonism, I hope. But what about the big move from person-affecting, utilitronium-hating utilitarianism to utilitronium-loving total utilitarianism? The problem I ran into here was indeed anti-natalism, the idea that we should voluntarily end the population now, because in doing so we could make everyone happier (at the expense of all future generations, who don't matter in a person-affecting way).

Let me explain. Person-affecting utilitarianism is the view that the only utilities that matter are those that affect people, as in those who currently exist or will exist. If it turns out the person never will exist, they aren't harmed in any way by being denied their existence (according to person-affecting views), and thus there's no reason to create them. This means that if we end humanity, the future generations would never exist, and thus they wouldn't matter, and there would be no reason to continue the human race (or any race at all), as long as the current generation is happiest.

Total utilitarianism takes a much different view to these potential people -- even if they never would have existed, it's best to create them still as long as they would live a happy life, because this would add happiness to the total, and thus move toward maximizing happiness.

Likewise, I think this breaks the asymmetry by suggesting that we should create these potential people for their (once existing) sakes -- while they aren't harmed by not existing, they are certainly helped by existing, and I wouldn't want to deny these possible people the benefit of existence (though only for their sake *after* they exist).

Thus, I now give a big yes to utilitronium shockwaves -- bringing this into existence would bring more happiness than any other potential alternative (that I know of). Additionally, it would give untold zillions of entities the opportunity to live in an ideal utopia, a huge boon to each of them (after they exist, of course). So I'd want to do the shockwave, for them.

~

Still Say No to Receptacles :: Happiness For the Sake of People, Not People for the Sake of Happiness

But even though I am now a total utilitarian, I still think that some of my fellow utilitarians have the happiness backwards. I suggest that people don't exist for the sake of happiness, such that we can rejoice at adding happy numbers to our calculations and seeing a bigger total. Instead, we should remember that happiness is good for these people (indeed, by definition), and thus we should rejoice at the people living better (happier) lives.

This harks back to why I became a utilitarian, before I even knew the philosophy existed -- a commonsense desire to help people for their own sakes, followed by the realization that I was operating under triage and had to maximize my efforts, meaning some people would regretfully have to be neglected for greater benefits to others. Indeed, some people might even regretfully have to be harmed for outweighing (but not cancelling) benefits to others. We have commensurability without fungibility.

Thus people aren't good only for their happiness; happiness isn't this abstract good thing separate from people (or nonhuman animals or what not) living happy lives. And I think this makes lots of sense still on a total utilitarian view, where we remember that the utilitronium shockwave is good not for our total utility calculations but firmly good for the utilitronium itself.

peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Post by peterhurford » Fri Jul 20, 2012 9:18 am

Hedonic Treader wrote:But I consider my future selves to not be identical to me either. They are (at least slightly) different, time-local entities. If you were to kill me, then the prior-existence condition for my future selves would not be fulfilled - my future selves don't exist inevitably in a world in which you can kill me. That doesn't mean you can't cause harm by killing me, even if it is painless - it would deprive my future selves, who are not identical to me, of pleasure.
I think that's very interesting. I do consider my future selves identical to me, though I do agree they are slightly different, time-local entities. I just think those time-local entities are also me, due to the continuity involved.

For me, the harm in death (even if painless) is not "it would deprive my future selves, who are not identical to me, of pleasure" but rather that "it would deprive my future selves, who are indeed identical to me, of pleasure". It's not really a big difference.

Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Utilitronium shockwave

Post by Brian Tomasik » Sun Jul 22, 2012 3:55 am

peterhurford wrote: Given that total utilitarians would endorse utilitronium shockwaves, I have modified to strongly support utilitronium shockwaves (as actually the utilitarian-best possible thing we could do).
Hooray. :)
peterhurford wrote: Preference Satisfaction Just Confuses Me
Me too. In addition to what you listed, there's the issue of what counts as a preference. Am I satisfying an apple's preference to fall if I drop it on Isaac's head? If not, we may end up defining preferences in terms of things that satisfy hedonic desires, but in that case, we've just gone back to hedonism anyway.
peterhurford wrote: Sure, you could knock out my boredom/novelty-seeking and just repeat my most favorite experience over and over ad nauseam. I'd definitely enjoy it. It would probably bring me a lot of happiness. But I desperately wouldn't want you to do that to me; I prefer the life I have, with novelty and boredom.
Interesting. I disagree, but hopefully it doesn't matter that much if Fun isn't too much more expensive to simulate than fun. (Hard to say without getting into more detail.)
peterhurford wrote: But I want to be with the people of my native universe (even if I wouldn't notice a difference).
Very interesting. Again, I don't share the sentiment myself.

I do care about actually reducing suffering, rather than being deluded into thinking that I have reduced suffering. But from a selfish perspective, it doesn't matter to me at all in what world I find myself having positive emotions.
peterhurford wrote: But even though I am now a total utilitarian, I still think that some of my fellow utilitarians have the happiness backwards.
I'm probably one of them. I do think organisms are made so that they can hold happiness.

But I also wonder how much of our disagreement is just a matter of sentiments attached to different linguistic ways of framing the issue. Certainly I want to make organisms happier for their sakes.
peterhurford wrote: I do consider my future selves identical to me, though I do agree they are slightly different, time-local entities. I just think those time-local entities are also me, due to the continuity involved.
But surely the scale is continuous rather than binary. There's no hard distinction between things that are "you" and "not you." We may fix an arbitrary cutoff point for ease of exposition, but it's not a fundamental fact of the world.

Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Post by Hedonic Treader » Mon Jul 23, 2012 9:43 am

Thanks for your reflections, Peter. It's interesting (and unfortunately rare) to see people change their minds and be explicit about the reasons.
peterhurford wrote: But I want to be with the people of my native universe (even if I wouldn't notice a difference).
Isn't this logically inconsistent with the rejection of preference utilitarianism? After all, if you wouldn't notice a difference (and no others were worse off), then by definition it can't be worse unless you count meeting such preferences as intrinsically valuable.

I actually don't mind satisfying such preferences for strategic reasons, or if they're cost-free. But a real-world bias can be very costly. Consider the difference in resource costs between flying real planes for sport and playing computer games in which players fly planes for sport. Maybe the former should be possible in a world of real humans who won't voluntarily make concessions, and who only agree to be innovative in a market-based society that allows them such waste. But it's easy to see how wasteful this distinction can be, and even now, it's clearly not seen as a human right to fly real planes if you can't afford it.

From a utilitarian perspective, then, I would say choosing the real plane over the simulated one is a mistake. Of course, if you are deceived about the state of your perception and the real world, this may add new threats, such as counterfeit utility (you think the world is okay, but you're in an experience machine and in reality others are suffering) or questionable sustainability or power dynamics. But if we could all migrate into an upload world that simulates - and gradually enhances - our current environment without deceiving us about the real world, while using the physical resources to allow more minds to experience the same advantage, I would see this as clearly and significantly preferable from a utilitarian view.

I think we can converge on a consensus based on two points: 1) You need to convince real people of any plan that is supposed to be sustainable, and if real people insist on maintaining this bias, we need to make the concession strategically. 2) It's better to have some wasteful utopia than a big future filled with suffering, or a "utopian" system that crashes shortly after it starts.
peterhurford wrote: For me, the harm in death (even if painless) is not "it would deprive my future selves, who are not identical to me, of pleasure" but rather that "it would deprive my future selves, who are indeed identical to me, of pleasure".
Fair enough. It depends entirely on how we define "identity", of course. As long as it doesn't make much of a difference in practical decisions, we don't need to converge on this in order to get things done.
peterhurford wrote: But I desperately wouldn't want you to do that to me; I prefer the life I have, with novelty and boredom. Wireheading may be fun, but it's not Fun. ...for me, anyway.
My prediction is that people who are free to design their own experiences will gravitate toward wireheading instead of Fun, even those who now say otherwise. Think how much money and time people spend on having - relatively repetitive - sexual experiences. This is fun, but not Fun. It's just mechanical animalistic idiosyncratic behavior. Yes, there are variations, but let's be honest, the core of the thing is always essentially the same. I think both Fun and mere pleasure can be - and are already being - superstimulated through technology, but I tentatively predict that with increasing capability to sustainably superstimulate both, the pleasure will win out. I expect most free people in a utopia would spend their time on relatively repetitive, and highly pleasurable, activities. Either way, I agree with Brian: If Fun is somewhat pleasurable and free from suffering and not too resource-costly, it's not that big a deal either way. This is another point where we should converge strategically anyway if we're going to convince non-utilitarians: No "dragging people to the pleasure chambers", as David Pearce once put it.
peterhurford wrote: Thus people aren't good only for their happiness; happiness isn't this abstract good thing separate from people (or nonhuman animals or what not) living happy lives.
I think once you drop the prior-existence condition, this semantic distinction seems to no longer have any practical impact, so I don't mind the difference. Calling people "receptacles" does not win them over anyway. :)

peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: Utilitronium shockwave

Post by peterhurford » Sun Jul 29, 2012 10:04 pm

Brian Tomasik wrote:Interesting. I disagree, but hopefully it doesn't matter that much if Fun isn't too much more expensive to simulate than fun. (Hard to say without getting into more detail.)
My mind is probably too expensive to simulate. I think these kinds of issues can be understood and resolved by breaking down the concept of "happiness" and exploring wireheading / experience machine thought experiments.

~
Brian Tomasik wrote:I do think organisms are made so that they can hold happiness.

But I also wonder how much of our disagreement is just a matter of sentiments attached to different linguistic ways of framing the issue. Certainly I want to make organisms happier for their sakes.
I definitely think it's a linguistic framing thing that's probably trivial. Though I am with Hedonic Treader that people don't like being thought of as receptacles. I do suspect that people-as-happiness-receptacles and happiness-for-the-sake-of-people are analytically identical.

~
Hedonic Treader wrote:Isn't this [preference to be with people of my native universe, even if I wouldn't notice the difference] logically inconsistent with the rejection of preference utilitarianism? After all, if you wouldn't notice a difference (and no others were worse off), then by definition it can't be worse unless you count meeting such preferences as intrinsically valuable.
Yes, it likely is logically inconsistent, but it's not a personal intuition I want to jettison yet, even if it may ultimately be metaphysically confused or confused for some other reason. I suspect that further understanding the nature of "happiness" will either (a1) make sense of this preference, (a2) provide a direct rationale for satisfying it, and (a3) unify happiness and preference approaches; or (b1) conclusively demonstrate that preference and happiness approaches are distinct, (b2) conclusively demonstrate the superiority of happiness approaches, and (b3) provide a reason to completely ignore this native-world preference.

~
Hedonic Treader wrote:it's easy to see how wasteful this distinction can be, and even now, it's clearly not seen as a human right to fly real planes if you can't afford it.
To be fair, I did say I wouldn't mind being in a simulated world provided my native-world associates were put into the simulation along with me.

~
Hedonic Treader wrote:It depends entirely on how we define "identity", of course. As long as it doesn't make much of a difference in practical decisions, we don't need to converge on this in order to get things done.
Indeed, I agree here. Though I do like working out philosophy of identity just for the fun of it.

~
Hedonic Treader wrote:My prediction is that people who are free to design their own experiences will gravitate toward wireheading instead of Fun, even those who now say otherwise. Think how much money and time people spend on having - relatively repetitive - sexual experiences.
Perhaps. My prediction is that people want both hedonia (pure pleasure; "liking") and eudaimonia (connection to genuineness, meaning, purpose; "approval"), but different people want one or the other to different degrees. Those who are deep into hedonia would want wireheading, whereas those who are deep into eudaimonia would not want it, or would think it abhorrent.

And as far as I can tell, directly stimulating eudaimonia is possible but self-defeating on some level, equivalent to self-deception. It would be like taking a utilitarian and deceiving them into thinking they are reducing tons of suffering -- it's just not the point.

Thus fun would be hedonia, and Fun would be hedonia + eudaimonia. I bet you could simulate minds that don't require or care for eudaimonia, but I suspect my kind of utilitarianism would rebel against that.

Further reflection on my intuitions, further philosophical development of utilitarianism, and further scientific progress in understanding happiness/well-being/etc. are necessary, I think, before this problem can be resolved. I'd be cautious about wireheading before we know better, though we certainly can't rule it out as a possibility.
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.

User avatar
Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Post by Hedonic Treader » Mon Jul 30, 2012 10:16 pm

To be fair, I did say I wouldn't mind being in a simulated world provided my native-world associates were put into the simulation along with me.
I see. So the real-world bias is more about consistency of social connections than about a bias for physically embodied interaction. I sometimes get the latter. It also sometimes comes down to epistemic purity, i.e. not being deceived about the nature of the physical context one is in. I can certainly empathize with that, if only for strategic reasons (a simulated mind is a sitting duck for any enemy who has superior power and knowledge over the physical context of the implementation).
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

User avatar
peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University
Contact:

Re: Utilitronium shockwave

Post by peterhurford » Mon Jul 30, 2012 10:46 pm

Hedonic Treader wrote:It also sometimes comes down to epistemic purity, i.e. not being deceived about the nature of the physical context one is in.
I also wouldn't want to be deceived about my physical context. If I'm to be simulated, not only do I (1) want to be simulated alongside a share of my current social contacts, but I also want (2) to be informed and consent to that happening.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Utilitronium shockwave

Post by Brian Tomasik » Sun Aug 05, 2012 5:41 am

Hedonic Treader wrote: I think we can converge on a consensus based on two points: 1) You need to convince real people of any plan that is supposed to be sustainable, and if real people insist on maintaining this bias, we need to make the concession strategically.
Unless someone creates an AI in the basement that takes over the galaxy without asking permission.
Hedonic Treader wrote: 2) It's better to have some wasteful utopia than a big future filled with suffering
Yes.
Hedonic Treader wrote: Calling people "receptacles" does not win them over anyway. :)
True. But I, for one, am proud to be a receptacle. ;)
peterhurford wrote: My prediction is that people want both hedonia (pure pleasure; "liking") and eudaimonia (connection to genuineness, meaning, purpose; "approval")
Yes, but fundamentally, genuineness and meaning are not different from raw pleasure. They're still all feelings that hedonistic utilitarians care about, and they can still all be wireheaded.

In other words: The feeling of genuineness can be faked. :)

I see you already noted this...
peterhurford wrote: And as far as I can tell, directly stimulating eudaimonia is possible but self-defeating on some level, equivalent to self-deception. It would be like taking a utilitarian and deceiving them into thinking they are reducing tons of suffering -- it's just not the point.

User avatar
peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University
Contact:

Re: Utilitronium shockwave

Post by peterhurford » Sun Aug 05, 2012 6:39 am

Brian Tomasik wrote:In other words: The feeling of genuineness can be faked. :)
That genuinely scares me. Luckily I'll be long dead before we have a utopia of minds that derive infinite utility merely from having elbows.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Utilitronium shockwave

Post by Brian Tomasik » Sun Aug 05, 2012 7:11 am

peterhurford wrote: Luckily I'll be long dead before we have a utopia of minds that derive infinite utility merely from having elbows.
Hey, are you being down on Kermit?

User avatar
Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Post by Hedonic Treader » Sun Aug 05, 2012 11:26 am

Brian Tomasik wrote:Unless someone creates an AI in the basement that takes over the galaxy without asking permission.
Yes, but the probability of that happening seems rather low. I'd be more concerned about this if we already had almost-human-level AI and many different factions engaged in an arms race of drastic secret improvements. But even then it's not clear a hard takeoff (followed by world domination) is that likely. You'd expect the rest of the world to band together against any first mover.
In other words: The feeling of genuineness can be faked.
This scares me too, but presumably for a different reason than it scares Peter. I wouldn't intrinsically care whether my own experience of genuineness is fake. But the practical consequences for epistemic sanity, and consequently for the ability to affect my life and the world in desirable ways, could be drastic. I watched a Thomas Metzinger lecture yesterday in which he mentioned two psychiatric patients staring out the window, one of them (genuinely) believing he was causing the sun's movement with his will, the other believing he was controlling cars and pedestrians. Regardless of their state of subjective well-being, there is a reason such people can't live on their own. They would also have a hard time organizing any kind of defense against a hostile force.

In a way, it's quite clear that a lot of our naive realism is already fake genuineness. Your body image is just your brain's working model of your body, even though it feels real to you. This is already true and inevitable. At best we can say evolution has created a capacity to model oneself and the world that is good enough for fitness purposes. But since there's a tradeoff between detailed genuineness (precise models) and cognitive overhead, we would expect that capacity to be relatively streamlined for those purposes (though not absolutely).

This is one of the reasons we might not want to care about genuineness too much except for instrumental reasons: In a way, most of the things we care about have always had strong elements of fake genuineness (naive realism, changing constructions of identity, retrospective distortions etc.)
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

User avatar
peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University
Contact:

Re: Utilitronium shockwave

Post by peterhurford » Sun Aug 05, 2012 5:21 pm

Hedonic Treader wrote:This scares me too, but presumably for a different reason than it scares Peter. [...] This is one of the reasons we might not want to care about genuineness too much except for instrumental reasons: In a way, most of the things we care about have always had strong elements of fake genuineness (naive realism, changing constructions of identity, retrospective distortions etc.)
I disagree with you guys because I still have a strong intuition about getting things done: wanting to do actual things and enjoy them, not just bliss out. I also disagree with you on Molly the Mathematician. I mean, I like blissing out on occasion, but I wouldn't want it to become my entire life, because I want to do actual things and have accomplishments that aren't just me being deceived into feeling accomplished. I don't think that's naive.

It's really weird thinking about it. I'm interested in figuring out the basis for this intuition.

User avatar
Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Utilitronium shockwave

Post by Hedonic Treader » Sun Aug 05, 2012 7:13 pm

Peter, I think the basis for your intuition is an evolved instinct about the instrumental downsides I mentioned above; a wariness of being deceived, especially in a world of hostile others, or maybe a wariness of being wrong about important life-sustaining aspects.

There can be a conflict between epistemic states and hedonistic states, e.g. it is possible that one feels better not knowing something, while at the same time feeling bad about the state of not knowing. I sometimes play games with a commitment not to reload old save games if my character dies. This adds suspense, because I can lose many hours of in-game progress if I step into a trap or lose one battle. I would not want my computer to cheat in my favor, and I would want to know if it did. I also keep the commitment when I lose. However, it feels much better to win than to lose. The point here is that, given my current psychology, knowing the program cheats to let me win would take away the suspense and sense of in-game progress, which is the core hedonistic element of the game. Since the game doesn't have any instrumental value for the real world - it's just entertainment - I would not mind being attached to a "suspense and sense of progress" machine instead. But since I don't have such a machine, I care about following the rules, and about my epistemic state regarding whether I followed them, because my enjoyment depends on it.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

User avatar
peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University
Contact:

Re: Utilitronium shockwave

Post by peterhurford » Sun Aug 05, 2012 7:38 pm

Hedonic Treader wrote:Since the game doesn't have any instrumental value for the real world - it's just entertainment - I would not mind being attached to a "suspense and sense of progress" machine instead.
I think that's where we differ -- I want to place value not just in the suspense and sense of progress, but in actually playing and winning the damn game. I don't want to just feel like I win; I want to actually win. Going back to Molly the Mathematician, I want to have actually discovered the proof, even if I die not knowing I had done so. I wouldn't want to just think that I did.

And I don't think it's purely an epistemic thing, like knowing that I was deceived is the problem, for I wouldn't want to be deceived even if I would never find out about it.

I think it's possible that we differ in our meta-preferences -- just because I don't want to wirehead doesn't mean it wouldn't be good for you to do so. One thing I'm curious about is that I actually have very high life satisfaction, probably due to (what I guess is) an abnormally high happiness set point. I wouldn't mind having my happiness set point artificially raised, as long as I still got to live my life as I do (or a utopian equivalent).

Overall, I just think that happiness / well-being / flourishing / fulfillment are all poorly understood. They're understood well enough to ground an intuitive-level utilitarianism for everyday life, but not well enough to adequately resolve thought experiments about utopias.
