A few dystopic future scenarios

Will we transcend our human bodies? Extend our lives? Create superhuman artificial intelligence? Mitigate existential risks? etc.

Re: A few dystopic future scenarios

Post by Brian Tomasik » Thu Jul 19, 2012 10:35 am

DanielLC wrote:I'm not sure bad is stronger than good. I think good things happen more often, but bad things are more intense.
Right, this may be true in terms of overall hedonics for organisms in the world, but the question we're asking here is what the maximal possible intensity per unit time is. In artificial hedonium/dolorium, it's this intensity that will be simulated nonstop.


Re: A few dystopic future scenarios

Post by DanielLC » Thu Jul 19, 2012 6:57 pm

I don't think we're anywhere near the maximal intensity. We feel things as intensely as we do because that's the intensity that maximizes genetic fitness. I guess there's slight evidence that it's easier to make something suffer, but that would be countered by a slightly higher probability of creating hedonium.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: A few dystopic future scenarios

Post by Pablo Stafforini » Sun Aug 19, 2012 9:38 pm

Hedonic Treader wrote:David Pearce kind of suggested that it may even be just an accident, that evolution could have stumbled into a different solution (gradients of bliss) and that we can shift into that through hedonic enhancement without leaving the Darwinian paradigm. (He doesn't actually express it like this, but I think it's the gist of the abolitionist project). I'm not sure how probable that is, given the apparent robustness of the asymmetry in evolutionary history. Then again, that robustness may be a sign of a local optimum, and a complete redesign could get us to a new one.
I have thought that "the wisdom of nature" heuristic might provide an objection to the abolitionist project. If creatures could be motivated by gradients of bliss rather than by states involving both pain and pleasure, why haven't such creatures evolved naturally? Your suggestion that it was just an accident might provide an adequate response to the objection. The hypothesis seems hard to test, though, since there are no relevant examples of convergent evolution; there is just one data point.


Re: A few dystopic future scenarios

Post by CarlShulman » Fri Dec 07, 2012 8:37 pm

" No, actually, conditional on humans surviving, the most likely scenario is that we will be outcompeted by Darwinian forces beyond our control."

Brian,

You have big unargued-for assumptions here, leaving out inhuman singletons, for instance. Nick Bostrom, the author of the piece you linked, does not consider the scenarios included in his paper to take up most of the relevant probability mass.

Your estimate of a <5% chance of human control of the future is at the extreme left tail among people who have considered the topic: lower than what people at the Future of Humanity Institute or the Singularity Institute would say, and lower than surveys of risk experts, AI experts, and attendees at conferences with AI-risk-related talks and workshops.

http://www.philosophy.ox.ac.uk/__data/a ... report.pdf
It gives an AI extinction risk of 5% by 2100 (higher than 'conventional' risks, and the highest as a portion of its associated catastrophic risk, but still not massive).

(Also Google "Machine Intelligence Survey"; the PDF on the FHI site is down, but you can Quick View the report from Google. This group was selected more specifically for interest in AI risk, but still a majority thought human-level AGI would be very good to neutral/mixed in its impact.)

And you leave out opportunities for happiness to be created by civilizations out of our control (which in expectation I think exceed the pain), perhaps because of your negative-leaning utilitarian perspective, which counts the bads but largely ignores the goods:

Scientific simulations concerned with characteristics of intelligent life and civilizations would disproportionately focus on intelligent life, and influential intelligent life at that, with a higher standard of welfare.

Humans have used our increased power for extensive 'wireheading': foods prepared for taste, Hollywood, sex with contraception, drugs, pornography, art, video games. Eurisko wireheaded. Some wireheading AIs would have morally valuable states: certainly this possibility is linked to the possibility of suffering subroutines.

And of course suffering subroutines must be contrasted with reward subroutines.

Baby universes (a pretty remote possibility) require talking about measure in an infinite multiverse, which is a bit tricky in this context, but basically, given the existence of infinite baby universes, there are infinite instances of suffering and happiness no matter what; all one might do is affect measure. And while there is no satisfactory 'preferred' measure for physicists and cosmologists, on various accounts creating baby universes would leave the measure unchanged, or could be more than offset by other actions affecting measure. And in any case, if we start considering unlikely possibilities of generating infinite quantities of stuff, then the expected production of sapience/consciousness by intelligent beings goes to infinity.

Also, if we use the Self-Indication Assumption, or the Pascalian total-utilitarian equivalent (with one-boxing decision theory), then our attention is focused on worlds in which civilizations are just frequent enough that they can colonize a very large fraction of the universe. Then most of our impact will be in states of affairs where in fact much or most of the animal life is reachable by sapients, and if the sapients convert accessible galactic resources into happy sapient life many orders of magnitude more efficiently than nonsapient life produces net suffering, then the expectation for the universe as a whole is positive (with unreachable wild animals as a rounding error).

See the discussion of the SIA in this paper: http://www.nickbostrom.com/aievolution.pdf

Savage ideologies must be offset against both altruistic and selfish ideologies: creating large amounts of happiness for the good is also an important part of many ideologies, and loyalty to one's group or oneself favors providing more happiness to the above. Resources could be used to increase the longevity, population, and happiness of the in-group, or to hurt the out-group (and with advanced technology the ability to convert resources into life will be greater than in historical times), and the latter is less attractive. Paying large costs to attack others is much less attractive than attacking to steal.

"Torture as warfare" is offset by "heaven in peace". As you discuss in comments, it is a wise and common disposition (for evolutionary, sociocultural, and other reasons) to resent and resist extortion, but to accept mutually beneficial deals. Such trade is positive-sum, while warfare is negative-sum. Such factors would bias inhuman civilizations to producing more goods than bads in interactions with any aliens or other groups concerned with happiness and suffering.

The spread of wild animal life to other stars is offset by the prospect of immensely greater population density in advanced technological civilization. The powerful tend to see to their own pleasure even at the expense of the helpless, but if it is possible to convert the resources sustaining helpless animals into vast numbers of powerful sapient beings, that would tend to happen. See also Robin Hanson's post, "Nature is Doomed":

http://www.overcomingbias.com/2009/09/n ... oomed.html
Last edited by CarlShulman on Sat Dec 08, 2012 2:52 pm, edited 3 times in total.


Re: A few dystopic future scenarios

Post by Brian Tomasik » Sat Dec 08, 2012 3:56 am

Thanks, Carl! To what sorts of scenarios would you assign large probability mass? What kinds of values might inhuman singletons have? Are those different from paperclippers?


Re: A few dystopic future scenarios

Post by CarlShulman » Sat Dec 08, 2012 3:01 pm

I have updated my comment above to include more detail on some omissions and dubious points bearing on the OP's suggestion that the expected net value of the future is negative for non-negative utilitarians who think happiness matters. By my best estimate, that suggestion is false.


Re: A few dystopic future scenarios

Post by Brian Tomasik » Sat Dec 08, 2012 7:12 pm

Thank you, Carl. Your reply is excellent.

Carl's points

Carl and I discussed these issues privately as well to clarify some of his points. I'll do my best here to explain what he said as I walk through his arguments.

How likely is it that human values will control the future?

Carl asked for my probability that, conditional on humans not going extinct by other means (nanotech, biowarfare, etc.), the future would be shaped by human values rather than something that outcompetes us. I said <5%, because keeping intelligences under your control seems really hard. Not only might it require a singleton, but it also requires that the humans controlling the singleton know what they're doing rather than creating paperclippers, which seem to be the default outcome unless you're really careful. As Eliezer has said, even one mistake in the chain of steps needed to get things right would spell failure for human values.
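As a toy sketch of why I end up so low (the steps and the numbers below are invented purely for illustration, not estimates anyone in this thread has defended), multiplying even moderately favorable per-step probabilities along such a chain shrinks the overall figure fast:

```python
# Toy illustration only: the steps and probabilities below are made up.
# If a human-values outcome requires every step in a chain to go right,
# the overall probability is the product of the per-step probabilities.
steps = {
    "a singleton forms instead of an uncontrolled competitive scramble": 0.4,
    "its builders aim at human values at all": 0.5,
    "the goal-specification problem is actually solved": 0.3,
    "no fatal mistake is made along the way": 0.5,
}

p = 1.0
for step, prob in steps.items():
    p *= prob
    print(f"{step}: {prob:.0%} (cumulative {p:.1%})")

print(f"overall chance of a human-values future: {p:.1%}")  # ~3%
```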

Carl suggested that there are other outside-the-box scenarios for maintaining control of the future that I haven't considered. And as his reply notes, my probability is lower than that of anyone at FHI or SIAI, which means I should revise it upward.

How bad would UFAIs be?

I use the term "unfriendly AIs" (UFAIs) to denote AIs that are not controlled by human values. The terminology doesn't imply that they'll necessarily do worse things than human-controlled AIs would -- indeed, they might actually cause less total suffering.

I had presumed that even non-negative-leaning utilitarians might agree that UFAIs would be a net bad outcome. I figured UFAIs wouldn't have much incentive to produce good things (e.g., hedonium), and at the same time, they might do things that would be net bad, like running sentient simulations of nature in order to learn about science.

Carl pointed out that sentient simulations would focus on the more intelligent minds that would be more likely to be net happy. This is true, but you need only simulate a few ant colonies to outweigh a whole bunch of simulations of happy rich people. So I'm doubtful that even for a regular utilitarian, sentient simulations would be positive.

That said, Carl makes other good points: If we got a wirehead AI of the right type (i.e., one where its wireheading was actually pleasure rather than just computations I don't care about), that would be a good thing. And yes, there might also be reward subroutines. Suffering predominates over happiness for wild animals because (a) most wild animals die shortly after birth, and (b) suffering in general is more intensely bad than pleasure is good. I hope (a) wouldn't apply to subroutines, and in any event, death wouldn't be painful for them. Maybe (b) wouldn't apply either, because the cognitive algorithms for pleasure and pain might be symmetric? Or is there something fundamental about the algorithm for suffering that makes it inherently more bad than pleasure is good? As I noted before, "P(D > H) > P(H > D), even if it's not a big difference."

Baby universes

"[Carl:] basically given the existence of infinite baby universes there are infinite instances of suffering and happiness no matter what, all one might do is affect measure"

I don't know how to deal with infinite ethics, but I think my preferred ethical approach would say that it is bad to create new universes even if there are already infinitely many of them and even if doing so doesn't change the relative balance of happiness vs. suffering that they contain.

Are most wild animals reachable?

Carl makes an interesting argument, which might be illustrated as follows.

Hypothesis 1: There isn't much life in the universe.
Hypothesis 2: There are lots of wild animals, but few minds like my own that can shape technology and undertake cosmic rescue missions.
Hypothesis 3: There are lots of wild animals but also lots of minds like my own that can shape technology and undertake cosmic rescue missions.

By the Self-Indication Assumption (SIA), Hypothesis 3 is favored over Hypothesis 2 by orders of magnitude, because under Hypothesis 3 there are orders of magnitude more copies of Brian. So even if Hypothesis 3 were disfavored on other grounds (our knowledge of cosmology, astrobiology, etc.), Hypothesis 3 still wins out in the end, as long as the factor by which it multiplies the number of copies of Brian exceeds the prior-probability advantage of Hypothesis 2 over Hypothesis 3.
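To illustrate the mechanics (the priors and copy counts here are invented; only the structure matters), SIA reweights each hypothesis by the number of copies of the observer it contains:

```python
# Toy SIA calculation. Priors and copy counts are invented for
# illustration only; SIA weights each hypothesis by (prior x number
# of copies of "minds like mine" it contains), then renormalizes.
hypotheses = {
    # name: (prior probability, relative number of copies of Brian)
    "H1: little life anywhere":                   (0.30, 1),
    "H2: many animals, few technological minds":  (0.60, 10),
    "H3: many animals, many technological minds": (0.10, 10_000),
}

total = sum(prior * copies for prior, copies in hypotheses.values())
for name, (prior, copies) in hypotheses.items():
    posterior = prior * copies / total
    print(f"{name}: prior {prior:.2f} -> SIA posterior {posterior:.3f}")

# H3 dominates as long as its copy-count advantage over H2
# (here 10,000/10 = 1,000x) exceeds H2's prior advantage (here 6x).
```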

Even if you don't buy SIA, the claim is that the same preference for Hypothesis 3 over 2 can come from one-boxing total utilitarianism. I think the argument is something like, "If we are in Hypothesis 3, then we can do way more total good in the universe than if we're in Hypothesis 2, so we should act as though we're in Hypothesis 3." This is true as far as things like undertaking cosmic rescue missions -- if there's even a small probability we (and our copies in other parts of spacetime) can help vast numbers of wild animals, the expected value may be high enough to justify it. However, this same kind of reasoning doesn't apply when we're talking about whether, say, we want to create new universes like ours. In that case, our probabilities should follow what we actually think is the case, rather than the way we act on the off chance that it'll have high payoff. I may misunderstand here, so I welcome being corrected.

Savage vs. nice ideologies

Carl makes good points here, especially the observation that selfishness/stealing is more attractive than costly punishment.

Ways forward, revisited

Most of Carl's points don't affect the way negative utilitarians or negative-leaning utilitarians view the issue. I'm personally a negative-leaning utilitarian, which means I have a high exchange rate between pain and pleasure. It would take thousands of years of happy life to convince me to agree to 1 minute of burning at the stake. But the future will not be this asymmetric. Even if the expected amount of pleasure in the future exceeds the expected amount of suffering, the two quantities will be pretty close, probably within a few orders of magnitude of each other. I'm not suggesting the actual amounts of pleasure and suffering will necessarily be within a few orders of magnitude but that, given what we know now, the expected values probably are. It could easily be the case that there's way more suffering than pleasure in the future.

If you don't mind burning at the stake as much as I do, then Carl's comments should make you somewhat more sanguine about the future's prospects. But even if the future is net positive in expectation for these kinds of utilitarians (and I'm not sure that it is, but my probability has increased in light of Carl's reply), it may still be better to work on shaping the future rather than increasing the likelihood that there is a future. Targeted interventions to change society in ways that will lead to better policies and values could be more cost-effective than increasing the odds of a future-of-some-sort that might be good but might be bad.

As for negative-leaning utilitarians, our only option is to shape the future, so that's what I'm going to continue doing.


Re: A few dystopic future scenarios

Post by CarlShulman » Sat Dec 08, 2012 7:27 pm

Brian, you should also modify the OP to take this exchange into account.


Re: A few dystopic future scenarios

Post by Brian Tomasik » Sat Dec 08, 2012 7:40 pm

CarlShulman wrote:Brian, you should also modify the OP to take this exchange into account.
Haha, yes, I was planning to do that but forgot. :)

I don't know how best to incorporate the discussion without making things messy, but maybe the best approach is to copy my "Ways forward, revisited" section into the front to make sure people see it?


Re: A few dystopic future scenarios

Post by CarlShulman » Sat Dec 08, 2012 9:01 pm

"If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all."

Hedonic Treader,

For a given expected quantity of H, a given expected quantity of D, and a utility function that values them additively, the distribution of H and D across futures can't affect expected utility.

Maybe you meant to say that differences in H-D will contribute less to differences in actual realized utility across scenarios. But unless they are very close to equal the penalty will not be very large relative to the orders of magnitude of efficiency gains. If you have a ratio of 3H:1D, then the H-D value is 2, whereas the difference in H-D value between a world with 3H and a world with 1 D is 4, twice the net of the mixed world. With a H:D ratio of 1.5:1, the net H-D would be 0.5, vs a gap of 2.5, five times as great. And we would not expect H and D to be perfectly correlated.
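Spelling out that arithmetic with the same numbers (a quick sketch, counting each unit of D as -1):

```python
# Carl's arithmetic, spelled out. For a given H:D ratio, compare the
# net value of the mixed world (H - D) with the gap between an
# all-H world and an all-D world (H - (-D) = H + D).
for h, d in [(3.0, 1.0), (1.5, 1.0)]:
    net = h - d       # value of the mixed world
    gap = h - (-d)    # all-H world minus all-D world
    print(f"H:D = {h}:{d}  net = {net},  gap = {gap},  "
          f"gap/net = {gap / net:.0f}x")
# 3:1   -> net 2.0, gap 4.0 (twice the net of the mixed world)
# 1.5:1 -> net 0.5, gap 2.5 (five times as great)
```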


Re: A few dystopic future scenarios

Post by mwaser » Sun Dec 09, 2012 10:35 pm

I directly address a lot of these issues in my recent article at Transhumanity.net at http://transhumanity.net/articles/entry ... telligence


Re: A few dystopic future scenarios

Post by Hedonic Treader » Mon Dec 10, 2012 4:33 pm

Brian wrote:It would take millions of years of happy life to convince me to agree to 1 minute of burning at the stake.
Do you hold a corresponding belief of the following sort? "There are neurons encoding pain intensity in such a way that the encoded intensity when burning at the stake is literally 5 x 10^10 times higher than the average encoded pleasure intensity of happy life"?

In other words, do you expect to find neurons (coding pain intensity) that fire 10 orders of magnitude more frequently than corresponding neurons (coding pleasure intensity)?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Re: A few dystopic future scenarios

Post by Brian Tomasik » Mon Dec 10, 2012 6:42 pm

Hedonic Treader wrote: In other words, do you expect to find neurons (coding pain intensity) that fire 10 orders of magnitude more frequently than corresponding neurons (coding pleasure intensity)?
No, definitely not. However, I believe our degree of concern about something doesn't have to track the literal number of neurons of a component part.

From a conversation with Ruairi yesterday:
Ruairi: I had thought maybe there was some way to decide my exchange rate, such as "1 gram of neurotransmitter X released at point A is as good as 1 gram of neurotransmitter Y released at point B is bad". But thinking about it more now, there doesn't seem to be any reason why a measure like this is any more objective than just deciding oneself.

Brian: As far as 1 g neurotransmitter vs. 1 g another neurotransmitter, that could give us some grounding for comparison, but there's a lot more going on. For example, the subjective goodness/badness of stuff involves a lot of manipulation by the conscious brain, activity in the ventral pallidum for pleasure, evaluation combined with raw experience, etc.

In other words, the relevant things are high-dimensional and potentially qualitative. For example, when you feel the same pain, it can seem a lot worse or not depending on whether you know it's causing tissue damage, etc. There's a classic study about people walking across a bridge and mistaking fear for romantic attraction. The same chemicals can feel very different depending on context. There's also longer-term evaluation of an experience (how bad was that?) which involves conscious reflection, etc.

Anyway, point is just that there's a lot of messiness to consider. I think quantitative stuff is relevant and can shape our intuitions, but it will take a while for neuroscience to refine our understanding of what's going on that we care about.

Ruairi: Hm, but even if neuroscience does come up with something, there's no reason we should care what really is there? But maybe I just do care.

Brian: Yes, even if neuroscience comes up with things, we don't have to care. However, we might choose to care because it will change our intuitions. For example, if you didn't know that animals were physiologically similar to humans, you might not care about them at first. You could still not care about them after learning the similarities, but the similarities generally change your intuitions.
Discussing exchange rates with Peter on Facebook, I suggested I might be willing to relax my exchange rate to 1 minute of torture for maybe thousands/hundreds of years of happy life, since I'm undoubtedly biased by scope insensitivity. My exact feelings on this change depending on my mood.


Re: A few dystopic future scenarios

Post by Hedonic Treader » Mon Dec 10, 2012 8:08 pm

Brian Tomasik wrote:Discussing exchange rates with Peter on Facebook, I suggested I might be willing to relax my exchange rate to 1 minute of torture for maybe thousands/hundreds of years of happy life, since I'm undoubtedly biased by scope insensitivity. My exact feelings on this change depending on my mood.
Yes, it's the same for me. On the one hand, imagining something like burning alive for one minute creates very strong aversion. On the other hand, I've probably had more than one minute of total agony if I were to aggregate seconds of pain that happened independently, but with high intensity, in my actual past. And I don't feel that this, in and of itself, negates the value of my life so far (the other day-to-day unpleasantness does a lot of that, though). And I have an intuition that these judgments should correspond, i.e. that it shouldn't matter whether these seconds were in succession or not. In addition, I also have the intuition that if neuroscience came up with a way to literally measure pleasantness/unpleasantness intensity encodings in the brain, that would probably increase my disposition to accept them more directly, i.e. with less of the meta-valuation. When Kahneman talks about the experiencing self vs. the remembering self (and I would add an anticipating or imagining self for things like burning at the stake), I mostly come down on caring about the experiencing self, not so much about the other selves.
Carl wrote:For a given expected quantity of H, a given expected quantity of D, and a utility function that values them additively, the distribution of H and D across futures can't affect expected utility.
Yes, and that wasn't the claim I was trying to make.
Carl wrote: Maybe you meant to say that differences in H-D will contribute less to differences in actual realized utility across scenarios. But unless they are very close to equal the penalty will not be very large relative to the orders of magnitude of efficiency gains. If you have a ratio of 3H:1D, then the H-D value is 2, whereas the difference in H-D value between a world with 3H and a world with 1 D is 4, twice the net of the mixed world. With a H:D ratio of 1.5:1, the net H-D would be 0.5, vs a gap of 2.5, five times as great. And we would not expect H and D to be perfectly correlated.
You're correct: if the expected quantities of H and D are different enough despite sharing common causes (e.g. the existence of powerful enough factions who explicitly care about hedonistic utility), and if these quantities are big enough compared to the rest of the utility distribution, their efficiency gains can dominate the calculus. However, I think the conditions for both H and D share common elements, and are both quite narrow in comparison to the conditions for the existence of a general landscape of hedonistic utility (e.g. evolving/colonizing sentience not deliberately optimized for intensity/duration of pleasantness/unpleasantness). It's not clear to me that H-D dominates the big picture.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Re: A few dystopic future scenarios

Post by Brian Tomasik » Tue Dec 11, 2012 3:09 am

Hedonic Treader wrote: I've probably had more than one minute of total agony if I were to aggregate seconds of pain that happened independently, but with high intensity, in my actual past.
I think I have not. I've experienced plenty of severe pain, but I don't think the sum total of it equals one minute of burning at the stake. My perceived badness of pain is extremely nonlinear with respect to "objective" measures of intensity. For example, I'm the kind of person who would rather have nausea and stomach pain for 2.5 hours to avert vomiting rather than throw up in 30 seconds and get it over with from the beginning.
Hedonic Treader wrote: if neuroscience came up with a way to literally measure pleasantness/unpleasantness intensity encodings in the brain, that would probably increase my disposition to accept them more directly, i.e. with less of the meta-valuation.
Haha, sure, but the tadpoles being eaten alive right now don't have this neuroscience perspective with which to allay their meta-valuations. :)
Hedonic Treader wrote: I mostly come down on caring about the experiencing self, not so much about the other selves.
Me too.


Re: A few dystopic future scenarios

Post by Hedonic Treader » Tue Dec 11, 2012 10:03 am

Brian Tomasik wrote:I think I have not. I've experienced plenty of severe pain, but I don't think the sum total of it equals one minute of burning at the stake. My perceived badness of pain is extremely nonlinear with respect to "objective" measures of intensity. For example, I'm the kind of person who would rather have nausea and stomach pain for 2.5 hours to avert vomiting rather than throw up in 30 seconds and get it over with from the beginning.
Maybe you're right and pain intensity is really very non-linear on the extreme end. I've had my encounters with scalding hot water and so on, but maybe you can't sum it up to equal one minute of burning at the stake. However, neither of us has ever burned at the stake (I hope), and the converse is also quite plausible: Maybe once the pain becomes constant, adrenalin, shock or psychological mechanisms set in and the total experience becomes a blur. How would we know this without having had the experience? And even if we had, memories might not be accurate.

For what it's worth, my memory says that throwing up is less bad than it seems when I anticipate it during nausea.

EDIT: One more thought: If we really care disproportionately about the extremes of pain and suffering, the very lowest-hanging fruit would be hedonic enhancement that takes the edge off of those extremes. A biotech intervention that reduces peak agony intensity by 30%, say, should then yield practically world-shifting utility increases ceteris paribus. That can't be too hard, compared to other strategies.
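A rough sketch of the arithmetic behind that hunch (the cubic weighting below is just an assumed stand-in for "caring disproportionately about the extremes", not anything measured):

```python
# Rough sketch: if disutility grows steeply with intensity (here,
# cubically -- an assumed stand-in for caring disproportionately
# about extremes), then trimming peak intensity by 30% removes most
# of the weighted badness even though the cut looks modest.
def disutility(intensity, exponent=3):
    return intensity ** exponent

peak = 10.0                # arbitrary units of pain intensity
reduced_peak = 0.7 * peak  # the hypothetical 30% reduction

before = disutility(peak)
after = disutility(reduced_peak)
print(f"weighted badness drops by {1 - after / before:.0%}")
# With an exponent of 3, a 30% cut in peak intensity removes ~66%
# of the weighted badness; steeper weightings remove even more.
```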
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Re: A few dystopic future scenarios

Post by Ruairi » Tue Dec 11, 2012 5:00 pm

Brian Tomasik wrote:My perceived badness of pain is extremely nonlinear with respect to "objective" measures of intensity. For example, I'm the kind of person who would rather have nausea and stomach pain for 2.5 hours to avert vomiting rather than throw up in 30 seconds and get it over with from the beginning.
This doesn't affect your values though, right? Just what you happen to enjoy less than other things?


Re: A few dystopic future scenarios

Post by Brian Tomasik » Tue Dec 11, 2012 8:27 pm

Hedonic Treader wrote: Maybe once the pain becomes constant, adrenalin, shock or psychological mechanisms set in and the total experience becomes a blur.
Could be. We can hope. If we assigned 50% probability that it's not as bad as it seems, the expected pain would be reduced by almost 1/2.
Hedonic Treader wrote: For what it's worth, my memory says that throwing up is less bad than it seems when I anticipate it during nausea.
I'm not sure. Throwing up is pretty bad, but it's possible the anticipation exaggerates. Even if I could choose now from the cold-headedness of my armchair which I would prefer, I think I'd still go for the 2.5 hours of agony. OTOH, I haven't vomited since ~1999, so it's possible my brain has built up the illusion that it would be worse than it is.
Hedonic Treader wrote: A biotech intervention that reduces peak agony intensity by 30%, say, should then yield practically world-shifting utility increases ceteris paribus.
Could be! But how are you going to install those into billions of one-day-old minnows being eaten? At this point it's just more feasible to reduce populations of short-lived, r-selected animals.
Ruairi wrote: This doesn't affect your values though, right?
Well, it's a case study to suggest that maybe other people don't realize how bad severe pain is relative to how bad I think it is. Correspondingly, my pain-pleasure exchange rate will tend to be more lopsided than theirs. There are (at least) two possibilities here, both of which could be partly true:
  1. My memories about the severity of the bad stuff are different from theirs. Theirs might be wrong, or mine might be wrong, but either way we should move our estimates toward each other.
  2. Due to individual differences, my experience of the badness of pain is actually worse than theirs for the same kinds of experiences. In this case, we would still move our exchange rates in each other's directions in the sense that, averaged over the population, some people will have higher exchange rates and some will have lower exchange rates. Those who thought pain wasn't so bad will realize that some people think it actually is. I who thought pain was really bad will realize that some people think it's not.


Re: A few dystopic future scenarios

Post by Hedonic Treader » Tue Dec 11, 2012 11:33 pm

Brian Tomasik wrote:
Hedonic Treader wrote: A biotech intervention that reduces peak agony intensity by 30%, say, should then yield practically world-shifting utility increases ceteris paribus.
Could be! But how are you going to install those into billions of one-day-old minnows being eaten?
Cyborgification by self-replicating nanites, of course! :D

Seriously, I agree there is no good way to prevent the suffering of trillions of wild fish at this point. Killing everything off isn't very popular or feasible either even though human industries/agriculture do some of it for profit. Enacting some artificial selection pressure on wild animal populations to increase welfare traits like lower peak pain intensity is just one option for the future, as is the "welfare state" option and the extinction option.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon


Re: A few dystopic future scenarios

Post by Ruairi » Thu Jan 03, 2013 1:22 am

Today I had a few concerns, apologies for the writing style, this is how I write when I'm talking to myself:

If the technology to make artificial sentients of any kind becomes (easily) available in the future, we could simply persuade people to do something like “SETI at home” and run tonnes of happy sentients.

What if it's not easily available?

It doesn’t seem too hard to convince people to run something like this, and it would likely exceed any suffering simulations? Also, they might be run on gradients of bliss instead of suffering.

Or would this work? We can’t do the same thing with the meat industry. What’s the economic cost of simulating a mind?

So the real danger is spreading wild life to other planets? How likely is this?

Do we really know what’s likely at all? Or is it better to improve values, because we’re so unsure what future technology will be, but it seems that: power + current values = bad?

Suffering sentients may also be unlikely, because they could work on gradients of bliss too. However, given the current situation of nature, suffering seems to work better in evolution? If these sentients will dominate the calculations, it's not too hard to push for welfare there? Ask to use gradients of bliss? Not very "out there" either, really.

Perhaps we should be pushing for simulations to be made if it would be easy for their welfare to be made high?

"Happy at Home"
