A few dystopic future scenarios

Postby Brian Tomasik » Tue Dec 13, 2011 8:50 am

Summary, written 7 Dec 2012:

A growing number of people believe that reducing the risk of human extinction is the single most cost-effective undertaking for those who want to do good in the world. I fear that if humans survive, the future will likely not be as sanguine as is often presumed. I enumerate a few suggestive (not exhaustive) bad scenarios that might result both from human-inspired AIs and from "unfriendly" AIs that might outcompete human values. I conclude that rather than trying to increase the odds of Earth-originating AI at all (which could have negative expected value), we might do better to improve the odds that such AI is of the type we want. In particular, for some of us, this may mean the best thing we can do is shape the values of society in such a way that an AI that develops a few decades or centuries from now will show more concern for the suffering of non-human animals and artificial sentients whose feelings are usually ignored.

(See "Rebuttal by Carl Shulman" at the bottom of this post.)

Introduction, written 6 Dec 2012:

Advocates for reducing extinction risk sometimes assume -- and perhaps even take for granted -- that if humanity doesn't go extinct (due to nanotech, biological warfare, or paperclipping), then human values will control the future. No, actually, conditional on humans surviving, the most likely scenario is that we will be outcompeted by Darwinian forces beyond our control. These forces might not just turn the galaxy into nonsentient paperclips; they might also run sentient simulations, employ suffering subroutines, engage in warfare, and perform other dastardly deeds as defined and described below. Of course, humans might do these things as well, but at least with humans, people make the presumption that human values will be humane, even though this may not be the case when it comes to human attitudes toward wild animals or non-human-like minds.

So when we reduce asteroid or nanotech risk, the dominant effect we're having is to increase the chance that Darwinian-forces-beyond-our-control take over the galaxy. Then there's some smaller probability that actual human values (the good, the bad, and the ugly) will triumph. I wish more people gung-ho about reducing extinction risk realized this.

Now, there is a segment of extinction-risk folks who believe that what I said above is not a concern, because sufficiently advanced superintelligences will discover the moral truth and hence do the right things. There are two problems with this. First, Occam's razor militates against the existence of a moral truth (whatever that's supposed to mean). Second, even if such moral truth existed, why should a superintelligence care about it? There are plenty of brilliant people on Earth today who eat meat. They know perfectly well the suffering that it causes, but their motivational systems aren't sufficiently engaged by the harm they're doing to farm animals. The same can be true for superintelligences. Indeed, arbitrary intelligences in mind-space needn't have even the slightest inklings of empathy for the suffering that sentients experience.

In conclusion: Let's think more carefully about what we're doing when we reduce extinction risk, and let's worry more about these possibilities. Rather than increasing the odds that some superintelligence comes from Earth, let's increase the odds that, if there is a superintelligence, it doesn't do things we would abhor.

The scenarios, written 13 Dec 2011:

Robert Wiblin has asked for descriptions of some example future scenarios that involve lots of suffering. Below I sketch a few possibilities. I don't claim these occupy the bulk of probability mass, but they can serve to jump-start the imagination. What else would you add to the list?

Spread of wild-animal life. Humans colonize other planets, spreading animal life via terraforming. Some humans use their resources to seed life throughout the galaxy. Since I would guess that most sentient organisms never become superintelligent, the newly seeded regions will contain vast numbers of planets full of Darwinian agony.

Sentient simulations. Given astronomical computing power, post-humans run ancestor simulations (including torture chambers, death camps, and psychological illnesses endured by billions of people). Moreover, scientists run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They simulate trillions upon trillions of reinforcement learners, like the RL mouse, except that these learners are sufficiently self-aware as to feel the terror of being eaten by the cat.

Suffering subroutines. This one is from Carl Shulman. It could be that certain algorithms (say, simple reinforcement learners) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might become sufficiently similar to the pain programs in our own brains that they actually suffer. But profit and power take precedence over pity, so these subroutines are used widely throughout the AI's Matrioshka brains. (Carl adds that this situation "could be averted in noncompetitive scenarios out of humane motivation.")
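To make the kind of algorithm at issue a bit more concrete, here is a toy sketch of a simple reinforcement learner whose behavior is shaped by a negative "pain" signal. The setup and numbers are purely illustrative (this is not Carl's proposal), and whether running vast numbers of such processes would actually constitute suffering is exactly the open question.

```python
# Toy sketch (illustrative only) of a simple reinforcement learner driven by
# a negative reward signal. Whether processes like this, run at massive scale,
# would count as suffering is the open question raised above.
import random

actions = ["approach", "flee"]
q_values = {a: 0.0 for a in actions}  # learned value of each action
alpha = 0.1                           # learning rate
epsilon = 0.1                         # exploration rate

def reward(action):
    # The environment punishes "approach" -- think of the RL mouse walking
    # toward the cat. The negative number plays the functional role of pain.
    return -1.0 if action == "approach" else 0.0

for step in range(1000):
    if random.random() < epsilon:
        a = random.choice(actions)            # explore
    else:
        a = max(q_values, key=q_values.get)   # exploit current estimates
    # Update the chosen action's value toward the (possibly painful) reward received.
    q_values[a] += alpha * (reward(a) - q_values[a])

print(q_values)  # "approach" ends up with a negative value and is avoided except when exploring
```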

Ways forward, written 5 Dec 2012:

If indeed the most likely outcome of human survival is to create forces with values alien to ours, having the potential to cause astronomical amounts of suffering, then it may actually be a bad thing to reduce extinction risk. At the very least, reducing extinction risk is less likely to be an optimal use of our resources. What should we do instead?

One option, as suggested by Bostrom's "The Future of Human Evolution," is to work on creating a global singleton to rein in Darwinian competition. Obviously this would be a worldwide undertaking requiring enormous effort, but perhaps there would be high leverage in doing preliminary research, raising interest in the topic, and kicking off the movement.

Doing so would make it more likely that humans, rather than minds alien to humans, control the future. But would this be an improvement? It's hard to say. While unfriendly superintelligences would be unlikely to show remorse when running suffering simulations for instrumental purposes, it's also possible that humans would run more total suffering simulations. The only reasons for unfriendly AIs to simulate nature, say, are to learn about science and maybe to explore the space of minds that have evolved in the universe. In contrast, humans might simulate nature for aesthetic reasons, as ancestor simulations, etc. in addition to the scientific and game-theoretic reasons that unfriendly AIs would have. In general, humans are more likely to simulate minds similar to their own, which means more total suffering. Simulating paperclips doesn't hurt anyone, but simulating cavemen (and cavemen prey) does.

So it's not totally obvious that increasing human control over the future is a good thing either, though the topic deserves further study. The way forward that I currently prefer (subject to change upon learning more) is to work on improving the values of human civilization, so that if human-shaped AI does control the future, it will act just a little bit more humanely. This means there's value in promoting sympathy for the suffering of others and reducing sadistic tendencies. There's also value in reducing status-quo bias and promoting total hedonistic utilitarianism. Two specific cases of value shifts that I think have high leverage are (1) spreading concern for wild-animal suffering and (2) ensuring that future humans give due concern to suffering subroutines and other artificial sentients that might not normally arouse moral sympathy because they don't look or act like humans. (2) is antispeciesism at its broadest application. Right now I'm working with friends to create a charity focused on item (1). In a few years, it's possible I'll also focus on item (2), or perhaps another high-leverage idea that comes along.

In his original paper on existential risk, Bostrom includes risks not just about literal human extinction, but also risks that would "permanently and drastically curtail" the good that could come from Earth-originating life. Thus, my goal is also to reduce existential risk, but not by reducing extinction risk -- instead by working to make it so that if human values do control the galaxy, there will be fewer wild animals, subroutines, and other simulated minds enduring experiences that would make us shiver with fear were we to undergo them.

Rebuttal by Carl Shulman, written 8 Dec 2012:

Carl wrote a thorough response to this piece in a later comment.

Brian's response, written 8 Dec 2012:

Brian wrote a reply to Carl. It included the following conclusion paragraphs.

Most of Carl's points don't affect the way negative utilitarians or negative-leaning utilitarians view the issue. I'm personally a negative-leaning utilitarian, which means I have a high exchange rate between pain and pleasure. It would take thousands of years of happy life to convince me to agree to 1 minute of burning at the stake. But the future will not be this asymmetric. Even if the expected amount of pleasure in the future exceeds the expected amount of suffering, the two quantities will be pretty close, probably within a few orders of magnitude of each other. I'm not suggesting the actual amounts of pleasure and suffering will necessarily be within a few orders of magnitude but that, given what we know now, the expected values probably are. It could easily be the case that there's way more suffering than pleasure in the future.
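As a toy illustration of how that exchange rate plays out (the numbers below are invented purely for the example, not estimates of anything):

```python
# Invented numbers, purely to illustrate the exchange-rate point above:
# even if expected pleasure exceeds expected suffering in raw units, a
# negative-leaning weighting can leave the future strongly net-negative.
expected_pleasure = 1e3    # arbitrary units of expected future pleasure
expected_suffering = 1e2   # an order of magnitude less expected suffering
exchange_rate = 1e4        # units of pleasure judged needed to offset one unit of suffering

net_value = expected_pleasure - exchange_rate * expected_suffering
print(net_value)  # -999000.0: negative despite pleasure "winning" in raw units
```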

If you don't mind burning at the stake as much as I do, then your outlook on the future will be somewhat more sanguine on account of Carl's comments. But even if the future is net positive in expectation for these kinds of utilitarians (and I'm not sure that it is, but my probability has increased in light of Carl's reply), it may still be better to work on shaping the future rather than increasing the likelihood that there is a future. Targeted interventions to change society in ways that will lead to better policies and values could be more cost-effective than increasing the odds of a future-of-some-sort that might be good but might be bad.

As for negative-leaning utilitarians, our only option is to shape the future, so that's what I'm going to continue doing.


Why a post-human civilization is likely to cause net suffering, written 24 Mar 2013:

If I had to make an estimate now, I would give ~75% probability that space colonization will cause more suffering than it reduces. A friend asked me to explain the components, so here goes.

Consider how space colonization could plausibly reduce suffering. For most of those mechanisms, it seems at least as likely that they will increase suffering. The following sections parallel those above.

Spread of wild-animal life

David Pearce coined the phrase "cosmic rescue missions" to refer to the possibility of sending probes to other planets to alleviate the wild extraterrestrial (ET) suffering they contain. This is a nice idea, but there are a few problems.
  • We haven't found any ETs yet, so it's not obvious there are vast numbers of them waiting to be saved from Darwinian misery.
  • The specific kind of conscious suffering known to Earth-bound animal life may be rare. Most likely ETs would be bacteria, plants, etc., and even if they're intelligent, they might be intelligent in the way robots are without having emotions of the sort that we care about.
  • Space travel is slow and difficult.
  • It's unclear whether humanity would support such missions. Environmentalists would ask us to leave ET habitats alone. Others wouldn't want to spend the resources to do this unless they planned to mine resources from those planets in a colonization wave.
Contrast this with the possibilities for spreading wild-animal suffering:
  • We could spread life to many planets (e.g., Mars via terraforming, other Earth-like planets via directed panspermia). The number of planets that can support life may be appreciably bigger than the number that already have it. (See the discussion of f_l in the Drake equation.)
  • We already know that Earth-bound life is sentient, unlike for ETs.
  • Spreading biological life is slow and difficult, like rescuing it, but dispersing small life-producing capsules is easier than dispatching Hedonistic Imperative probes or berserker probes.
  • Fortunately, humans might not support the spread of life that much, though some do. For terraforming, there are obvious survival pressures to do it in the near term, but directed panspermia is probably a bigger problem in the long term, and that seems more of a hobbyist enterprise.
Sentient simulations

It may be that biological suffering is a drop in the bucket compared with digital suffering. Maybe there are ETs running sims of nature for science / amusement, or of minds in general for psychological, evolutionary, etc. reasons. Maybe we could trade with them to make sure they don't cause unnecessary suffering to their sims. If empathy is an accident of human evolution, then humans are more likely empathetic than a random ET civilization, so it's possible that there would be room for improvement through this type of trade.

Of course, post-humans themselves might run the same kinds of sims. What's worse: The sims that post-humans run would be much more likely to be sentient than those run by random ETs because post-humans would have a tendency to simulate things closer to themselves in mind-space. They might run ancestor sims for fun, nature sims for aesthetic appreciation, lab sims for science experiments, pet sims for pets. Sadists might run tortured sims. In paperclip-maximizer world, sadists may run sims of paperclips getting destroyed, but that's not a concern to me.

Finally, we don't know if there even are aliens out there to trade with on suffering reduction. We do, however, know that post-humans would likely run such sims if they colonize space.

Suffering subroutines

A similar comparison applies here: humans are likely more empathetic than the average ET civilization, but they're also more likely to run these kinds of computations in the first place. Maybe the increased likelihood of humans running suffering subroutines is smaller than for sentient simulations, because suffering subroutines are accidental. Still, the point remains that we don't know whether there are ETs to trade with.

What about paperclippers?

Above I was largely assuming a human-oriented civilization with values that we recognize. But what if, as seems mildly likely, human colonization accidentally takes the form of a paperclip maximizer? Wouldn't that be a good thing because it would eliminate wild ET suffering as the paperclipper spread throughout the galaxy, without causing any additional suffering?

Maybe, but if the paperclip maximizer is actually generally intelligent, then it won't stop at tiling the solar system with paperclips. It will have the basic AI drives and will want to do science, learn about other minds via simulations, engage in conflict, possibly run suffering subroutines, etc. It's not obvious whether a paperclipper is better or worse than a "friendly AI."

Evidential/timeless decision theory

We've seen that the main way in which human space colonization could plausibly reduce more suffering than it creates would be if it allowed us to prevent ETs from doing things we don't like. However, if you're an evidential or timeless decision theorist, an additional mechanism by which we might affect ETs' choices is through our own choices. If our minds work in similar enough ways to ETs', then if we choose not to colonize, that makes it more likely / timelessly causes them also not to colonize, which means that they won't cause astronomical suffering either. (See, for instance, pp. 14-15 of Paul Almond's article on evidential decision theory.)

It's also true that if we would have done net good by policing rogue ETs, then our mind-kin might have also done net good in that way, in which case failing to colonize would be unfortunate. But while many ETs may be similar to us in failing to colonize space, fewer would probably be similar to us at the level of detail of colonizing space and carrying a big stick with respect to galactic suffering. So it seems plausible that the evidential/timeless considerations asymmetrically multiply the possible badness of colonization more than its possible goodness.
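A rough way to see that asymmetry numerically (all of these correlation counts are invented for illustration; nothing hinges on the particular values):

```python
# Invented numbers, just to illustrate the asymmetry argued for above: many
# civilizations may be correlated with us at the coarse level of "colonize or
# not," but few at the finer level of "colonize AND police galactic suffering."
n_correlated_on_colonizing = 100   # minds whose colonize/don't-colonize choice tracks ours
n_correlated_on_policing = 5       # of those, how many would also carry the big stick as we would
harm_per_colonizer = 1.0           # arbitrary units of expected suffering per colonizing civilization
good_per_policer = 1.0             # expected suffering prevented per policing civilization

# Evidentially, choosing to colonize "brings along" all the coarsely correlated
# colonizers but only the few finely correlated policers.
evidential_harm = n_correlated_on_colonizing * harm_per_colonizer
evidential_benefit = n_correlated_on_policing * good_per_policer
print(evidential_harm - evidential_benefit)  # 95.0: the badness is multiplied more than the goodness
```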

Black swans

It seems pretty likely to me that suffering in the future will be dominated by something totally unexpected. This could be a new discovery in physics, neuroscience, or even philosophy more generally. Some make the argument that because we know so very little now, it's better for humans to stick around for the option value: If they later realize it's bad to spread, they can stop, but if they realize they should, they can proceed and reduce suffering in some novel way that we haven't anticipated.

Of course, the problem with the "option value" argument is that it assumes future humans do the right thing, when in fact, based on examples of speculations we can imagine now, it seems future humans would probably do the wrong thing most of the time. For instance, faced with a new discovery of obscene amounts of computing power somewhere, most humans would use it to run oodles more minds, some nontrivial fraction of which might suffer terribly. In general, most sources of immense power are double-edged swords that can create more happiness and more suffering, and the typical human impulse to promote life/consciousness rather than to remove them suggests that negative and negative-leaning utilitarians are on the losing side.

Why not wait a little longer just to be sure that a superintelligent post-human civilization is net bad in expected value? Certainly we should research the question in greater depth, but we also can't delay acting upon what we know now, because within a few decades, our actions might come too late. Tempering enthusiasm for a technological future needs to come soon or else potentially never.

Re: A few dystopic future scenarios

Postby Gedusa » Tue Dec 13, 2011 9:50 am

In the "Sentient Simulations" category, you've missed out superintelligent AI's simulating lot's of beings which suffer in order to predict the future well. You've also missed the possibility that humans already simulate conscious beings when they want to predict someone's behavior - which I find pretty terrifying, given how many humans there are and how many humans really dislike other humans and daydream about causing them suffering.

A gem I came across only recently was Robin Hanson's brief discussion on "Conditional Morality":
Our evolved moral intuitions are context dependent. We are built to be nicer to each other when times are good, to invest in an attractive reputation. We are also built to form alliances with some in order to counter threats by others; the further in social distance are the threats we perceive, the wider a circle of allies we collect in response. Since we are now richer and have interactions with more distant others, we are nicer to a wider range of allies....

These theories make different predictions about futures where we become poorer and our interactions become more local... the conditional morality theory predicts that the social circle to whom we are nice would narrow to the range of our ancestors with similar poverty and interaction locality.

So if Hanson's singularity comes true, then we might expect humans to be less caring about other people - since such caring effectively only appears when people are very wealthy. Indeed, we might expect this in any scenario where per capita wealth drops.

And I think that those scenarios are terrifying, and I'd really, really like to see the guys over at FHI/SI do some research on how likely they are to happen. If the risk is high... a big extinction taking out the biosphere as well as us is in order.
World domination is such an ugly phrase. I prefer to call it world optimization

Re: A few dystopic future scenarios

Postby Brian Tomasik » Tue Dec 13, 2011 10:08 am

Thanks, Gedusa! I'm (not) glad to have extra bad outcomes added to the list. :)

I don't worry about creating conscious minds when I predict others' behavior, because it seems as though the feelings of those minds loop back onto my own feelings (which is what gives rise to my empathy, etc.). But it is a thought worth exploring. In principle there's no reason this looping-back of emotions should happen, so AIs might very well do away with it to avoid bogging themselves down with mercy for the suffering of others.

Nice quote about conditional morality. What's more, it's plausible that whatever force takes over our world will have no morality whatsoever. Or it might have something it considers "morality" but that we find evil (e.g., Nazi torture or religious punishment).

Re: A few dystopic future scenarios

Postby DanielLC » Tue Dec 13, 2011 8:00 pm

Gedusa wrote:You've also missed the possibility that humans already simulate conscious beings when they want to predict someone's behavior


"Please don't wake up. I don't want to die"
Consequentialism: The belief that doing the right thing makes the world a better place.

Re: A few dystopic future scenarios

Postby Hedonic Treader » Wed Dec 14, 2011 3:02 am

Sentient interstellar replicators. Somewhere between spreading wildlife and sentient AI subroutines. A wildlife-like Darwinian process doesn't need to be restricted to biological animals inhabiting planets. Imagine the launch of an artificial interstellar self-replicating probe that uses cosmic resources to copy itself with variation. This could lead to very different phenotypes with parasitic, predatory, pioneering, etc. survival strategies. Such a replication process, once launched, could even spread to other galaxies.

If the decision-making algorithms of these entities are partially driven by suffering-like error signals, they could have experiences of negative hedonic value. The scope could be very vast, and hedonic quality control could be problematic due to the openly Darwinian nature of the ecosystems (low energy = starvation signal, integrity violation = pain signal, possible threat detection = fear signal, etc.). The difference from sentient subroutines is that these entities don't need to be a part of a generalized super-AI; they could be individualistic, bound to individual physical phenotypes, comparatively simple, and competitive in a Darwinian sense.

[Hypothetically, the first probes could contain non-mutation strategies to prevent a Darwinian process. Hypothetically, they could operate on gradients of bliss if feasible. Hypothetically, they could be the substrate of choice for a utilitronium shockwave.]
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Re: A few dystopic future scenarios

Postby Brian Tomasik » Wed Dec 14, 2011 3:22 am

Hedonic Treader wrote:The scope could be very vast, and hedonic quality control could be problematic due to the openly Darwinian nature of the ecosystems

Fascinating. Yes indeed.

Hedonic Treader wrote:The difference from sentient subroutines is that these entities don't need to be a part of a generalized super-AI; they could be individualistic, bound to individual physical phenotypes, comparatively simple, and competitive in a Darwinian sense.

Good point. I wonder whether these are more or less common across the multiverse than are suffering subroutines.

Re: A few dystopic future scenarios

Postby Hedonic Treader » Mon Apr 09, 2012 2:39 am

Personal sadism and local power concentration. In our current world, people insist on their private spheres free from surveillance. This extends at least to homes and personal computers in free societies. However, at the same time, we allow these people to have children, "own" pets, and program arbitrary algorithms within these unsupervised private realms. Predictably, despite being illegal, children are abused, pets are tortured, and computer algorithms... well, hopefully they're not sentient yet. People assure each other both the right to such privacy and the right to have near absolute power over other sentient beings in these spheres, even though the abuse is illegal if detected.

It is possible that a similar power distribution principle will be translated to a posthuman future, where individual entities with local absolute power assure each other freedom from meddling, while powerless third parties are affected within each private locale. (It may even be a majority of sentients who find themselves in the powerless category.)

One might hope that abuse for instrumental purposes (like political torture) will be obsolete, but not all abuse is instrumental; current humans can derive great pleasure from hurting others. Sadism is a huge part of the human condition, sexual and otherwise, and it seems plausible that these torturous as well as privacy-seeking psychological tendencies may find a translation into posthuman nature. Due to the large scale of such a civilization, this could become a huge numerical problem if not addressed properly.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader

 
Posts: 327
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik » Mon Apr 09, 2012 6:25 am

Personal sadism is one of my worries as well. Homo sapiens' enjoyment systems can be pretty messed up. Just search for {torture sims} and see what horrifying things we humans have fun doing. (There are too many examples to mention here. :? )

Re: A few dystopic future scenarios

Postby Brian Tomasik » Sat Jun 02, 2012 8:13 am

The below is copied from a Facebook discussion. I thought I'd include it here as well to keep everything in the same place.

-------
Even if things go roughly according to the normal state of affairs that we see now, the outcome could be bad if humans who don't share our utilitarian values want to spread nature into the cosmos. Of course *we* would prefer that the nature not have lots of suffering, but not everyone feels this way. (Ned Hettinger: "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support.")

Even though factory farming will one day be abolished (except in ancestor simulations?), there may be other forms of enslavement or brutal treatment that are driven by economics. For example, suppose that a certain form of negative reinforcement learning proved especially useful for computational purposes, and this learning process was sufficiently sophisticated that we regarded it as suffering. Would post-humans care about it enough to use other, more expensive algorithms?

"as people get richer - which most economists prognose as the most probable for the world at large"
But some like Robin Hanson argue that in the far future, uploads will almost certainly expand their populations until they once again hit subsistence levels. Granted, Hanson himself is not pessimistic about this, but I'm not sure we can be confident about his sanguine attitude. For example, what if skimping on pleasure is cheaper?

The worst possible outcomes would likely result if things spiral out of our control. The future is very likely to be determined by Darwinism as much as the past has been, and it's quite plausible that everything humans value will be wiped out by agents that can out-compete us. They needn't care about being humane to their reinforcement-learning algorithms, or even to each other (cf. what many animals do to their rivals in the wild). Maybe wars would break out. Maybe one group would seize control and rule the universe by force.

Empathy is not universal among animals -- many animals don't show sympathy to non-relatives of the same species, and practically no predators feel bad about eating their prey. [Edited: AIs may have game-theoretic reasons to cooperate with other AIs of comparable power, but empathy for the powerless (e.g., a suffering minnow in Nigeria) seems maladaptive in the long run unless social pressures preserve this stance as a fitness-enhancing trait.]

These latter scenarios that I've been painting would certainly fall into the category of "existential risk" by Bostrom's definition -- they are bad outcomes that we wish to avoid. However, the risk of these possibilities is actually increased when we reduce "extinction risk," because these can only happen if humans survive long enough to develop strong AI. If the probability of one of these things given survival is p, then for every 1% by which we reduce the risk of human extinction in other ways, we increase the risk of these outcomes by p * 1%.
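In arithmetic form (the value of p below is assumed purely for illustration):

```python
# Minimal sketch of the arithmetic above: if a bad outcome has probability p
# conditional on humanity surviving, then shaving some amount off extinction
# risk raises the probability of that bad outcome by p times that amount.
p_bad_given_survival = 0.3           # assumed for illustration only
extinction_risk_reduction = 0.01     # "every 1% by which we reduce the risk"

increase_in_bad_outcome_risk = p_bad_given_survival * extinction_risk_reduction
print(increase_in_bad_outcome_risk)  # 0.003, i.e. p * 1%
```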

Re: A few dystopic future scenarios

Postby DanielLC » Sun Jun 03, 2012 12:35 am

Brian Tomasik wrote:The future is very likely to be determined by Darwinism as much as the past has been, and it's quite plausible that everything humans value will be wiped out by agents that can out-compete us. They needn't care about being humane to their reinforcement-learning algorithms, or even to each other (cf. what many animals do to their rivals in the wild)


We could just use the more effective algorithms until we out-compete them, then use the more ethical ones. Of course, that assumes there are utilitarians at the helm.

Re: A few dystopic future scenarios

Postby Hedonic Treader » Sun Jun 03, 2012 9:56 am

DanielLC wrote:We could just use the more effective algorithms until we out-compete them, then use the more ethical ones. Of course, that assumes there are utilitarians at the helm.

Well, that assumes there is a helm. With Darwinism, there usually isn't one. If replicators freely propagate through space without a common non-mutation algorithm, there may only ever be very local helms. And how much suffering are utilitarians willing to create to out-compete what they consider non-utilitarian rivals? If they are forced to callously play this efficiency race perpetually, then what difference does it make that they consider themselves utilitarians?
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader

 
Posts: 327
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Hedonic Treader » Sun Jun 03, 2012 10:05 am

Alan wrote:In humans, sympathy seems to result when "intentional stance" predictive systems bleed over into "mirror neuron" motivational systems, which causes us to feel sorry for others. An AI designed from scratch could likely overcome this configuration and clearly separate the two functions of "other-mind prediction" vs. "self rewards/punishments."

Applies to humans too. As soon as humans can self-modify, involuntary empathy may be on the list of things to go. But it's not a strong prediction since it's not clear that people would like to self-modify this way, and others might trust and/or like them less. On the flip side, involuntary suffering can probably be edited out as well. It's not clear to me in what direction the utility distribution would go once mind re-design becomes feasible.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader

 
Posts: 327
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik » Sun Jun 03, 2012 12:02 pm

Hedonic Treader wrote:On the flip side, involuntary suffering can probably be edited out as well. It's not clear to me in what direction the utility distribution would go once mind re-design becomes feasible.

Or, at least, unintentional suffering can be edited out. There will always remain the risk of intentional torture, for purposes of threats/warfare, or (hopefully less often) for sadistic entertainment. :?

Re: A few dystopic future scenarios

Postby Hedonic Treader » Sun Jul 15, 2012 2:00 am

In this blog post, Carl Shulman introduces dolorium and hedonium as concepts and makes two assumptions relating them to the future. Hedonium (H) is resource use optimized for pleasant experience, such as wireheading; dolorium (D) is resource use optimized for unpleasant experience.

One assumption is that

hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the "hedons per joule" or "hedons per computation" of hedonium (call this H), minus the expected production of dolorium, multiplied by "dolors per joule" or "dolors per computation" (call this D).


In other words, what really matters in the future is H-D; the rest of sentient life has a comparatively small impact on the utility total because of its different optimization focus.

The other assumption is an optimistic one: Since H and D aren't constrained by fitness considerations, the current finding that bad is stronger than good in darwinian life doesn't have to apply, and we can instead assume symmetry. Furthermore, we can assume a surplus of H over D under realistic assumptions:

Even quite weak benevolence, or the personal hedonism of some agents transforming into or forking off hedonium could suffice for this purpose.


So the future would look good for hedonistic utilitarianism.

I think the assumption of symmetry (because H and D aren't constrained by fitness considerations) is a valid one, but it may reduce our expectation value of both H and D in any scenario in which resource use is mostly driven by darwinian algorithms. Assume a space colonization event resulting in an open evolution of cultures, technologies, space-faring technological and biological phenotypes, etc. How many of them will produce either H or D? Wireheading temptations can locally generate H, and game-theoretic considerations can result in D (threats of supertorture as an extortion instrument). But assuming a relatively low level of global coordination, both H and D will probably only exist in small quantities: There will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.

Furthermore, the expectation values of H and D seem to be linked: In scenarios in which a high quantity of H can be expected, high quantities of D are also more probable, and vice versa. Assume a scenario in which powerful factions have explicit hedonistic goals and want to produce H. Those are exactly the kinds of scenarios in which we would see rivals credibly threatening to produce large quantities of D in order to extort resource shares for their own fitness from the hedonistic factions. Conversely, if D has no practical use because no one powerful enough will care about it, H is also much less likely because the powerful factions all care about other things than hedonism (probably just survival and reproduction of their idiosyncratic patterns).

If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all.
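To put the argument in toy expected-value form (all numbers invented; the only point is to show how linkage between H and D, plus selection effects against both, can shrink the H-D term below ordinary Darwinian suffering):

```python
# Invented numbers, only to illustrate the argument above: if selection effects
# make both hedonium (H) and dolorium (D) rare, and their expected quantities
# rise and fall together, the H - D term need not dominate the expected utility
# of a big future.
intensity_per_unit = 1000.0           # hedons/dolors per unit of optimized resource use
p_hedonium = 0.01                     # chance of a future where H gets produced at scale
p_dolorium_given_hedonium = 0.9       # linkage: D (e.g., extortion threats) is likeliest where H is pursued
p_dolorium_given_no_hedonium = 0.001  # D is rarely produced where nobody powerful cares about hedonism

expected_H = p_hedonium * intensity_per_unit
expected_D = (p_hedonium * p_dolorium_given_hedonium
              + (1 - p_hedonium) * p_dolorium_given_no_hedonium) * intensity_per_unit

ordinary_darwinian_suffering = 30.0   # non-optimized suffering in the same arbitrary units

print(expected_H - expected_D)                                      # ~0: H and D nearly cancel
print(abs(expected_H - expected_D) < ordinary_darwinian_suffering)  # True: H - D need not dominate
```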
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader

 
Posts: 327
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby DanielLC » Sun Jul 15, 2012 11:28 pm

Hedonic Treader wrote:With Darwinism, there usually isn't one. If replicators freely propagate through space without a common non-mutation algorithm, there may only ever be very local helms.


Use a common non-mutation algorithm. Once it's sufficiently outcompeted everything else, set it to make happy beings. When a new threat appears, the utilitarian beings will be able to easily overpower it through numbers, or through a relatively small number of unhappy soldiers.

Re: A few dystopic future scenarios

Postby Brian Tomasik » Mon Jul 16, 2012 2:58 am

Thanks, HedonicTreader! I really do like this argument, even if I'm not sure whether I agree with it. In particular, I'm not sure whether H does in fact equal D. I'm also not sure if I care more about D even if H == D.

Hedonic Treader wrote:In other words, what really matters in the future is H-D

Or, rather, H*(amount of happiness at level H) - D*(amount of suffering at level D).

Hedonic Treader wrote:bad is stronger than good

What a great paper -- thanks!

Hedonic Treader wrote:There will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.

Yes. Quite sad.

Hedonic Treader wrote:Furthermore, the expectation values of H and D seem to be linked

Another obvious reason for the connection is that you need to know how to create extreme happiness/suffering, and that would take quite a bit of work to figure out.

Hedonic Treader wrote:If the expectation values of H and D are roughly linked, and open colonization and evolution cause strong selection effects against using resources on H and D, H-D may not dominate the expected utility of a big future after all.

Yes, could be. It's very hard to say, but at the same time, there aren't many questions more important than this one.

Re: A few dystopic future scenarios

Postby Hedonic Treader » Mon Jul 16, 2012 7:39 am

Brian Tomasik wrote:In particular, I'm not sure whether H does in fact equal D.

I think the argument from symmetry is not a bad one. Of course, this doesn't make the hypothesis certain, just plausible. The evolved intensity asymmetry (bad is stronger than good) may have specific fitness-related functions. David Pearce kind of suggested that it may even be just an accident, that evolution could have stumbled into a different solution (gradients of bliss) and that we can shift into that through hedonic enhancement without leaving the Darwinian paradigm. (He doesn't actually express it like this, but I think it's the gist of the abolitionist project). I'm not sure how probable that is, given the apparent robustness of the asymmetry in evolutionary history. Then again, that robustness may be a sign of a local optimum, and a complete redesign could get us to a new one.

Brian Tomasik wrote:I'm also not sure if I care more about D even if H == D.

I came around to caring about both equally. I think most of the intuition that D matters more comes from our experience of the asymmetry, which would not apply to H and D by hypothesis. Another part is a feeling of injustice, or specific compassion for the worst-case perspectives, which are delocalized from the other perspectives, including the high-pleasure ones. There is no real objection to that, but I found that I still bite the bullet. I wouldn't apply a pain-avoidance premium to my own life, given equal intensities and qualities of pleasure and pain. Since my personal egoistic policy and my utilitarianism should collapse in the special case of solipsism, it would be logically inconsistent to apply a value asymmetry to utilitarianism that I would not accept for my egoism. (I would not want to waste a pleasure surplus.)

Brian Tomasik wrote:Or, rather, H*(amount of happiness at level H) - D*(amount of suffering at level D).

Yes, that's what I meant to express. Thanks for the correction.

Hedonic Treader wrote:There will be ordinary selection effects against wireheads; Darwinism favors reproduction optimizers instead.

Brian Tomasik wrote:Yes. Quite sad.

Thankfully, there will also be selection effects against suffering maximizers. We kind of take it for granted, but maladaptive sadists fare just as poorly under Darwinism as wireheads do. This is a huge advantage.

Brian Tomasik wrote:Another obvious reason for the connection is that you need to know how to create extreme happiness/suffering, and that would take quite a bit of work to figure out.

Yes. The knowledge needed to create H also makes D more feasible, and vice versa. The big question is whether there is an asymmetry in the likelihood of this knowledge being used on H vs. D. Plausible motivations for H are obvious; plausible motivations for D may be blackmail or out-group hatred or sadism. I think that if the blackmail function were out of the picture, the expected quantity of H could be higher.

DanielLC wrote:Use a common non-mutation algorithm. Once it's sufficiently outcompeted everything else, set it to make happy beings. When a new threat appears, the utilitarian beings will be able to easily overpower it through numbers, or through a relatively small number of unhappy soldiers.

Hm, yes, even though I would suspect that the most resource-efficient forms of hedonium would not provide a strength in numbers on any useful metric.

I think your suggested strategy could make sense in a race for a stable singleton that will dominate the local universe forever, or if there are local sub-clusters to be conquered, bought or colonized, and if these clusters can be defended more easily than conquered once in possession. Such clusters could be physical (e.g. star systems that can be flung into isolation or otherwise guarded efficiently). Or virtual, if there is some kind of superstructure that rigidly enforces property rights or resource claims through agreements or legal rule systems no one has sufficient incentive or power to break.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader

 
Posts: 327
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby Brian Tomasik » Mon Jul 16, 2012 2:12 pm

I'm continually impressed by your insights on these matters, Hedonic Treader. Thanks for the great discussion!

Hedonic Treader wrote:The evolved intensity asymmetry (bad is stronger than good) may have specific fitness-related functions.

Yes, that seems quite possible, and symmetry is somewhat compelling theoretically. However, the fact that we have one data point on the negative side makes our posterior probabilities slightly asymmetric: P(D > H) > P(H > D), even if it's not a big difference.
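One back-of-the-envelope way to cash that out (my own toy model, not anything either of us has committed to): treat "does bad come out stronger than good in an evolved biosphere?" as a coin of unknown bias, start from a symmetric prior, and update on the single observation that Earth landed on the negative side.

```python
# Back-of-the-envelope sketch: a symmetric Beta(1,1) prior over the chance that
# an evolved biosphere ends up with "bad stronger than good," updated on the one
# observation we have (Earth, which did). Laplace's rule of succession then gives
# the posterior probability for the next case.
prior_alpha, prior_beta = 1, 1                        # symmetric prior
negative_observations, positive_observations = 1, 0   # Earth: bad came out stronger than good

posterior_alpha = prior_alpha + negative_observations
posterior_beta = prior_beta + positive_observations

p_next_biosphere_negative = posterior_alpha / (posterior_alpha + posterior_beta)
print(p_next_biosphere_negative)  # 0.666...: slightly asymmetric, as in P(D > H) > P(H > D)
```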

Hedonic Treader wrote:I came around to caring about both equally. I think most of the intuition that D matters more comes from our experience of the asymmetry, which would not apply to H and D by hypothesis. Another part is a feeling of injustice, or specific compassion for the worst-case perspectives, which are delocalized from the other perspectives, including the high-pleasure ones. There is no real objection to that, but I found that I still bite the bullet.

I know what you mean. I'm inclined to bite the bullet some of the time, but at other times I refuse. It can depend quite a bit on my mood at the time. :)

Re: A few dystopic future scenarios

Postby Hedonic Treader » Tue Jul 17, 2012 5:12 am

Brian Tomasik wrote:I'm continually impressed by your insights on these matters, Hedonic Treader. Thanks for the great discussion!

Thanks! I find my thoughts circle back to this topic repeatedly because of the high stakes involved.

Brian Tomasik wrote:However, the fact that we have one data point on the negative side makes our posterior probabilities slightly asymmetric: P(D > H) > P(H > D), even if it's not a big difference.

Yes, unfortunately this seems to be the case.

Brian Tomasik wrote:I know what you mean. I'm inclined to bite the bullet some of the time, but at other times I refuse. It can depend quite a bit on my mood at the time. :)

I know the phenomenon quite well. Rationally speaking, such value judgments shouldn't shift with, say, current blood sugar levels, but they often do. There's evidence that this even affects judges, so that the length of prison sentences or the probability of probation correlates with it. The problem with values that shift often is that we end up playing games against our own past and future selves, which is inefficient.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon
User avatar
Hedonic Treader

 
Posts: 327
Joined: Sun Apr 17, 2011 11:06 am

Re: A few dystopic future scenarios

Postby DanielLC » Tue Jul 17, 2012 7:50 pm

I'm not sure bad is stronger than good. I think good things happen more often, but bad things are more intense. I suspect that it adds up to around zero, although even if that's the case, it would still come out net bad, since the pain of death is bad and has no long-term psychological effects (for the simple reason that you won't be around to have them).
