Friendly AI and utilitarianism

For ethics in the real world - bioethics, law, effective altruist outreach etc.
lukeprog
Posts: 9
Joined: Mon Jan 12, 2009 5:41 am

Friendly AI and utilitarianism

Postby lukeprog » Sun Aug 14, 2011 7:43 pm

Edit: This thread now contains many overlapping discussions. I'm keeping an updated index of the posts in the narrower discussion between Alan and me here.

I'm currently a researcher for Singularity Institute because I think that producing research helpful for creating Friendly AI is the most important (and satisfying) thing I can do with my time. (For the basics, see my Singularity FAQ.)

Some members of this forum have expressed doubts about the worthiness of the Friendly AI project. If I'm right about the value of Friendly AI, then I'd like to persuade others of its value. If I'm wrong about the value of Friendly AI, then I'd like to be persuaded of that so I can spend my time doing something else.

I'd like to focus a discussion not on the plausibility of an intelligence explosion or on many other possible topics, but instead on the issues of whether Friendly AI would be 'good' for the universe.

Alan Dawrst, in particular, has expressed some misgivings about SI's mission:

A main reason why I’m less enthusiastic about SIAI is that the organization’s primary focus is on reducing existential risk, but I really don’t know if existential risk is net good or net bad. As I said in one Felicifia discussion: “my current stance is to punt on the question of existential risk and instead to support activities that, if humans do survive, will encourage our descendants to reduce rather than multiply suffering in their light cone. This is why I donate to Vegan Outreach, to spread awareness of how bad suffering is and how much animal suffering matters, with the hope that this will eventually blossom into greater concern for the preponderate amounts of suffering in the wild.”

“Safe AI” sounds like a great goal, but what’s safe in the eyes of many people may not be safe for wild animals. Most people would prefer an AI with human values over a paperclipper. However, it’s quite possible that a paperclipper would be less likely to cause massive suffering than a human-inspired AI. The reason is that humans have motivations to spread life and to simulate minds closer to their own in mind-space; simulations of completely foreign types of minds don’t count as “suffering” in my book and so don’t pose a direct risk. (The main concern would be if paperclippers simulated human or animal minds for instrumental reasons.) In other words, I might prefer an unsafe AI over a “safe” one. Most unsafe AIs are paperclippers rather than malevolent torturers.


I'd like to clarify my understanding of this position. Are we using total utilitarianism or average utilitarianism to make the moral calculus? Negative or positive utilitarianism? Are we using a person-affecting view or not? Is there a special concern for terrestrial animal suffering, or instead for suffering in general? (We may be approaching a transition point after which most conscious minds will be made not of meat but of non-meat substrates; is there a reason to care more about the suffering of minds that run on meat?)

I hope Mr. Dawrst will be interested to engage me directly, just so the conversation can be manageable, but of course others are welcome to join the conversation as well.

Cheers,

Luke
Last edited by lukeprog on Tue Aug 23, 2011 2:50 am, edited 1 time in total.

Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Friendly AI and utilitarianism

Postby Brian Tomasik » Sun Aug 14, 2011 9:11 pm

Thanks for the post, Luke! It's great to discuss these things in a public forum.

lukeprog wrote:Are we using total utilitarianism or average utilitarianism to make the moral calculus? Negative or positive utilitarianism? Are we using a person-affecting view or not?

Total hedonistic utilitarianism, without person-affecting view. As far as negative vs. ordinary utilitarianism, I waffle between the two positions depending on what thought experiment you throw at me. For the sake of this discussion, let's assume negative utilitarianism, because that's what I would default to if I had to make a decision now. (This is non-pinprick negative utilitarianism. Suffering only starts to lexically override pleasure when it becomes as bad as torture or being eaten alive.)

lukeprog wrote:Is there a special concern for terrestrial animal suffering, or instead for suffering in general? (We may be approaching a transition point after which most conscious minds will be made not of meat but of non-meat substrates; is there a reason to care more about the suffering of minds that run on meat?)

I'm not entirely sure; I waffle on this question as well. At least, I'm pretty sure that carbon doesn't matter. If you constructed an animal made of other elements that was essentially a cell-for-cell replacement of a meaty animal, then I would care about it -- probably equally with the original. If it wasn't cell-for-cell identical but still exhibited the same behavior, used nearly the same neural architecture, and ran nearly the same cognitive algorithms, then I think that would also count.

Things become more murky when we think about simulated animals that don't live in the real world. If you constructed a cell-for-cell replaced bionic brain, without the attached body, but gave it inputs as though it was acting in the world, then yes, that would count as well. I'm less certain when I consider an electronic brain that approximates the algorithms of animal brains but using a different physical instantiation, e.g., the hardware of digital computers. If, hypothetically, the hedonic experience of animal brains is importantly tied with neural-network calculations, then I'm not sure whether a computer simulating such calculations using floating-point arithmetic (rather than real, physical electrical impulses) would count. If the computer were to use a more different approach (e.g., support vector machines with numerical matrix calculations), my uneasiness increases. And a giant lookup table almost certainly doesn't pass. :D

I mentioned "hedonic experience [...] importantly tied" to these brain processes, because I don't care nearly as much about the operations that go on as part of an organism's non-conscious functioning (especially with regard to things like, say, the circulatory system or renal system that are divorced from the brain). There's some sort of self-reflective awareness of the goodness and badness of one's emotional states that's necessary for a mind to matter morally, and digital brains would need to have that in order for me to start caring about them.

Incidentally, I'm curious to hear your own thoughts on these questions. :) Knowing each other's value systems helps us watch out for biases in reasoning. (I'm sure my own intellectual calculations are sometimes biased by my negative-utilitarian leanings.)
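Brian's contrast above between an algorithmic brain and a giant lookup table can be made concrete with a toy sketch. This is purely illustrative (the parity functions are an arbitrary stand-in of my choosing, and nothing here is meant to settle the consciousness question): two programs with identical input-output behavior, one of which computes and one of which merely retrieves.

```python
# Two programs with identical input-output behavior: one computes its answer
# step by step, the other merely retrieves a precomputed entry. Behaviorally
# indistinguishable on their domain, yet internally nothing alike.

def parity_computed(n):
    """Compute the parity (XOR of all bits) of n by an actual algorithm."""
    result = 0
    while n:
        result ^= n & 1  # fold in the lowest bit
        n >>= 1          # move on to the next bit
    return result

# The "giant lookup table": every answer tabulated in advance, so that no
# computation is left to do at query time.
PARITY_TABLE = {n: parity_computed(n) for n in range(256)}

def parity_lookup(n):
    """Return the same answer as parity_computed, by pure retrieval."""
    return PARITY_TABLE[n]

# Identical behavior over the whole (finite) input domain:
assert all(parity_computed(n) == parity_lookup(n) for n in range(256))
```

The table is behaviorally equivalent to the algorithm, which is why the intuition that only one of the two could matter morally has to appeal to internal process rather than to input-output behavior.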

Pablo Stafforini
Posts: 174
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford
Contact:

Re: Friendly AI and utilitarianism

Postby Pablo Stafforini » Mon Aug 15, 2011 12:00 am

Are we using total utilitarianism or average utilitarianism to make the moral calculus? Negative or positive utilitarianism? Are we using a person-affecting view or not? Is there a special concern for terrestrial animal suffering, or instead for suffering in general?


I would like to point out that the problem of creating "friendly" AI will very likely retain its enormous significance for utilitarians regardless of the answers one gives to these questions. This is because an AI has the potential for making huge changes to both the total and the average amount of utility, for creating huge quantities of both happiness and suffering, and for having a huge impact on both terrestrial and extraterrestrial sentient beings. So even if one is uncertain about how to answer some of these questions, one can still be relatively certain that a Singularity will be an incalculably important event.

What is less clear is whether the event will be incalculably good or incalculably bad.

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Friendly AI and utilitarianism

Postby DanielLC » Mon Aug 15, 2011 4:42 am

My biggest objection is the Doomsday argument.

Beyond that, I'm not convinced you guys actually know what you're talking about. It's hard enough to write a few thousand lines of code without bugs. From what I understand, SIAI is trying to find an algorithm for intelligence with a human-acceptable utility function, implement that, and get it correct on the first try. That doesn't seem feasible.

I also don't see why you can be certain AI will go foom. Its intelligence will be positive feedback, but who's to say that the feedback coefficient will stay above one? Given how little we know, if it is above one, it's probably well above one, and will stay that way for a bit, so it's not that unlikely that it will go foom, but it's something like 40%. That doesn't really make a big difference, but if SIAI thinks it's 99.9%, that makes me wonder about how well they're avoiding bias. Just because Eliezer taught me most of what I know about rationality doesn't mean that he's perfect at applying it.
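DanielLC's point about the feedback coefficient can be sketched with a toy model. This is my own illustration, not anything SIAI has published; the update rule and the `gain` and `decay` numbers are arbitrary assumptions chosen only to show the two regimes:

```python
# Toy model of recursive self-improvement: each cycle the AI multiplies its
# capability by (1 + gain), and the per-cycle gain itself shrinks by a decay
# factor as easy improvements get used up.

def run_cycles(capability, gain, decay, n_cycles):
    """Return capability after n_cycles of self-improvement.

    decay == 1.0 keeps the feedback coefficient above one forever ("foom");
    decay < 1.0 eventually pushes it back toward one, and growth levels off.
    """
    for _ in range(n_cycles):
        capability *= 1.0 + gain
        gain *= decay  # diminishing returns on each round of improvement
    return capability

foom = run_cycles(1.0, gain=0.5, decay=1.0, n_cycles=20)    # sustained feedback
fizzle = run_cycles(1.0, gain=0.5, decay=0.6, n_cycles=20)  # decaying feedback
print(foom, fizzle)  # explosive growth vs. growth that plateaus early
```

Whether real self-improvement looks more like the first run or the second is exactly the question being raised here; the model only shows how sensitive the outcome is to whether the coefficient stays above one.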
Consequentialism: The belief that doing the right thing makes the world a better place.

Ruairi
Posts: 385
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: Friendly AI and utilitarianism

Postby Ruairi » Mon Aug 15, 2011 10:40 am

Just as regards existential risk: the extinction of humans means no more factory farms, which imo equals net positive utility.

It does mean we can never solve wild-animal suffering or suffering on other planets. My knowledge of all this stuff is incredibly limited, so I dunno how possible these things are, but with my current understanding existential risk doesn't seem to me like something utilitarians should focus on.

Edit: also, "Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war." Superintelligence definitely doesn't mean super empathy. I used to read a lot (and sometimes still do) about primitive tribes, and if anything they seem much more empathic than modern humans. I also wouldn't say they're less intelligent in any way; using very complex tools just worked for us evolutionarily.

Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Friendly AI and utilitarianism

Postby Hedonic Treader » Mon Aug 15, 2011 11:34 am

DanielLC wrote:Beyond that, I'm not convinced you guys actually know what you're talking about. It's hard enough to write a few thousand lines of code without bugs. From what I understand, SIAI is trying to find an algorithm for intelligence with a human-acceptable utility function, implement that, and get it correct on the first try. That doesn't seem feasible.

It's probably theoretically feasible, but the success probability is really low imho. So is the success probability of our current actions affecting other high-yield futuristic utility shifts, such as very sophisticated hedonic enhancement (Abolitionism HI style). The relevant questions, imho, are:
a) does SIAI compete for money/effort with other efficient charities, and if so, is additional funding for SIAI worth this opportunity cost?
b) does FAI research actually cause net harm, such as generating sentient suffering on a large scale, maybe due to a botched or CEV-derived utility function? (Note that this expected net harm seems to be inversely correlated with the general probability of AI foom, i.e. the higher this possible harm, the higher the payoff of successful non-harmful FAI.)

I also don't see why you can be certain AI will go foom. Its intelligence will be positive feedback, but who's to say that the feedback coefficient will stay above one? Given how little we know, if it is above one, it's probably well above one, and will stay that way for a bit, so it's not that unlikely that it will go foom, but it's something like 40%.

There is no need for certainty here. 40% probability of such a utility-shifting tipping point would be huge!

That doesn't really make a big difference, but if SIAI thinks it's 99.9%, that makes me wonder about how well they're avoiding bias.

Can you cite where they state that number? I don't remember reading it, and I didn't find it in the FAQ.

Ruairi wrote:just as regards existential risk, the extinction of humans means no more factory farms which imo equals net positive utility.

Factory farming is just one fragment of the total set of systematic suffering-creation factors. Even if human civilization persists, factory farming will probably be phased out in the relatively near-term future, or at least animals will be domesticated to the point where their psychology is truly adapted to their lives in factory farms, i.e. they no longer feel fear or a desire to move around much. Either way, it's unlikely that factory farming is a dominant factor in the long-term utility landscape of our future light cone.

EDIT: One additional point that should be considered: Even though the practical feasibility of both FAI and bioethical Abolitionism is low at the moment, highlighting and mainstreaming the relevant problems and arguments is of general societal utility imho. I don't expect SIAI to implement FAI, and I don't expect David Pearce to create a superhappy race of posthumans in his basement, but I'm very glad the associated resources, and the discussion space around them, exist.
"The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it... Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient."

- Dr. Alfred Velpeau (1839), French surgeon

Mike Radivis
Posts: 32
Joined: Thu Aug 04, 2011 7:35 pm
Location: Reutlingen, Germany
Contact:

Re: Friendly AI and utilitarianism

Postby Mike Radivis » Mon Aug 15, 2011 1:37 pm

The main problem is an ethical/memetic one: do people accept suffering as legitimate, necessary, or even desirable? If so, I doubt that friendly AI (using CEV or anything else) will create a world that utilitarians find pleasant. Thus, my main priority is to spread memes that are close to utilitarian thinking. Alan Dawrst does the right thing, I guess:
This is why I donate to Vegan Outreach, to spread awareness of how bad suffering is and how much animal suffering matters, with the hope that this will eventually blossom into greater concern for the preponderate amounts of suffering in the wild.


Honestly, I think SIAI focuses too much on purely technical problems. Those are secondary and boring in my eyes, at least until the primary value-centered problems are "solved".

Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Friendly AI and utilitarianism

Postby Hedonic Treader » Mon Aug 15, 2011 2:10 pm

Mike Radivis wrote:The main problem is an ethical/memetic one: do people accept suffering as legitimate, necessary, or even desirable? If so, I doubt that friendly AI (using CEV or anything else) will create a world that utilitarians find pleasant.

This particular argument does not apply to FAI versions with a simpler goal system, such as "minimize suffering, create pleasure". This means that the "or anything else" clause is false. Furthermore, it does not apply to a goal system that considers people's revealed preferences, if those preferences align with avoidance of suffering (which I think they mostly do). A properly done CEV would also probably cover this, but I'm skeptical of CEV for conceptual and practical reasons.

Mike Radivis wrote:Honestly, I think SIAI focuses too much on purely technical problems. Those are secondary and boring in my eyes, at least until the primary value-centered problems are "solved".

If you think that the priority of problems depends on how boring they appear to you, you're essentially seeking entertainment rather than general utility maximization. That is legitimate if you care mostly about your own well-being, but in that case, other entertainment venues are far more efficient, including those that generate virtual warm fuzzies.

lukeprog
Posts: 9
Joined: Mon Jan 12, 2009 5:41 am

Re: Friendly AI and utilitarianism

Postby lukeprog » Mon Aug 15, 2011 10:46 pm

Alan,

Thanks for your reply. Others have raised interesting topics, but I must stay focused on our more narrow discussion.

You've named the positions you waffle among when it comes to normative ethics. For the moment you're working with total hedonistic utilitarianism, and for this discussion we'll assume non-pinprick negative utilitarianism.

I'm less clear on whether you think mind substrates and programming matter. My first guess is that you're focused on conscious subjective experience, and you have some guesses about which types of substrates and programming could manifest conscious subjective experience. But if those guesses turned out to be wrong (say, if conscious subjective experience could be implemented by a lookup table), then you'd still care about the conscious subjective experience, wherever it turned out to be. Is that about right?

(In general, please assume I haven't read your essays. If you wish to make a particular point by linking to an essay of yours, please do so.)

Per your request, I'll try to explain the relevant bits of my own view. I'm currently trying to explain my metaethical views here, but it will be some time before that blog post sequence is complete.

I see at least two ways of thinking about ethics. One method, what I call 'austere metaethics', requires us to define what we mean by moral terms, and then it becomes a scientific project to figure out what in the world corresponds to those moral terms according to our stipulated definitions. We would be doing 'austere metaethics' if we give a precise definition of 'morally good' in terms of total hedonic negative utilitarianism and then argue about whether or not Friendly AI is likely to be morally good according to that definition of 'morally good'.

Another method, what I call 'empathic metaethics', admits that we don't really know what we mean by terms like 'morally good', because whenever we propose a definition that seems to fit our intuitions about our intended meaning for the term, somebody else comes up with counterexamples. Those doing austere metaethics probably admit this, too, but they forge ahead and say "Okay, let's work within the framework of a stipulated definition anyway, so we can answer some questions." Those doing empathic metaethics refuse to accept imperfect stipulated definitions for moral terms and instead work on the project of decoding the cognitive algorithms that generate our concepts like 'moral goodness'. But that project is incomplete, so it leaves us fumbling in the dark as to what is and isn't 'morally good' because we don't know what 'morally good' means, even to ourselves. Still, it can be useful to point out that a given stipulated definition for 'moral goodness' doesn't actually capture our brain's intended meaning for that term.

It can be useful to keep these projects separate. Most moral philosophy I've read combines the two processes, so that arguments against a particular view of morality simultaneously try to show that certain conclusions do not follow from stipulated definitions of moral terms while also trying to show that the stipulated definitions of moral terms don't capture our brain's concept of 'moral goodness' and other moral terms. (To add to the mess, most moral philosophy I've read tries to walk toward our brain's concepts of moral terms by playing with bizarre sci-fi thought experiments rather than by doing cognitive science.)

One way to proceed is by analyzing the implications for Friendly AI given a framework of total hedonistic non-pinprick negative utilitarianism, while keeping in mind that total hedonistic non-pinprick negative utilitarianism may not capture even what you mean (non-stipulatively) by 'morally good'. Sound good?

Luke
Last edited by lukeprog on Tue Aug 16, 2011 1:04 am, edited 2 times in total.

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Friendly AI and utilitarianism

Postby DanielLC » Mon Aug 15, 2011 11:04 pm

Can you cite where they state that number? I don't remember reading it, and I didn't find it in the FAQ.


I just made up the number, and come to think of it, it probably should have been smaller. I got the impression from Less Wrong that Eliezer at least seems to think it's near certain. For a while I figured he just thought it was probable enough. I can't remember what I saw that made me think he thinks it's near certain.

Alexander Kruel
Posts: 16
Joined: Tue Aug 16, 2011 12:08 pm
Location: Germany
Contact:

Re: Friendly AI and utilitarianism

Postby Alexander Kruel » Tue Aug 16, 2011 2:20 pm

DanielLC wrote:I just made up the number, and come to think of it, it probably should have been smaller.


Here and here are the only numerical probability estimates of risks from AI given by Eliezer Yudkowsky (SIAI) that I know of:

Eliezer Yudkowsky wrote:...I don’t think the odds of us being wiped out by badly done AI are small. I think they’re easily larger than 10%. And if you can carry a qualitative argument that the probability is under, say, 1%, then that means AI is probably the wrong use of marginal resources...


Eliezer Yudkowsky wrote:Existential risks from AI are not under 5%. If anyone claims they are, that is, in emotional practice, an instant-win knockdown argument unless countered; it should be countered directly and aggressively, not weakly deflected.


I think that what the Singularity Institute for Artificial Intelligence (SIAI) actually means by "risks from AI" is explosive recursive self-improvement (FOOM) and therefore believe that Eliezer Yudkowsky means that the probability of a negative FOOM event is not under 5% and easily larger than 10%.

I recently asked lesswrong and a few artificial general intelligence researchers for their opinion on risks from AI. Follow the links for more information.

My personal estimates are not based on extensive research or contemplation and are very volatile. Although I believe that FOOM is a possibility with a probability that might be as high as 30%, I refuse to take action at this point and concentrate on acquiring the educational background that is necessary to evaluate the available evidence to arrive at a judgement with smaller error bars.

Nevertheless, ever since I came across the lesswrong community I have perceived them to be overconfident and to some degree overly committed to the cause of mitigating risks from AI. I got into quite a few discussions and thought a little bit about the possibility that the probability of risks from AI might actually be much lower, or that the methods used to estimate the probability are flawed. That is the reason why many of my current pronouncements regarding that topic are negative overall; I simply perceive there to be a lack of skepticism. Yet I nonetheless believe that the SIAI does important work and should be supported. I generally agree with 99.9% of what the lesswrong community and the SIAI believe.

If you keep the above in the back of your mind and want to read up on some of the possible problems check out the following posts I wrote:

1. GiveWell, the SIAI and risks from AI
2. Why I am skeptical of risks from AI
3. Objections to Coherent Extrapolated Volition
4. Open Problems in Ethics and Rationality

(Post #4 highlights some general problems that I'm currently unable to fathom and that make me reluctant to put more weight on reflective rationality rather than my intuition when it comes to charitable giving.)

Also see:

SIAI’s Short-Term Research Program

In a comment on the above post Luke wrote, "...the most exciting developments in this space in years (to my knowledge) are happening right now...Stay tuned." -- we'll see :roll:

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Friendly AI and utilitarianism

Postby DanielLC » Tue Aug 16, 2011 5:23 pm

Does foom just mean an unfriendly AI? I was referring to any AI. I don't see why an AI would necessarily be able to improve itself consistently until it becomes superintelligent.

Alexander Kruel
Posts: 16
Joined: Tue Aug 16, 2011 12:08 pm
Location: Germany
Contact:

Re: Friendly AI and utilitarianism

Postby Alexander Kruel » Tue Aug 16, 2011 5:39 pm

DanielLC wrote:Does foom just mean an unfriendly AI? I was referring to any AI.


FOOM just stands for very fast recursive self-improvement. The Singularity Institute actually tries to make an AI undergo explosive recursive self-improvement to take over the universe according to a mathematically precise and binding definition of human-"friendliness", before another group can launch its unfriendly AI and burn the cosmic commons.

DanielLC wrote:I don't see why an AI would necessarily be able to self-improve itself consistently until it becomes super-intelligent.


They basically argue by definition here: according to the SIAI, any artificial general intelligence is by definition capable of recursive self-improvement; otherwise it is just a narrow AI. This is what Ben Goertzel calls the "Scary Idea".

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Friendly AI and utilitarianism

Postby DanielLC » Tue Aug 16, 2011 7:35 pm

So, a human-level AI that can no longer self-improve significantly is just a narrow AI? An AI that self-improves to the point that it can match a small country before petering out of improvements is just a narrow AI?

Alexander Kruel
Posts: 16
Joined: Tue Aug 16, 2011 12:08 pm
Location: Germany
Contact:

Re: Friendly AI and utilitarianism

Postby Alexander Kruel » Tue Aug 16, 2011 7:50 pm

DanielLC wrote:So, a human-level AI that can no longer self-improve significantly is just a narrow AI? An AI that self-improves to the point that it can match a small country before petering out of improvements is just a narrow AI?


They believe that intelligence is maximally instrumentally useful in the realization of almost any terminal goal an AI might be equipped with. Consequently, almost any AI will seek to improve its intelligence until it hits diminishing returns, which won't happen until it has reached vastly superhuman intelligence.

Any AI that does not fit this criterion is either deliberately designed to improve slowly (or not at all), or is not a general intelligence. The latter is more probable than the former, because the parties most likely to design the first artificial general intelligence will be either corporations or the military, both of which are interested in maximizing performance on certain tasks. Consequently, the first artificial general intelligence is unlikely to be deliberately slowed down. And even if it is, it might just bypass such scope boundaries if they are not part of its utility function.

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Friendly AI and utilitarianism

Postby DanielLC » Tue Aug 16, 2011 9:26 pm

...which won't happen until it has reached vastly superhuman intelligence.


This is my problem. How do you know when it will hit diminishing returns?

Alexander Kruel
Posts: 16
Joined: Tue Aug 16, 2011 12:08 pm
Location: Germany
Contact:

Re: Friendly AI and utilitarianism

Postby Alexander Kruel » Tue Aug 16, 2011 9:59 pm

DanielLC wrote:
...which won't happen until it has reached vastly superhuman intelligence.


This is my problem. How do you know when it will hit diminishing returns?


That is a problem indeed; I wrote a bit about it here. The important question you have to ask is how likely it is that the AI won't hit diminishing returns before reaching superhuman intelligence.

Jason Kilwala
Posts: 8
Joined: Thu May 13, 2010 11:39 pm

Re: Friendly AI and utilitarianism

Postby Jason Kilwala » Tue Aug 16, 2011 10:36 pm

For the sake of this discussion, let's assume negative utilitarianism, because that's what I would default to if I had to make a decision now. (This is non-pinprick negative utilitarianism. Suffering only starts to lexically override pleasure when it becomes as bad as torture or being eaten alive.)


Note that this implies that (barring consideration of preexisting animals in space) annihilation is optimal, on account of existence giving rise to a nonzero probability of some occurrence of torture. This seems to be a very strong statement.
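The lexical-override structure behind this point can be sketched as a toy comparison. This is my own formalization, not Brian's, and the numbers are arbitrary: under non-pinprick negative utilitarianism, expected extreme suffering is compared first, and pleasure only breaks ties.

```python
# Lexical (non-pinprick negative utilitarian) ranking of outcomes:
# compare expected extreme suffering first; expected pleasure only matters
# when the suffering terms are equal. Python compares tuples
# lexicographically, so higher tuples are better.

def lex_value(p_torture, torture_badness, pleasure):
    return (-p_torture * torture_badness, pleasure)

flourishing_future = lex_value(p_torture=1e-6, torture_badness=1e9, pleasure=1e12)
annihilation = lex_value(p_torture=0.0, torture_badness=0.0, pleasure=0.0)

# Any nonzero probability of torture loses to the empty outcome,
# no matter how much pleasure accompanies it.
assert annihilation > flourishing_future
```

The assertion holds because the first tuple element for the flourishing future is negative while annihilation's is zero; the astronomically large pleasure term is never consulted. That is the sense in which Jason's "very strong statement" follows directly from the lexical structure.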

Things become more murky when we think about simulated animals that don't live in the real world. If you constructed a cell-for-cell replaced bionic brain, without the attached body, but gave it inputs as though it was acting in the world, then yes, that would count as well. I'm less certain when I consider an electronic brain that approximates the algorithms of animal brains but using a different physical instantiation, e.g., the hardware of digital computers. If, hypothetically, the hedonic experience of animal brains is importantly tied with neural-network calculations, then I'm not sure whether a computer simulating such calculations using floating-point arithmetic (rather than real, physical electrical impulses) would count. If the computer were to use a more different approach (e.g., support vector machines with numerical matrix calculations), my uneasiness increases. And a giant lookup table almost certainly doesn't pass.


Can you say more about your sense that a giant lookup table almost certainly doesn't pass? I'm inclined to agree that a giant lookup table doesn't pass, but apparently with less confidence than you; in particular, I don't know why I feel this way. I guess I associate subjective experience with simultaneity, and it seems like a giant lookup table wouldn't involve simultaneous computations in the way that my brain does. But I have only a very poor understanding of how simultaneous "simultaneous" needs to be, and of what my bottom line here is.

Hedonic Treader
Posts: 328
Joined: Sun Apr 17, 2011 11:06 am

Re: Friendly AI and utilitarianism

Postby Hedonic Treader » Wed Aug 17, 2011 8:47 am

a giant lookup table almost certainly doesn't pass.

I think smbc solved this one today. 8-)

Kaj Sotala
Posts: 1
Joined: Wed Aug 17, 2011 8:53 am

Re: Friendly AI and utilitarianism

Postby Kaj Sotala » Wed Aug 17, 2011 9:05 am

Alexander Kruel wrote:I think that what the Singularity Institute for Artificial Intelligence (SIAI) actually means by "risks from AI" is explosive recursive self-improvement (FOOM) and therefore believe that Eliezer Yudkowsky means that the probability of a negative FOOM event is not under 5% and easily larger than 10%.

(...)

My personal estimates are not based on extensive research or contemplation and are very volatile. Although I believe that FOOM is a possibility with a probability that might be as high as 30%, I refuse to take action at this point and concentrate on acquiring the educational background that is necessary to evaluate the available evidence to arrive at a judgement with smaller error bars.


I believe your excessive focus on criticism of FOOM, as well as the excessive focus of people in general on FOOM, is somewhat of a red herring. I currently find AI co-operative advantages to be a threat that is much less speculative than FOOM, yet it's something that would alone be enough to decisively tilt the scales in favor of AI.

