Sentience and brain size

"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" - Jeremy Bentham

Sentience and brain size

Post by Brian Tomasik » Tue May 05, 2009 3:39 am

Edit on 4 Aug. 2013:
My current thoughts on this topic: "Is Brain Size Morally Relevant?"

Edit on 23 June 2012:
I wrote the following post in May 2009, back when I was confused about consciousness and didn't fully comprehend the reductionist interpretation of it. So forgive some of the language, but most of the content is still sensible and important.

My current position on this topic is as follows. I remain uncertain whether I want to give greater weight to more powerful brains (i.e., ones with greater computational throughput). On the one hand, Bostrom's thought experiment (discussed further below) is compelling. But on the other hand, I have an intuition that what matters most is a single organism's unified experience, and I might bite the bullet and say that if a single mind is divided into two separate minds, then the moral weight has thereby doubled. After all, there's no law of conservation of consciousness analogous to the law of conservation of mass/energy. If the context changes, by converting one unified conscious actor into two, then our assessment can change.

Let me put it another way. It may be that the amount of neural tissue that responds when I prick my finger is more than the amount that responds when a fruit fly is shocked to death. (I don't know if this is true, but I have almost a million times more neurons than a fruit fly.) However, the pinprick is trivial to me, but to the fruit fly, the pain that it endures before death comes is the most overwhelming thing in the world. There's a unified quality to the fruit fly's experience that isn't captured merely by number of neurons, or even a more sophisticated notion of computational horsepower of the brain. See also this discussion about Harry the human and Sam the snail.

Another factor that may be relevant is the "clock speed" of the brain in question. If smaller animals tend to have higher clock speeds (at least within a given class of animals, like mammals), then other things being equal, they would get more weight in our calculations, although it's not clear how big the effect size would be.

In addition, if anthropics are not weighted by brain size, then not giving moral weight to brain size begins to seem more plausible as well.

There are many more subtleties and arguments to bear on this tough question, but I shall leave that to the discussion which follows. So without further ado, here's the original post.


Original piece from May 2009:
--------------------------
Several people have suggested to me the idea that capacity for suffering may vary in rough proportion with -- or at least according to some approximately monotonic function of -- brain size. (I'll use "brain size" as a shorthand term referring to the amount of neural tissue an organism has. Perhaps a more relevant measure, though one for which it's harder to find good statistics, is the amount of neural tissue devoted specifically to producing pain emotions, rather than, say, vision processing or planning.) When I first heard this idea, I found it somewhat surprising. Introspectively, consciousness feels like a binary state: I'm aware and feeling emotions now, but I don't do either to any degree when I'm under general anesthesia. I can't recall feeling "more intensely conscious" at one time rather than another, unless you count the groggy state right before falling asleep. On the other hand, personal introspection doesn't prove much, because during my adult life, I've always had the same brain size, so that even if sentience did vary with brain size, I wouldn't know it. (I might be able to notice the difference if my child self had experienced less intense emotions than my adult self. Introspectively this doesn't feel true, but I don't remember my feelings as a very young child well enough to be sure.)

There do seem to be some good arguments for the size-proportionality position.
  • As the cortical homunculus notion suggests, regions of the body that are more sensitive (e.g., the hands) have far more neurons devoted to them in the brain. My feelings of touch are much more refined (and somewhat more intense) on my hands than on my back. Presumably the same would be true of pain nerves.
  • Painkillers work by reducing the number of pain signals that are produced or the number that actually reach the spinal cord and brain. A smaller number of signals translates into less intense experience.
  • We suspect that other abilities of the brain also vary with its size, notably intelligence. That ability is apparently not binary, despite the fact that it feels somewhat binary to any one individual. (I do have a sense of my intelligence varying from moment to moment, but not drastically.)
However, there are some objections.
  • If sentience scales with brain size, then would men have more intensity of emotion than women, since their brains are larger? And should we worry less about pain in children than adults? Maybe -- politically unpopular ideas needn't be incorrect.
  • Intelligence seems related not to absolute brain size but to relative brain size. Men and women seem to have basically the same IQs even though men have roughly 4 billion more brain cells. Moreover, brain size scales closely with body size (especially lean body mass), so that whales have much bigger brains (4000-7000 g) than humans (1300-1700 g). Indeed, the standard metrics of interest in intelligence studies are not absolute brain size but relative measures such as the brain-to-body mass ratio or the encephalization quotient (EQ). Shouldn't we expect a similar sort of trend for emotional intensity?
  • In his "Toward Welfare Biology," Yew-Kwang Ng proposes the principle that because hedonic experiences are energetically costly, evolution generally endows organisms with the minimal levels of emotion needed to motivate them to action, perhaps to differential degrees depending on the situation. Ng is an economist rather than a biologist, so I'm not sure how realistic this assumption is, but it does seem true that intense emotion is more metabolically taxing -- thus explaining why we're close to hedonic neutrality most of the time. The question for the sentience-scaling view is then why evolution would give larger-brained organisms substantially more intense emotions in order to motivate them to do similar sorts of things (eat, mate, avoid predators). Maybe the reply would be that, because of their greater prevalence of neurons, larger organisms simply experience greater emotional intensity for the same level of metabolic cost? Or maybe larger-brained organisms, being perhaps more intelligent, have a larger number of options available to them and so need a larger number of levels of pleasure and pain for motivating a wider range of possible responses?
  • If consciousness results from implementing the right algorithm, then maybe it doesn't matter exactly how that algorithm is run? This suggests the notion that consciousness is either on or off, at least for serial computers. As an illustration, there are lots of functions for computing factorial(n) -- some fast, some slow, some simple, some complex -- but whether a given function computes factorial(n) is either true or false. It doesn't depend on lines of code or the computational burden of running the code. (On the other hand, the number of instances of factorial(n) that can be run on a given machine does depend on the latter factor. See the sketch after this list.)
  • Certain qualia, such as the redness of red, do seem to be binary -- indeed, that's the whole premise behind David Chalmers's "fading qualia" thought experiment. Might consciousness itself be the same way? This is especially interesting viewed in light of the suggestion that consciousness should be substrate-independent. I'm not sure if qualia can be produced by Turing-machine simulations, but if so, what does "brain size" look like in this context? Of course, we do know that different degrees of pain and pleasure are possible, so this last point may be a red herring.
  • On the painkiller point, it may be that fewer pain signals translates into less pain for a given organism. But maybe what matters are the relative amounts of pain signals vs. other signals the brain receives. Larger brains have to process more total inputs. Maybe a tiny brain receiving only a few pain signals feels subjectively worse than a large brain that, while receiving a larger number of pain signals, is distracted by lots of other information. (The gate control theory of pain may be relevant here?)
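To make the factorial analogy concrete, here is a minimal Python sketch (purely illustrative): two implementations that differ in code and computational cost, yet "computes factorial(n)" is simply true of both.

```python
def factorial_recursive(n):
    """One implementation: simple recursion."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)


def factorial_iterative(n):
    """A different implementation: different code, different cost profile, same function."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result


# Whether each function computes factorial(n) is an all-or-nothing fact about it;
# it doesn't come in degrees with lines of code or running time.
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```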
The question of if and how emotional intensity scales with brain size is, in my view, extremely important, because it affects the urgency we attach to potential suffering by insects in the wild, assuming they can feel pain at all. If, for instance, insect suffering is 1000 times less intense than human suffering, then we should discount the 10^18 insects in the world not only by their probability of sentience (say, 0.1, to use a round number) but also by their reduced emotional intensity if sentient (0.001). In that case, there would be effectively 10^14 human-equivalent insects, much closer to the 10^10 humans that exist.

There's a fundamental problem here, though. A good Bayesian will not pick a point estimate for the ratio of insect sentience to human sentience but will maintain a probability distribution over plausible ratios. In view of the uncertainty on this question, I think it's reasonable to maintain some probability on the hypothesis that insects do suffer about as much as humans. For instance, maybe we assign probability 0.2 to insects being able to suffer at least half as much. But in that case, a lower bound on the "expected ratio" of sentience between insects and humans is 0.2 × 0.5 = 0.1, which implies much more concern for insects than a ratio like 0.001, even if the latter is the value we consider most likely.
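As a worked illustration of both calculations (a toy sketch: the figures are the illustrative ones above, and the 80%-at-0.001 part of the distribution is an assumption added purely for the example):

```python
# Point-estimate discounting from the paragraph above:
insects = 1e18
p_sentient = 0.1           # illustrative probability that insects can suffer at all
intensity_ratio = 0.001    # illustrative intensity relative to a human
print(insects * p_sentient * intensity_ratio)    # 1e+14 human-equivalent insects

# Bayesian version: keep a distribution over the intensity ratio instead of a point.
# Toy distribution: 20% chance the ratio is at least 0.5 (from the text),
# and -- purely for illustration -- 80% chance it is around 0.001.
expected_ratio = 0.2 * 0.5 + 0.8 * 0.001         # >= 0.1, dominated by the 20% tail
print(insects * p_sentient * expected_ratio)     # ~1e+16, a hundred times larger
```

The point estimate and the expectation differ by two orders of magnitude, which is why the distribution, not the most likely value, drives the conclusion.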

Note that if the number of neurons devoted to pain processing is the relevant measure, then the disparity between insects and humans will probably be smaller than straight division based on brain mass would suggest, since I assume humans devote a larger fraction of neurons to non-hedonic brain functions. On the other hand, humans are endothermic and, so a friend tells me, have a larger proportion of synapses than insects. Exactly how to incorporate these factors into a weighting, even if we do apply one, is not obvious.
Last edited by Brian Tomasik on Sat Jul 11, 2009 3:19 pm, edited 5 times in total.


Re: Sentience and brain size

Post by DanielLC » Wed May 06, 2009 12:40 am

I figure pain is what makes you stop trying to do that, and pleasure is what makes you want to do it more. Do insects learn? If not, they can't feel pain or pleasure. What's more, without something like that, there's no way to say what an insect finds painful, what it finds pleasurable, or whether or not their lives are worth living.

Sentience is an approximation of the degree to which one can feel happiness and sadness. It is not binary. The question is, if there's more "mass" getting happy, does that make it happier?

I figure it's all about information. Happiness changes the information generated. With more information content of a mind, there will be more change from happiness, and thus more happiness.

As far as I can figure, qualia should not cause anything. It should thus be impossible to tell if you have qualia. Your insistence on there being qualia is an artifact of the way you think. Even if there somehow is a result of qualia, it should be possible to get the result another way. There should be no way to tell if you're a p-zombie or not. That all just seems so wrong, but no matter how much I think about it, I can't think of any reason it would be wrong, or any logical alternative. I guess that just means I can't contribute meaningfully to whether or not it would be possible to notice if your sentience changes.

I find it very difficult to believe that hedonic experiences are energetically costly, or at least that that's the main reason that beings don't experience more. They don't experience more because, if they did, they wouldn't respond to the environment correctly. They'd change what they do based on their last result too much, when what matters is how it went before that, and before that.


Re: Sentience and brain size

Post by Brian Tomasik » Wed May 06, 2009 1:51 am

I figure pain is what makes you stop trying to do that, and pleasure is what makes you want to do it more.

I would guess it's not quite that simple. I don't think, say, reinforcement learning algorithms induce pleasure and pain in their simulated agents. Similarly, I doubt that plants feel pleasure when they move toward sunlight.

Do insects learn?

Indeed they do. One 1986 paper, "Invertebrate Learning and Memory: From Behavior to Molecules," reviewed studies on organisms like bees, slugs, molluscs, snails, leeches, locusts, and fruit flies, and concluded (pp. 473-76):
The progress achieved over the last 10-15 years in studying a wide variety of forms of learning in simple invertebrate animals is quite striking. There is now no question, for example, that associative learning is a common capacity in several invertebrate species. In fact, the higher-order features of learning seen in some invertebrates (notably bees and Limax) rivals that commonly observed in such star performers in the vertebrate laboratory as pigeons, rats, and rabbits.

[... W]e have reason to hope that the distinction between vertebrate and invertebrate learning and memory is one that will diminish as our understanding of underlying mechanisms increases.
I figure it's all about information. Happiness changes the information generated. With more information content of a mind, there will be more change from happiness, and thus more happiness.

So the idea is that smarter organisms feel more intensity because more "goes on" in their brain?

Viewed from the standpoint of information transmission, one might argue that more neural mass doesn't mean more sentience, because bigger organisms need more tissue in order to send the same basic signal to the brain -- namely, "I'm in pain. Do something."

There should be no way to tell if you're a p-zombie or not.

Does the fact that we both appear to be members of the same species help? It doesn't provide certainty, but since I experience qualia, it seems highly likely that you do too, no?

Your point about energy costs makes sense.


Re: Sentience and brain size

Post by DanielLC » Wed May 06, 2009 4:40 pm

GLaDOS wrote:Although the euthanizing process is remarkably painful, 8 out of 10 Aperture Science engineers believe that the companion cube is most likely incapable of feeling much pain.
I just felt like quoting that.

I don't think, say, reinforcement learning algorithms induce pleasure and pain in their simulated agents.

How would you know? I agree that it's probably more complicated, but I'd guess that said algorithms feel less pleasure and pain.


So the idea is that smarter organisms feel more intensity because more "goes on" in their brain?

Viewed from the standpoint of information transmission, one might argue that more neural mass doesn't mean more sentience, because bigger organisms need more tissue in order to send the same basic signal to the brain -- namely, "I'm in pain. Do something."


It's telling the same message to more brain. Every part of the brain reacts in its own way. Think of it like this: The free market is one giant hive mind. When something bad happens, it feels pain. But it's not the same as just one person feeling pain. Each piece of the market is affected slightly differently. As such, a market containing a million people feels about a million times as much pain as one person.

There should be no way to tell if you're a p-zombie or not.

Let me rephrase this: there should be no way for you to tell if you're a p-zombie or not.


In my psychology class they said that people do not generally respond nearly as well to punishment as to reinforcement. Does that mean that humans feel significantly more happiness than sadness? Does anyone know if other animals respond similarly?

A problem with my Pavlovian idea of happiness is learned helplessness, in which a person doesn't think they have control over their situation and thus doesn't react in the Pavlovian manner, but still shows other signs of pleasure or pain. For example, the person would tell you that they're happy or sad.


Re: Sentience and brain size

Post by Brian Tomasik » Wed May 06, 2009 5:52 pm

How would you know? I agree that it's probably more complicated, but I'd guess that said algorithms feel less pleasure and pain.

If they do feel any amount of pleasure and pain comparable to that of ordinary organisms, then running such simulations (with lots of rewards and no punishment) would be an extraordinarily cheap way to produce utility. I ought to start running some right now with my spare computing cycles! Indeed, we could create a "Happiness@home" project similar to SETI@home in which people run massive numbers of enormously blissful simulated agents.

I'm not being completely facetious here, because I do think that post-humans ought to fill the universe with utilitronium. But I'm guessing that consciousness requires more than, say, a Python object with a field "Happiness = +5," even one that responds to events in a simulated environment. Perhaps you could elaborate on your position here?
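For concreteness, here is a minimal sketch of the sort of trivial construct in question (the class and numbers are made up for illustration, and nothing here is claimed to experience anything):

```python
class SimulatedAgent:
    """A trivial 'agent' with a happiness field -- the sort of construct that
    plausibly falls far short of whatever consciousness actually requires."""

    def __init__(self):
        self.happiness = 0

    def respond(self, event):
        # React to a simulated environment by updating a single number.
        if event == "reward":
            self.happiness += 5


agent = SimulatedAgent()
for _ in range(10 ** 6):        # a very cheap "Happiness@home" run
    agent.respond("reward")
print(agent.happiness)          # 5,000,000 -- but is that 5,000,000 hedons?
```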

As such, a market containing a million people feels about a million times as much pain as one person.

Interesting. And yet, I only feel like I'm one mind. Or is that an illusion? Are there really lots of "minds" in my head that all feel like they're the only one?

In my psychology class they said that people do not generally respond nearly as well to punishment as to reinforcement. Does that mean that humans feel significantly more happiness than sadness?

Do you know what was meant by "responding better" to one than the other? Could that just mean that reinforcement is more effective than punishment at changing behavior (regardless of how it feels subjectively)? I'd be interested to hear more, because I'm not aware of such research. Indeed, the concept of negativity bias would seem to suggest the opposite.

Your learned-helplessness point is well taken.


Possible experiments

Post by Brian Tomasik » Wed May 06, 2009 6:15 pm

It seems that the dependence of emotional intensity on neural mass may be a testable hypothesis, at least in theory. For example, scientists could conceivably take away people's nerve tissue and see whether the intensity of their experiences decreased. (Indeed, maybe one could ask that question of patients who have undergone hemispherectomies.)


Re: Sentience and brain size

Post by DanielLC » Wed May 06, 2009 9:56 pm

Maybe I was a bit too fast saying that sentience is just how much emotion you feel. I figure a person losing brain mass would feel less happiness because there's less of them to feel it, but that wouldn't make them think they have less happiness. The parts you take out would no longer respond to the emotion, but the person wouldn't notice, because those parts aren't around to behave as though the emotion were missing. They just wouldn't be there, and thus would have no effect on how the person feels.

You could give someone a drug to make them feel like time passes slower. (This is a thought experiment. I don't actually know of such a drug.) It wouldn't actually make them think faster. It would just make them think more time has passed. They'd think they feel happy (or sad) longer, and therefore more. Would they actually feel the emotion faster? If so, what if you completely destroyed their ability to judge the passage of time?

If they do feel any amount of pleasure and pain comparable to that of ordinary organisms, then running such simulations (with lots of rewards and no punishment) would be an extraordinarily cheap way to produce utility.

I doubt it's anywhere near what, say, an ant would feel. Possibly on par with an ant, if it's done very well. Also, there's no obvious way to say whether something is more or less like it was before. As a simple example (if you have a background with computers), say you have a program that outputs numbers, and it normally picks around 2. If it then picks -1 and starts picking around 1, it seems like it would be doing something closer to what it did before. On the other hand, if it picks 4294967295 (2^32 - 1) and then starts picking around 1, you'd say that it's doing something further away. The problem here is that the only difference is that the first program uses signed integers and the second uses unsigned integers. Both programs are doing exactly the same thing. The only difference is how you interpret the all-ones bit pattern.
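A short Python illustration of the interpretation point (a sketch using a 32-bit value, matching the 2^32 - 1 example above):

```python
import struct

bits = b"\xff\xff\xff\xff"                  # the same all-ones 32-bit pattern
(as_signed,) = struct.unpack("<i", bits)    # read as a signed int:   -1
(as_unsigned,) = struct.unpack("<I", bits)  # read as an unsigned int: 4294967295
print(as_signed, as_unsigned)

# One physical state, two readings: close to the program's usual outputs under
# one interpretation, astronomically far from them under the other.
```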

Unfortunately, the post-humans won't have much better an ability to deal with this. There is simply no objective way to measure pleasure. They can only guess what it is and measure (or optimize) that.

Interesting. And yet, I only feel like I'm one mind. Or is that an illusion? Are there really lots of "minds" in my head that all feel like they're the only one?

I find it rather hard to believe that qualia can think. You think you're the only mind because of how your brain works. Your conscious mind is probably the only one that even realizes it's a mind at all. Unfortunately, this is all too closely related to that qualia paradox I mentioned earlier.

Of course, you can't feel like you're two minds, as you can only feel your own mind.

Could that just mean that reinforcement is more effective than punishment at changing behavior (regardless of how it feels subjectively)?

Yes. That's what I meant.

My psychology teacher suggested that it was because you feel happy with getting away with something when you're not punished, which, if correct, would mean that it is subjectively not as bad.


Re: Sentience and brain size

Post by Brian Tomasik » Wed May 06, 2009 11:04 pm

I figure a person losing brain mass would feel less happiness because there's less of them to feel it, but that wouldn't make them think they have less happiness.

That sounds somewhat plausible. In that case, the brain-mass hypothesis would be harder to test than I thought.

Both programs are doing exactly the same thing. The only difference is how you interpret the all-ones bit pattern.

Nice example. Related is this quote from Eliezer Yudkowsky, which I referenced in a piece on the hard problem of consciousness:
If you redesigned the brain to represent the intensity of pleasure using IEEE 754 double-precision floating-point numbers, a mere 64 bits would suffice to feel pleasures up to 10^308 hedons... in, um, whatever base you were using. [...]

Now we have lost a bit of fine-tuning by switching to IEEE-standard hedonics. The 64-bit double-precision float has an 11-bit exponent and a 52-bit fractional part (and a 1-bit sign). So we'll only have 52 bits of precision (16 decimal places) with which to represent our pleasures, however great they may be. An original human's orgasm would soon be lost in the rounding error... which raises the question of how we can experience these invisible hedons, when the finite-precision bits are the whole substance of the pleasure.

We also have the odd situation that, starting from 1 hedon, flipping a single bit in your brain can make your life 10^154 times more happy.

And Hell forbid you flip the sign bit. Talk about a need for cosmic ray shielding.

But really - if you're going to go so far as to use imprecise floating-point numbers to represent pleasure, why stop there? Why not move to Knuth's up-arrow notation?

For that matter, IEEE 754 provides special representations for +/- INF, that is to say, positive and negative infinity. What happens if a bit flip makes you experience infinite pleasure? Does that mean you Win The Game?

Now all of these questions I'm asking are in some sense unfair, because right now I don't know exactly what I have to do with any structure of bits in order to turn it into a "subjective experience". Not that this is the right way to phrase the question. It's not like there's a ritual that summons some incredible density of positive qualia that could collapse in its own right and form an epiphenomenal black hole.

But don't laugh - or at least, don't only laugh - because in the long run, these are extremely important questions.
I'm rather skeptical that these suggestions have much to do with qualia at all. Even if the functionalists are right that qualia can be produced by executing the right algorithm on an arbitrary Turing machine (which itself I find dubious), I suspect it might be impossible for someone to specify "amount of happiness / pain" just by encoding a symbolic number. Rather, I'm imagining that one execution of the "simulate happiness" loop produces some fixed amount of experience, and you have to execute the loop lots of times to get lots of experience.

But I really don't know. I agree with Eliezer that "these are extremely important questions"!
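For readers who want to see the bit-flipping arithmetic from the quote above, here is a rough Python sketch (it illustrates only the numerics of IEEE 754 doubles and says nothing about how, or whether, such bits could encode experience):

```python
import struct


def flip_bit(x, bit):
    """Return the float whose IEEE 754 double bit pattern equals x's with one bit flipped."""
    (pattern,) = struct.unpack("<Q", struct.pack("<d", x))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", pattern ^ (1 << bit)))
    return flipped


print(flip_bit(1.0, 61))   # clearing an exponent bit: 1.0 -> ~7.5e-155,
                           #   a change by a factor of roughly 10^154
print(flip_bit(1.0, 62))   # exponent becomes all ones: +inf ("Win The Game")
print(flip_bit(1.0, 63))   # flipping the sign bit: -1.0
```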

Unfortunately, the post-humans won't have much better an ability to deal with this.

So do you think the hard problem is insoluble (or perhaps ill-formed)?


Re: Sentience and brain size

Post by DanielLC » Thu May 07, 2009 4:51 am

I'm rather skeptical that these suggestions have much to do with qualia at all. Even if the functionalists are right that qualia can be produced by executing the right algorithm on an arbitrary Turing machine (which itself I find dubious), I suspect it might be impossible for someone to specify "amount of happiness / pain" just by encoding a symbolic number. Rather, I'm imagining that one execution of the "simulate happiness" loop produces some fixed amount of experience, and you have to execute the loop lots of times to get lots of experience.

You could interpret anything in an infinite number of ways, so I figure the Turing machine would have to be sufficiently simple. Other than that, I find it more likely that people have qualia because of the way they think, not because of some chance physical process. Of course, there's much more to it than a number. Even my idea, which I figure is over-simplified, requires something that continually does things closer to how it's doing them right now. There's no simple way to create ridiculous amounts of happiness with it (unless you count ridiculously fast computers).

So do you think the hard problem is insoluble (or perhaps ill-formed)?

It's insoluble. Even on the off chance that there is some measurable effect of qualia, there's still no way to tell that the qualia is an intermediate step. It would be indistinguishable from whatever is causing the qualia to cause the result directly.


Re: Sentience and brain size

Post by davidpearce » Thu May 07, 2009 8:32 am

Electrode studies suggest large parts of the brain contribute very little to the intensity of consciousness, whereas stimulation of extraordinarily small areas can be agonizingly painful or intensely pleasurable.
Perhaps it's the particular architecture and gene expression profile of a firing nerve cell (and its connectivity) that matters most to the intensity of experience rather than brain size per se.

However, it might be illuminating to conduct a variant of the Wada test (http://en.wikipedia.org/wiki/Wada_test) and ask subjects to report on their subjective intensity of awareness after one cerebral hemisphere is anaesthetized at a time.


Re: Sentience and brain size

Post by Brian Tomasik » Thu May 07, 2009 7:59 pm

Other than that, I find it more likely that people have qualia because of the way they think, not because of some chance physical process.

I guess the question is whether "the way they think" is all that matters for consciousness. If so, and if this thinking process is algorithmic, then it can be implemented on, say, a Lego universal Turing machine, as the functionalists claim.

Regarding "chance physical processes," does whether something happens by chance make a difference? If the population of China just happens to make cell-phone calls corresponding to the algorithm for the way you respond when you stub your toe, does that fail to produce pain? What if the Chinese population sets out to make the phone calls deliberately? These are just general questions for anyone who wants to answer. As I've suggested, I'm skeptical but uncertain about the existence of consciousness in either case.

Perhaps it's the particular architecture and gene expression profile of a firing nerve cell (and its connectivity) that matters most to the intensity of experience rather than brain size per se.

Interesting suggestion. Of course, we might suspect that an organism's number of such firing nerve cells is roughly proportional to total brain size (?), in which case the implications for how much we care about various animals would be roughly the same.


Re: Sentience and brain size

Post by RyanCarey » Mon May 11, 2009 4:22 am

I agree with DanielLC that Pavlovian change is linked to consciousness. The fact that doing things that are evolutionarily bad for us (touching a hot object) tends to hurt and that doing things that are evolutionarily good for us (eating food, sex) tends to feel pleasurable can be no coincidence.

Without asserting that this idea is necessarily true, I'll clarify this idea and explore it a little. What causes consciousness is a change-generator. For us, that means an impulse that moves along a neuron and tunes us to respond to our environment. We're conscious when messages reach our decision-making centres. We consciously experience these messages whether they're true or false. Possibly, this is equivalent to remodelling of neurons.

Sometimes, a decision arises from misunderstanding of the environment: the brain can be conscious of false perception just as it can be conscious of true information. When you touch a hot object, you may feel pain. But pain in the arm can also occur when there's nothing wrong with your arm, like if your pain is referred from a heart attack. You can even experience arm pain after your arm has been amputated.

I would guess it's not quite that simple. I don't think, say, reinforcement learning algorithms induce pleasure and pain in their simulated agents. Similarly, I doubt that plants feel pleasure when they move toward sunlight.
It might depend on whether we interpret such a plant's movement as behaviour or change in behaviour. But I'm not sure I see the problem with seeing plants as conscious. What's really problematic is that so many of the processes that occur inside our human brains are unconscious. I think we need to formulate a theory that explains why we feel what goes on in our cerebral cortex but not the stuff that goes on in the bits of our brain that are close to the spine and that are evolutionarily older.

Interesting. And yet, I only feel like I'm one mind. Or is that an illusion? Are there really lots of "minds" in my head that all feel like they're the only one?
I don't know, but I'll share an analogy: maybe it's kind of like the interaction between magnets. When you put a couple of small magnets next to eachother, they align themselves so that they stick together. Then, their magnetic fields combine and they exert a magnetic force on the things at either end of the newly formed larger magnet.


Re: Sentience and brain size

Post by Brian Tomasik » Mon May 11, 2009 1:55 pm

I agree with DanielLC that Pavlovian change is linked to consciousness. The fact that doing things that are evolutionarily bad for us (touching a hot object) tends to hurt and that doing things that are evolutionarily good for us (eating food, sex) tends to feel pleasurable can be no coincidence.

Need it be Pavlovian change? Either nurture (learning) or nature (genes) can encode our revulsion toward hot objects and attraction toward food and sex. Indeed, I strongly suspect that it's genetic hard-coding at work in each of those examples. Suppose the sharp pain of hot objects was due to classical conditioning rather than evolutionary hard-wiring. Then aversion to hot objects would be learned as a conditioned response in reaction to the association between hot objects (conditioned stimulus) and subsequent damage to the hand (unconditioned stimulus) that produces the unconditioned response of not wanting tissue damage. But organisms don't have to pick up hot objects and observe tissue damage to learn that hot objects hurt.

The pleasure of sex is even more clearly not due to classical conditioning. What would be the unconditioned response that reinforces it? Having a child? (That implies a delay of at least 9 months in the learning process.) What could possibly reinforce sex with condoms?

But I'm not sure I see the problem with seeing plants as conscious.

Interesting -- I think that's a rare position, except among, say, panpsychists. The standard arguments against plant sentience are
  • Plants have no nervous system, and yet the nervous system seems necessary for consciousness in humans.
  • Consciousness evolved to improve organisms' fitness by allowing them to react intelligently to novel situations. Stress serves the purpose of informing an organism about tissue damage and/or motivating it to avoid danger. Plants can't locomote or make decisions and so would seem to derive no evolutionary benefit from consciousness.
I think we need to formulate a theory that explains why we feel what goes on in our cerebral cortex but not the stuff that goes on in the bits of our brain that are close to the spine and that are evolutionarily older.

Yes, that's a fascinating and important question. :P


Re: Sentience and brain size

Post by DanielLC » Mon May 11, 2009 3:00 pm

When I mentioned Pavlov, I was referring to operant conditioning, not classical conditioning. I don't think anybody was confused by this, but I think I should correct it anyway.

Before you touch something hot, you don't see anything wrong with it. After you touch it, you want to avoid touching hot objects. Ergo, touching hot objects generates negative utility.

I suspect that it's not only the cerebral cortex that generates qualia. It's just the only part that generates the qualia of thinking that we have qualia.


Re: Sentience and brain size

Post by RyanCarey » Wed May 13, 2009 7:39 am

I'd misused the phrase "Pavlovian conditioning" too (sorry).

I suppose we have to resolve some features of consciousness that might seem contradictory:
1) We can feel pleasure or pain from events (e.g. sexual activity) the first time that they occur and long before we see their outcomes. In this way, our consciousness predicts the evolutionary favourability of an event really accurately.
2) We can have consciousness of things that aren't really happening. You can experience pain referred from your heart to your arm. You can experience (phantom) pain from an arm that has long been amputated. So our consciousness can sometimes be rather inaccurate at representing what has occurred.

Pavlovian conditioning (association of one event with another) can only explain some consciousness. As Alan explained, we don't learn that sex is enjoyable because a baby comes along afterward each time! More seriously, Pavlovian conditioning can transfer our experience of one event onto another event. But eventually, there must be some event that we were conscious of first. And we need to decide the origin of that consciousness.

Now, the idea that feelings are what guide our behaviour is intuitive. On that view, consciousness is an evolutionary adaptation. This explains 1, and 2 can be described as a misfiring of evolution. However, it must face the objection "but we already understand the behaviour of all of the components of our brain to be determined by their previous physical states. Why include emotion in our model of the brain's operation?".


Re: Sentience and brain size

Post by Brian Tomasik » Wed May 13, 2009 2:48 pm

Ryan, I agree with your explanations of 1 and 2. And your final question is a good one -- indeed, if I understand it correctly, it's precisely the hard problem of consciousness: "Why doesn't all this information-processing go on 'in the dark,' free of any inner feel?" (to quote David Chalmers).


Re: Sentience and brain size

Post by Brian Tomasik » Wed Jul 08, 2009 9:59 pm

A friend of mine responded to the "Towards Welfare Biology"-related point from the original post as follows:
I would just say that hedonic experiences basically ARE patterns of activity propagating through a nervous system in order to change that nervous system's large-scale behavior, both momentary (approach/flight) and in the long term (positive and negative reinforcement, allocation of attention to learning new patterns of whatever level of complexity a given nervous system needs to allocate attention to in order to learn, etc).
I also had the following exchange with him:
[Friend]: How could pain experience possibly not be proportional to brain size? Do only the first set of neurons in a given brain count? If you put many brains parallel to one another in a single skull do the qualia go away?

[Me]: What about the suggestion that conscious experience only requires running the right algorithm, regardless of how much hardware it's run on? That seems implausible, but perhaps not massively so. For instance, what if conscious experience of pain were like opening a text file on an operating system: You can do it just as well on a mobile-phone OS or on Windows Vista, but the computational burden of doing so is very different.

[Friend]: Is opening two copies of a text file on two versions of Vista different from opening one copy on one computer with twice the voltage in its logic gates?

[Me]: I guess it depends on what sense of "different" is relevant to talk about. The former allows two people in two different places to read the file, while the latter doesn't. But maybe at the fundamental physical level, they're identical insofar as they both involve the same sorts of physical manipulations -- just, in the two-copies case, those operations can be more separated in space and time. I guess the question hinges on where exactly the consciousness comes from?
A second friend recommended "reading up on the literature on what it is to implement a computation," including Nick Bostrom's article, "Quantity of Experience: Brain-Duplication and Degrees of Consciousness." That piece includes a thought experiment suggesting that, indeed, one version of Vista running with twice the wire voltage is equivalent to two versions running separately: The basic idea is to imagine gradually splitting the wires of the logic gates down the center with an insulator and then eventually separating the two halves. Bostrom goes on to propose thought experiments suggesting that the number of instances of a conscious experience can be not just one, or two, or ten, but potentially 0.85 or 1.6. These thought experiments are done with computer components, but footnote 8 (p. 189) suggests that the same types of ideas would apply to biological brains-in-vats.

Bostrom replies to my objection that consciousness seems binary (p. 196):
‘‘How can this be the case?’’ one might ask. ‘‘Either the experience occurs or it doesn’t. How can there be a question of quantity, other than all or nothing?’’ But [when we degrade the reliability of a computer implementing a consciousness algorithm] the underlying reality, the system upon which the experience supervenes, does not change abruptly from a condition of implementing the relevant program to a condition of not implementing it. Instead, the supervenience base changes gradually from one condition to the other. It would be arbitrary and implausible to suppose that the phenomenology did not follow a similarly gradual trajectory.
Footnote 11 (p. 196) adds:
The point here is that systems that have associated phenomenal experience can have it in varying amounts or degrees of ‘‘intensity,’’ even when the duration and the qualitative character of the experience does not vary. Moreover, this particular quantity of degree does not come only in integer increments. Formally, this is no more mysterious than the fact that sticks come in different lengths and that length is a continuous variable (at least on the macroscopic scale).
At the end of the paper, Bostrom notes that an alternative response to the "fading qualia" thought experiment (also mentioned in the opening post) is that, as biological neurons are replaced by silicon, the qualia that the mind generates do not fade in a qualitative sense (e.g., red becomes less red) but merely in the fractional degree to which that experience is being implemented.

I didn't see much discussion of Bostrom's paper in the academic literature, though perhaps there are some responses? There's an email-list discussion here.
Last edited by Brian Tomasik on Thu Jul 09, 2009 7:09 pm, edited 2 times in total.


Re: Sentience and brain size

Post by Brian Tomasik » Thu Jul 09, 2009 4:27 am

Felicifia participant EmbraceUnity posted some interesting contributions to this discussion here.


Re: Sentience and brain size

Post by DanielLC » Thu Jul 09, 2009 4:33 am

In response to the idea that it might be true that consciousness is binary and insects are as conscious as humans, it seems at least as likely that consciousness is binary and every tiny piece of the human brain is as conscious as the full human.
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Sentience and brain size

Post by Brian Tomasik » Thu Jul 09, 2009 4:41 am

DanielLC, what if all those little parts of the human brain are not running the full consciousness algorithm? Windows Vista opening a text file consists of a number of function calls to lots of subroutines, but none of those subroutines individually would make the file open.
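A toy sketch of the subroutine analogy (the function names and steps are hypothetical):

```python
def resolve_path(name):
    return "/documents/" + name

def read_bytes(path):
    return b"file contents"        # stand-in for the actual disk read

def decode(raw):
    return raw.decode("utf-8")

def open_text_file(name):
    # Only the composition of all the steps counts as "opening the file";
    # no single subroutine, run on its own, opens anything.
    return decode(read_bytes(resolve_path(name)))

print(open_text_file("note.txt"))
```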

Perhaps this analogy doesn't hold well in practice, because brains -- unlike operating systems on conventional computers -- run in parallel?
