Sentience and brain size

"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" - Jeremy Bentham

Re: Sentience and brain size

Postby Brian Tomasik » Thu Aug 01, 2013 3:37 am

Yes, that's exactly right. :) I tried to save the argument by saying
Maybe this is like finding that my envelope contains a low-ish amount of money.

in which case the argument is no longer fallacious, except insofar as my prior probabilities for how much money the envelopes should contain are themselves tainted by the kind of reasoning I suggested.


Re: Sentience and brain size

Postby Brian Tomasik » Thu Aug 01, 2013 4:09 am

Comment from Carl Shulman:
Brian:
Harry the human is doing his morning jumping jacks, while listening to music. Suddenly he feels a pain in his knee. The pain comes from nociceptive firings of 500 afferent neurons. At the same time, Harry is enjoying his music and the chemicals that exercise is releasing into his body, so his brain simultaneously generates 500 "I like this" messages. Harry is unsure whether to keep exercising. But after a minute, the nociceptive firings decrease to 50 neurons, so he decides his knee doesn't really hurt anymore. He continues his jumping routine.
Meanwhile, Sam the snail is sexually aroused by an object and is moving toward it. His nervous system generates 5 "I like this" messages. But when he reaches the object, an experimenter applies a shock that generates 50 "ouch" messages. This is the same as the number of "ouch" messages that Harry felt from his knee at the end of the previous example, yet in this case, because the comparison is against only 5 "I like this" messages, Sam wastes no time in recoiling from the object.
Now, we can still debate whether the moral significance of 50 of Harry's and Sam's "ouch" messages is equal, but I'm pointing out that, to the organisms themselves, they're like night and day. Sam hated the shock much more than Harry hated his diminished knee pain. Sam might describe his experience as one of his most painful in recent memory; Harry might describe his as "something I barely noticed."

Carl:
In a split-brain patient each hemisphere can react separately to stimuli with awareness and cognitive processing, with access to half the body (more or less). So each hemisphere gets half as many "ouch" and "liking" messages, but takes them just as seriously. Similarly, each smaller subregion/subsystem is getting only a small number of positive and negative reinforcement signals, scaled down with size (other things equal). For the conditioning/reinforcement of particular learning in a particular system, the messages it is getting are enough to drastically change synapse strength, its behaviors/outputs, and so on.
"Harry might describe his as "something I barely noticed."" This is just limiting yourself to what is accessible for speech. In split-brain patients the left hemisphere doesn't know what the right is experiencing, and confabulates explanations. The split-brain case is easier because the right hemisphere can communicate with us through control of the motor processes of the left half of the body, but other tinier systems would be more voiceless (except through careful experiments or neurological scanning), just as many animals are voiceless.


I replied:
I agree with a lot of your intuitions, but I'm not sure whether I prefer to see consciousness as the uppermost level of unified reflection or as the subsystems. It's not speech specifically, but maybe the part right before speech. Given this view of consciousness, it's more dubious that animals (and especially insects) have it at all.

If the stuff below "almost speech" does matter, then that's a pretty knock-down argument for brain weighting.


Carl then cited the Cartesian-theater confusion. I said that was a good point, but what's weird about Carl's position (with which I sympathize) is that it's not clear where the boundary of the mind lies. Even the body could be seen as part of the mind to some degree, since it interacts with the reinforcement-learning (RL) subsystems. And then even the external environment does somewhat too.

A paragraph from my essay on computations I care about is relevant:
As the homunculus discussion above highlights, there's not a distinct point in the brain that produces consciousness while everything else is non-conscious. Lots of parts work together to do the processes that overall we call consciousness. It does seem clear that some parts are not very important (e.g., much of a person's body, many peripheral nerves, some cortical regions, etc.), but as we keep stripping away things that look non-essential, we risk having nothing left. By way of analogy, I imagine looking in a big box full of stuffing for a tiny ring, only to find that there is no ring and that the stuffing itself was the content of the gift. (In the case of the brain, it's not the stuffing itself that matters but the structure and algorithmic behavior.)


Re: Sentience and brain size

Postby CarlShulman » Thu Aug 01, 2013 6:10 pm

Combined expected value of population count and neuron count, avoiding two-envelope problems
Let's work things out using ex ante priors that avoid the two-envelope problem.

We start off with a prior distribution about the value of population count, each individual organism regardless of size, which we will represent with X.

We also have a prior distribution over the value of some minimum scale of neural structure, which we'll represent with Y.

Now we take a lump of N neural structures, and divvy them up into Z separate organisms, each of which gets N/Z neural structures. How does the expected value change as we move between the extremes of a separate organism for each neural structure and a single big brain?

When Z=1 (one big brain), the expected value is 1*X + N*Y. With Z=N (each neural structure on its own), the expected value is N*X + N*Y.

For N=1,000,000, and X=Y=1, going from a single brain to maximum subdivision takes us from 1,000,001 units of value to 2,000,000. With X=10Y=10, we would go from 1,000,010 units to 11,000,000 units.

If instead we look at slightly bigger organisms at the small end, e.g. Z=N/10, then most of the expected value from population count goes away.

So unless one starts with an extreme ratio of X to Y, or gets very strong additional evidence, expected value considering both population count and neural structure count will track fairly closely with neural count alone.
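
For concreteness, here is a minimal Python sketch of the model above; the names X, Y, Z, N follow the post, and everything else (the function name, the printed cases) is just illustration:

```python
# Sketch of the expected-value model above: a lump of N neural structures
# split into Z organisms, with X the prior value per organism and Y the
# prior value per neural structure. EV(Z) = Z*X + N*Y.

def expected_value(Z, N, X, Y):
    return Z * X + N * Y

N = 1_000_000
for X, Y in [(1, 1), (10, 1)]:
    for Z in [1, N // 10, N]:
        print(f"X={X}, Y={Y}, Z={Z:>9,}: EV = {expected_value(Z, N, X, Y):>12,}")

# X=Y=1:      Z=1 gives 1,000,001;  Z=N gives 2,000,000
# X=10, Y=1:  Z=1 gives 1,000,010;  Z=N gives 11,000,000
```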

Implications of neural count
Wild invertebrates still make up a majority of neural count (although by a much smaller factor than for raw count, a lead that human population expansion could erase if the trends of expanding human population and shrinking invertebrate populations continue somewhat longer), and the suffering of factory-farmed animals still outweighs the pleasure of human carnivory. However, neural count suggests changes of focus: e.g., cows dominate the neural count of farmed land animals whereas chickens dominate the population count, and in neural count the interests of farmed and wild animals are much more in the same ballpark.


Re: Sentience and brain size

Postby Brian Tomasik » Fri Aug 02, 2013 9:20 am

Thanks, Carl!
CarlShulman wrote:So unless one starts with an extreme ratio of X to Y, or gets very strong additional evidence, expected value considering both population count and neural structure count will track fairly closely with neural count alone.

Well, it's not obvious that we should make X and Y in the same ballpark. Naively, it seems like we might want X to be around the magnitude of the average neural count of an organism, so that individual count can compete with neural count for value -- otherwise one will dominate the other by fiat.

The total value can be written as Z*X + (N/Z)*Z*Y, which is proportional to X + (N/Z)*Y. So in order for X and Y to have a shot at competing with one another, we need X to be roughly (N/Z) times as big as Y, i.e., around the same magnitude as the average number of neurons per organism.
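
A quick numerical illustration of that point (a sketch only; the particular values of N, Z, and Y below are arbitrary placeholders, not estimates):

```python
# Per-organism value is proportional to X + (N/Z)*Y, so the population
# term X only competes with the neural term when X is on the order of
# (N/Z)*Y, i.e. around the average number of neurons per organism.

N = 1_000_000   # total neural structures
Z = 1_000       # organisms, so each has N/Z = 1,000 neural structures
Y = 1           # value per neural structure

neural_term = (N // Z) * Y
for X in [1, N // Z, 100 * (N // Z)]:
    print(f"X = {X:>7,}: population term {X:,} vs neural term {neural_term:,}")
# Only when X is near N/Z do the two terms land in the same ballpark.
```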


Re: Sentience and brain size

Postby Brian Tomasik » Fri Aug 02, 2013 10:49 am

The two-elephants problem

We can turn the brain-size question into something structurally like the two-envelopes problem as follows. Suppose naively that we measure brain size by number of neurons. An elephant has 23 billion neurons, compared with a human's 85 billion. Say this is 1/4 as many.

Two elephants and one human are about to be afflicted with temporary pain. There are two envelopes in front of us: One contains a ticket that will let us stop the human from being hurt, and the other will let us stop the two elephants from being hurt. We can only pick one ticket. Which should we take?

Suppose you're a human, and right now, you think there's a 50% chance you weight by brain size and a 50% chance you count each organism equally. If organisms count equally, then helping the elephants saves 2 individuals instead of 1. If you weight by brain size, then helping the elephants is only 2 * (1/4) = 1/2 as worthwhile as helping the human. 50% * 2 + 50% * 1/2 = 5/4 > 1, so you should help the elephants.

Now suppose you're an elephant. If humans are equal to you, then helping the 1 human is only 1/2 as good as helping 2 of your elephant brethren. If you weight by brain size, then helping the human is 4 times as good per organism, or 2 times as good overall, as helping the elephants. 50% * 1/2 + 50% * 2 = 5/4, so you should save the human.
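
Spelled out as a small sketch (the 1/4 brain-size ratio and the 50/50 credences are the ones assumed above; nothing else is implied):

```python
# Two-elephants version of the two-envelope problem: under 50/50 moral
# uncertainty between equal-count weighting and brain-size weighting,
# each party values helping the other side in units of helping its own.

p_equal, p_brain = 0.5, 0.5
ratio = 1 / 4   # elephant brain size relative to a human's, per the post

# Human's perspective: value of saving the 2 elephants (human = 1 unit).
help_elephants = p_equal * 2 + p_brain * (2 * ratio)              # 1.25

# Elephant's perspective: value of saving the 1 human (2 elephants = 1 unit).
help_human = p_equal * 0.5 + p_brain * (1 / ratio) * 0.5          # 1.25

print(help_elephants, help_human)  # both 5/4 > 1: each side would "switch"
```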

Applying a prior distribution

The Bayesian solution to the two-envelopes paradox is to realize that given the value of your current envelope, it's not possible for the other envelope to be equally likely to have 1/2 or 2 times the value of yours, for all possible values of your envelope. As the value in your envelope increases, it becomes more likely you got the bigger of the two.

One simple way to model the situation could be to use a fixed uniform prior distribution: The value in the larger envelope is uniform on [0,1000], which implies that the value in the smaller envelope is uniform on [0, 500]. Suppose you find that your envelope contains an amount in the range 300 +/- 1/2. (I'm using a range here, because my probability distribution is continuous, so the probability of any given point value is 0.) The probability of this is 1/500 if this is the smaller amount or 1/1000 if this is the larger amount. Therefore, if you started with equal priors between getting the smaller and larger amount (which you should have, given the symmetry of envelope picking), the posterior is that you got the smaller envelope with 2/3 probability. Then (2/3)*600 + (1/3)*150 > 300, so you should switch, and this is not fallacious to do.
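
Here is that calculation spelled out (a sketch using the illustrative numbers above: a uniform-on-[0, 1000] prior for the larger amount and an observed value of 300):

```python
# Bayesian resolution of the two-envelope step above: with the larger
# amount uniform on [0, 1000], the smaller is uniform on [0, 500].
# Compare likelihood densities at the observed value, then compute the
# expected value of switching.

observed = 300
density_if_smaller = 1 / 500     # my envelope is the smaller one
density_if_larger = 1 / 1000     # my envelope is the larger one

p_smaller = density_if_smaller / (density_if_smaller + density_if_larger)  # 2/3
p_larger = 1 - p_smaller                                                    # 1/3

ev_switch = p_smaller * (2 * observed) + p_larger * (observed / 2)
print(p_smaller, ev_switch)      # 0.666..., 450.0 > 300, so switching wins
```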

Similar reasoning should work for a more complicated prior distribution.

The human envelope has a low amount?

I have the intuition that what I experience as a human isn't vastly important compared to how important I can imagine things being. In this way, it feels like when I open the envelope for how much a human is worth, I find a smaller amount than I would have expected if this was actually the higher of the envelopes. If this is true, it would not be an error to "switch envelopes," i.e., care more about insects than a brain-size weighting suggests.

A good counterargument to this is that my prior distribution is biased by factors like my past experience with caring a lot about insects, so it seems counterintuitive that the universe could matter so little.

Pascalian wagers in the other direction

Let n be the number of neurons in a single brain and W(n) be the moral weight of that brain's experiences. The debate here is whether to take W(n) = n (brain-size weighting) or W(n) = 1 (individual-count weighting). Of course, there are other options, like W(n) = n^2 or W(n) = 2^n or W(n) = busybeaver(n). The Kolmogorov complexity of the busybeaver weighting doesn't grow in proportion to the size of the busybeaver values, so Pascalian calculations may cause that weighting to dominate. In particular, there's some extremely tiny chance that a mind 100 times as big as mine counts busybeaver((85 billion) * 100) / busybeaver(85 billion) times as much, in which case my decisions would be dominated by the tiniest chance of the biggest possible mind, with everything else not mattering at all by comparison.
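
As a toy illustration of how these weightings scale (tiny neuron counts so the numbers stay printable; busybeaver is omitted because it is uncomputable, and none of these numbers are meant as real estimates):

```python
# How much more a "big" mind counts than a "small" one under the
# weightings mentioned above. With superlinear W, the ratio explodes,
# which is what lets a Pascalian tail dominate the calculation.

weightings = {
    "count (W=1)":    lambda n: 1,
    "linear (W=n)":   lambda n: n,
    "square (W=n^2)": lambda n: n ** 2,
    "exp (W=2^n)":    lambda n: 2 ** n,
}

small, big = 10, 1000   # toy sizes, not the 85-billion-neuron figure
for name, W in weightings.items():
    print(f"{name:>16}: big/small weight ratio = {W(big) / W(small):.3g}")
```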

So the Pascalian wagers don't just go one way, and if I'm not willing to bite the above bullet, it's not clear I can sustain the Pascalian wager on insects either.

However: Moral-uncertainty calculations do not need to conform to Occam's razor. I'm allowed to care about whatever I want however much I want, and I'm not obligated to enforce consistent probabilistic calculations over possible moral values the way I have to over factual variables. So I can keep caring a lot about insects if I want; it's just that I can't ground this in a Pascalian moral-uncertainty wager without accepting the attendant consequences.


Re: Sentience and brain size

Postby CarlShulman » Fri Aug 02, 2013 11:48 am

"I have the intuition that what I experience as a human isn't vastly important compared to how important I can imagine things being."

Could you say more about what the higher alternative could have been? You can't directly evaluate the quantity of your experiences (at issue here), only the quality, and then only that connected to the speech centers. Aren't your judgments of quality calibrated in relative terms, i.e. comparisons of your different experiences at different times?

Also, in thinking about the probabilistic parallel architecture of the human brain, it might be helpful to think about decision-making (at least partially) conducted by a democratic plebiscite, or a randomly selected jury.

"In particular, there's some extremely tiny chance that a mind 100 times as big as mine counts busybeaver((85 billion) * 100) / busybeaver(85 billion) times as much, in which case my decisions would be dominated by the tiniest chance of the biggest possible mind, with everything else not mattering at all by comparison."

It is also possible that all experiences are multiplied similarly to the largest scale structures we could make. In general, all actions and states have some Pascalian tail in expected value through exotic hypotheses. Wedges between the expected values of actions then come from the strength of evidence distinguishing them. Superlinear scale and unbounded growth (like baby universes) are worth investigating but they don't automatically dominate in expected value over more mundane things.

"However: Moral-uncertainty calculations do not need to conform to Occam's razor. I'm allowed to care about whatever I want however much I want, and I'm not obligated to enforce consistent probabilistic calculations over possible moral values the way I have to over factual variables. So I can keep caring a lot about insects if I want; it's just that I can't ground this in a Pascalian moral-uncertainty wager without accepting the attendant consequences."

In some sense this is true: it is certainly physically possible to select do-gooding actions purely based on immediate, unreflective emotional reactions, even ones based on clear empirical, logical, or philosophical mistakes. But abandoning reflection is a very big bullet to bite, one which even moral anti-realist philosophers generally consider to be a mistake.

Consider an extreme case, e.g. someone who takes the view "I want the warm feeling of reducing suffering effectively by torturing squirrels, and I don't care if there are mistakes in the reasoning/information-processing going into that feeling, or whether this actually reduces suffering." There is a nihilistic take on which there's nothing wrong with this stance, but it's normal to say that the expected revision under various levels of idealization is relevant, i.e. that we predict this person would revise their view if better informed, having thought more carefully and precisely, etc. And such a view runs against many of the well-supported heuristics of everyday life, such as that better informed decisions tend to be better.

The reification of current policy conclusions, even if the underlying reasons don't hold up, seems vulnerable to such considerations, and likely to be ultimately considered a mistake by one's own lights in less extreme cases as well.

If one just wants warm feelings for oneself, one can get those in other ways (David Pearce has discussed a lot of them). But if one is pursuing good in the world, then it seems one should try to be sensitive to the world in one's judgments.


Re: Sentience and brain size

Postby Brian Tomasik » Fri Aug 02, 2013 1:43 pm

CarlShulman wrote:Could you say more about what the higher alternative could have been? You can't directly evaluate the quantity of your experiences (at issue here), only the quality, and then only that connected to the speech centers. Aren't your judgments of quality calibrated in relative terms, i.e. comparisons of your different experiences at different times?

Hmm, good points! Yes, it is all relative, and I can't make assessments of quantity, because to me, I am the whole world. That's precisely the intuition behind brain size not mattering: At the highest levels of reporting, at least (e.g., verbalization), you seem to be just one entity, ignoring all the subcomponents, conflicting coalitions, etc. that went on to get to that point.

CarlShulman wrote:Also, in thinking about the probabilistic parallel architecture of the human brain, it might be helpful to think about decision-making (at least partially) conducted by a democratic plebiscite, or a randomly selected jury.

And if you only care about the direction in which the final vote came out, it doesn't matter how many people voted.

CarlShulman wrote:Superlinear scale and unbounded growth (like baby universes) are worth investigating but they don't automatically dominate in expected value over more mundane things.

Why not?

CarlShulman wrote:The reification of current policy conclusions, even if the underlying reasons don't hold up, seems vulnerable to such considerations, and likely to be ultimately considered a mistake by one's own lights in less extreme cases as well.

I somewhat agree. I would just caution that our brains are leaky, so there's not a clear distinction between "learning more" vs. "changing your brain in a way that biases you toward something you used to not agree with." If you try heroin, you're certainly learning what it feels like, but that's not all you're doing -- you're also setting yourself up to be prone to take it again next time. If you "try out" what it feels like to spend money lavishly, you might get used to the lifestyle and then not return to your old ideals. And so on.


Re: Sentience and brain size

Postby CarlShulman » Sat Aug 03, 2013 2:04 am

One of David Pearce's hedonic hotspot facts:

Neuroscientists are already homing in on the twin cubic-millimetre sized “hedonic hotspots” in the ventral pallidum and nucleus accumbens of the rodent brain. The equivalent hedonic hotspots in humans may be as large as a cubic centimeter.


So we have a ~1000x difference in hedonic hotspot scale, and a 600-700x difference in brain size between rats and humans. These hotspots have more neurons and more tissue to condition/reinforce.


Re: Sentience and brain size

Postby Darklight » Fri Feb 14, 2014 1:07 am

Sorry to necro an old thread, but I thought these interesting facts might be somewhat relevant.

According to my old undergrad Biopsychology textbook:

Lack of Clear Cortical Representation of Pain
The second paradox of pain is that it has no obvious cortical representation (Rainville, 2002). Painful stimuli often activate areas of cortex, but the areas of activation have varied greatly from study to study (Apkarian, 1995).

Painful stimuli usually elicit responses in SI and SII. However, removal of SI and SII in humans is not associated with any change in the threshold for pain. Indeed, hemispherectomized patients (those with one cerebral hemisphere removed) can still perceive pain from both sides of their bodies.

The cortical area that has been most frequently linked to the experience of pain is the anterior cingulate cortex (the cortex of the anterior cingulate gyrus; see figure 7.21). For example, using PET, Craig and colleagues (1996) demonstrated increases in anterior cingulate cortex activity when subjects placed a hand on painfully cold bars, painfully hot bars, or even on a series of alternating cool and warm bars, which produce an illusion of painful stimulation.

Evidence suggests that the anterior cingulate cortex is involved in the emotional reaction to pain rather than to the perception of pain itself (Panksepp, 2003; Prince, 2000). For example, prefrontal lobotomy, which damages the anterior cingulate cortex and its connections, typically reduces the emotional reaction to pain without changing the threshold for pain.


According to my old undergrad Sensation & Perception textbook:

COGNITIVE ASPECTS OF PAIN

Pain is actually a subjective state with two distinguishable components: the sensation of the painful source, and the emotion that accompanies it (Melzack and Casey, 1968). The latter aspect of pain can be affected by social and cultural contexts and higher-level cognition. For example, reports of painful strains of the arm from tasks requiring repetitive motion spread rapidly in Australia during the 1980s--like a contagious disease--but they were communicated by workers who did nothing more than talk to one another about their experiences.

We have known for some time that areas S1 and S2 are responsible for the sensory aspects of pain, but researchers have recently been able to use new methods to identify the areas of the brain that correspond to the more cognitive aspects of painful experiences. In one study (Rainville et al., 1997) (Figure 12.11), participants were hypnotized and their hands were placed in lukewarm or very hot water (which activated thermal nociceptors). The participants were sometimes told that the unpleasantness from the water was increasing or decreasing, and their brains were imaged during these periods by positron emission tomography (PET). The primary sensory areas of the cortex, S1 and S2, were activated by the hot water, but the suggestion of greater unpleasantness did not increase their response relative to the suggestion of decreased unpleasantness. In contrast, another area, the anterior cingulate cortex (ACC), did respond differentially to the two hypnotic suggestions, by increasing or decreasing its activity according to the suggestion of increased or decreased unpleasantness. The researchers concluded that the ACC processes the raw sensory data from S1 and S2 in such a way as to produce an emotional response.

At a higher level still, pain can produce what Price (2000) has called "secondary pain affect." This is the emotional response associated with long-term suffering that occurs when painful events are imagined or remembered. For example, cancer patients who face a second round of chemotherapy may remember the first and dread what is forthcoming. This component of pain is associated with the prefrontal cortex, an area concerned with cognition and executive control.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein


Re: Sentience and brain size

Postby Brian Tomasik » Fri Feb 14, 2014 5:39 am

Thanks. :) Good background info. I hope you didn't spend too long typing that up. :P

The question on this thread is whether, if we shrunk those whole mechanisms to 1/2 their size, we would care 1/2 as much. It's clear that some parts of the brain are more important for pain than others, and reducing brain size by removing unrelated parts doesn't matter.


Re: Sentience and brain size

Postby Darklight » Fri Feb 14, 2014 10:29 pm

Thanks. :) Good background info. I hope you didn't spend too long typing that up. :P


You're welcome. Nah, it was pretty quick. I type fast.

The question on this thread is whether, if we shrunk those whole mechanisms to 1/2 their size, we would care 1/2 as much. It's clear that some parts of the brain are more important for pain than others, and reducing brain size by removing unrelated parts doesn't matter.


It seems like it wouldn't matter for our sensory experience of pain, but it may matter for our emotional experience of pain. So I think an important thing to clarify first is which of these two we actually care about. Someone may register pain signals, but might not "feel" like the experience was particularly unpleasant to them. I know from first-hand experience that certain psychiatric medication can do things like "zombify" a person, and make them have generally less affect. It's like experiencing everything with a veil of cotton around your head. Pain and pleasure both matter less to you, and the experience is almost like being less conscious than before. There are also other medications that can have the opposite effect, essentially making you feel more alert and aware of things, and able to process information better. This is actually partly why there's a black market for "study aid" stimulants. In my view, they don't just make you more focused, but actually seem to increase your IQ temporarily. It's also known that sleep deprivation lowers your IQ. Think about how you felt the last time you were really sleep deprived. How conscious would you rate yourself then compared to normal?

Anyways, if just the pure sensation of pain is what matters, then it follows that what we should be counting is the number of nociceptors that an organism has, or the ratio of nociceptors to other neurons.

But if what matters is the emotional experience of pain, then arguably we also have to count the neurons and connections that contribute to this distributed experience.

I'm just thinking of a simple example of how a seemingly unrelated area of the brain might have significant influence on emotional pain. Insects generally don't have a concept of their mother. Thus, they don't feel separation anxiety the way a human infant would. We would feel pain at the death of a loved one mainly because we have the capacity to do a bunch of things, like represent close relationships and attachments to other beings, which itself requires an ability to model such beings; and then there are the faculties required to imagine what it would be like to be without them for the foreseeable future.

Similarly, I would think that animals that can pass the mirror test and are self-aware, like a pig, would suffer more emotional/psychological pain from feeling the threat of existential annihilation, than say, an insect.

Having more neurons and connections and more advanced structures that enable one to cognize at higher levels arguably allows the experience of emotional pain and emotional pleasure to be much richer, because of all these feedbacks and interactions and our imagination. Thus, the "quality" of the pain is greater, even if the quantity of physical pain is not. A human being is able to experience emotional pain in many more ways than an insect.

So I guess it's a matter of whether or not emotional/psychological pain stacks with sensory pain.

There's also the Gate Control Theory of pain (both my textbooks go into some detail about it, but I'm feeling too lazy today to type it all up), which notes that there are ways for the brain to inhibit pain when it needs to do so. One example is soldiers who have reported feeling no pain in the middle of battle despite severe wounds. So it's possible that the pain we actually experience isn't just the nociceptors going "OW OW OW", but a complicated interplay of different emotional conditions.

So to take a stab at answering the question of whether, if we shrunk those whole mechanisms to 1/2 their size, we would care 1/2 as much: I think, assuming size means the number of relevant neurons, the answer is yes, because it is the emotional state or experience of pain, its actual unpleasantness to us, that matters more than the pure sensory experience itself. I imagine this is why insects that lose limbs don't seem to show nearly as much stress as a mammal that loses a limb (not that I've tested this in any way). Both the insect and the mammal have neurons going "OW OW OW OW", but the mammal also has a complex of neurons going "OMG I'M GOING TO DIE!!! A PART OF ME IS MISSING!!! PANIC PANIC PANIC!!!" etc., while the insect is just going "OW OW OW OW OW, MOVE AWAY FROM BAD THING!!!"

So, it kinda depends on what you mean by "care". Do they care less in the sense that they are less conscious and intelligent and therefore can't cognitively comprehend the pain in as many ways as we can? Yes. Do they care less in the sense that they aren't as averse to the negative stimuli as us? Probably not. Given their much simpler world view, it's possible that physical sensations may actually matter more to them because they have little else to care about. So it all boils down to whether you think caring on a higher cognitive level matters or not.

It's also possible that insects, or more likely micro-organisms, are basically Braitenberg vehicles. Which is to say that they aren't sufficiently advanced to reach the threshold of consciousness, and their experience would be more like the way we might experience sleepwalking.

Learning actually doesn't require conscious awareness. The ur-example here is H. M., the man who lost his ability to add things to long-term memory after an invasive surgical procedure. Even though he couldn't consciously recollect new things he learned, he was still somehow able to unconsciously learn certain motor skills. Also, there's the famous anecdote about how he repeatedly met a scientist "for the first time" and shook hands with him the first actual time, but because the scientist had an electric buzzer in his hand, in subsequent "first meetings" H. M. eventually started to refuse to shake hands with the scientist, even though he couldn't explain why. XD

So, what does this all mean? It means that even if insects can feel pain and care about it at a very basic cognitive level, it's probable that their conscious experience is very different from ours, and thus their emotional "feeling" of pain may be difficult to compare to our own. We can't therefore assume that the experience will be the same but more or less intense. They don't have a cerebral cortex (though they have a pallium). They probably can't feel existential angst, or fears that are based on it. I don't think a bee stings an intruder only after a careful deliberation about its sacrifice for the greater good of the hive. It probably does so instinctively, without any consideration of the consequences to itself. Though at the same time, the fact that such insects are social suggests that they have a concept of "other bees who are my friend". That does require some degree of cognitive sophistication. So a bee may have a very simple concept of self, the part of the world that it can control directly, versus other things. But it apparently doesn't reason much about the self, because bees generally are not very self-interested.

I guess where I'm going with this is that, while sensing pain doesn't scale with brain size, experiencing the negative emotions and cognitions associated with sensing pain does. There is probably a threshold of consciousness, but it appears to be very low. Nevertheless, there may be other thresholds that determine the quality or character of conscious experience. I think, for instance, that the gap between mammals and non-mammals is probably greater than many people realize in terms of their cognitive ability to comprehend their happiness and suffering. But at the same time, I do lean towards thinking that even very primitive animals like insects experience some very basic level of consciousness, and that probably has to be weighed in our considerations.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein


Re: Sentience and brain size

Postby Brian Tomasik » Sat Feb 15, 2014 1:11 pm

Once again, I'm back only briefly. Your writing output is impressive, and I don't have time at the moment to keep up with all of it, sadly. So as usual, the fact that I don't reply doesn't mean I wouldn't like to given more time.

I think you should consider getting a website or at least a blog. Your writings are too good to be lost in the depths of Felicifia threads. Let me know what I need to do to persuade you. :)

Also, if you're on Facebook, you are warmly encouraged to add me there.

Darklight wrote:Nah, it was pretty quick. I type fast.

That's good!

Darklight wrote:Someone may register pain signals, but might not "feel" like the experience was particularly unpleasant to them.

Pain asymbolia is often mentioned in this regard. I think most people agree it's the emotional reaction that matters rather than the quale itself.

Darklight wrote:Insects generally don't have a concept of their mother. Thus, they don't feel separation anxiety the way a human infant would.

As Robert Elwood says, fish don't have lungs, but they still breathe. Some insects are extremely social and may even develop behaviors that look like isolation syndromes when separated from the group.

Darklight wrote:We would feel pain at the death of the loved one, mainly because we have the capacity to do a bunch of things, like represent close relationships and attachments to other beings, which itself requires an ability to model such beings

You miss food without being able to model food. Pain upon separation from others could be implemented at a more basic level.

Darklight wrote:Similarly, I would think that animals that can pass the mirror test and are self-aware, like a pig, would suffer more emotional/psychological pain from feeling the threat of existential annihilation, than say, an insect.

I feel like threat of existential annihilation is probably trivial in most animals compared with living in the moment. Maybe this is an extension of my own peculiar psychology, which doesn't fear death.

Darklight wrote:A human being is able to experience emotional pain in many more ways than an insect.

On the flip side, a human may feel less pain at some things than an insect. For instance, when you're sick, you know you'll get better. An animal doesn't know that. When your loved one goes away, you can look forward to his/her return and therefore suffer less in the interim.

Darklight wrote:There's also the Gate Control Theory of pain (both my textbooks go into some detail about it but I'm feeling too lazy today to type it all up) which notes that there are ways for the brain to inhibit pain when it needs to do so. Examples of this include when soldiers have reported feeling no pain in the middle of battle despite severe wounds. So it's possible that the pain that we actually experience isn't just the nociceptors going "OW OW OW", but a complicated interplay of different emotional conditions.

Yeah. :) I mentioned Gate Control Theory briefly in the opening post, though I now think some of what I said there is confused. For my current views on this topic, see "Is Brain Size Morally Relevant?"

Wikipedia says that GCT is outdated, though I'm not sure what has replaced it.

Darklight wrote:I imagine this is why insects that lose limbs don't seem to show nearly as much stress as a mammal that loses a limb (not that I've tested this in any way). Both the insect and the mammal have neurons going "OW OW OW OW", but the mammal also has a complex of neurons going "OMG I'M GOING TO DIE!!! A PART OF ME IS MISSING!!! PANIC PANIC PANIC!!!" etc., while the insect is just going "OW OW OW OW OW, MOVE AWAY FROM BAD THING!!!"

It's plausible evolution hasn't built in the mechanisms for the insect to act upon the broken leg, i.e., avoid using it until it gets better, tend the wounds, etc. It's also possible evolution builds in less deterrence against injury for shorter-lived creatures.

How about a mouse vs. a human? A mouse reacts very badly to a broken leg too. Here it's not clear there's a big difference in behavioral response by the organism itself.

Darklight wrote:Do they care less in the sense that they are less conscious and intelligent and therefore can't cognitively comprehend the pain in as many ways as we can? Yes.

Do you think people with cognitive disabilities feel pain in fewer ways?

Darklight wrote:Do they care less in the sense that they aren't as averse to the negative stimuli as us? Probably not. Given their much simpler world view, it's possible that physical sensations may actually matter more to them because they have little else to care about.

Yes. :)

Darklight wrote:Learning actually doesn't require conscious awareness.

More on that here.

Getting concrete with numbers: How many insects would you let be shocked to death to prevent one human from being shocked to death? (I'm assuming that insects react about as aversively to shocking as humans do, relative to everything they care about.) How about mice vs. humans?


Re: Sentience and brain size

Postby Darklight » Sun Feb 16, 2014 8:43 pm

Once again, I'm back only briefly. Your writing output is impressive, and I don't have time at the moment to keep up with all of it, sadly. So as usual, the fact that I don't reply doesn't mean I wouldn't like to given more time.


No worries. My two skills in life are probably writing and programming, so I've gotten very good at turning my thoughts into words with a minimal amount of editing. I actually probably output a lot more than I should, which means I'm probably procrastinating on my thesis. >_>

I think you should consider getting a website or at least a blog. Your writings are too good to be lost in the depths of Felicifia threads. Let me know what I need to do to persuade you. :)


I actually do have a website, but it's mostly a personal portfolio rather than a proper site about my ideas. I used to have a few personal blogs, but I found that, in my less mature years, I would waste a lot of time posting emo stuff rather than meaningfully productive things, so I eventually shut down my blogs to cull that unfortunate habit. Also, I feel like there's already a million blogs out there, and I would just be adding yet another blog to people's already long reading lists.

One of the reasons why I am choosing to post on Felicifia is because I like the idea of a Utilitarian forum, and want to support it with my writing. Rather than just making yet another website or blog on the Internet, I feel that my time might be better spent supporting existing ones and trying to help sites like Felicifia reach a critical mass of intellectual activity that will produce synergies, for lack of a better word.

As Robert Elwood says, fish don't have lungs, but they still breathe. Some insects are extremely social and may even develop behaviors that look like isolation syndromes when separated from the group.


Indeed, this is why I discuss bees later on. I'm also aware that cockroaches apparently show distress at social isolation.

You miss food without being able to model food. Pain upon separation from others could be implemented at a more basic level.


Again, it's quite possible.

I feel like threat of existential annihilation is probably trivial in most animals compared with living in the moment. Maybe this is an extension of my own peculiar psychology, which doesn't fear death.


It's not -that- peculiar, given that Epicurus was apparently the same way (or at the very least he taught that the fear of death was silly).

On the flip side, a human may feel less pain at some things than an insect. For instance, when you're sick, you know you'll get better. An animal doesn't know that. When your loved one goes away, you can look forward to his/her return and therefore suffer less in the interim.


This does complicate things a bit. I admit, I'm making a big assumption that the added pain and pleasure from all the possible considerations that a human can have outweighs this somehow.

Wikipedia says that GCT is outdated, though I'm not sure what has replaced it.


I'm not aware of any new theory, but maybe my textbooks are out of date? I took those courses in 2007 though...

How about a mouse vs. a human? A mouse reacts very badly to a broken leg too. Here it's not clear there's a big difference in behavioral response by the organism itself.


A mouse is a mammal with a cerebral cortex, so it's not surprising that they would behave very similarly to humans. Most mammals are actually quite intelligent (stupid rats who fail to learn to press the lever for a food pellet in my Experimental Psychology: Learning labs, notwithstanding XD). I would definitely say a mouse probably feels emotional pain more similarly to a human than an insect, probably.

Do you think people with cognitive disabilities feel pain in fewer ways?


It depends on the cognitive disability. If it's comprehensive rather than something specific like left field object blindness, then they probably do feel pain in fewer ways, but the experience of pain itself may still be as intense as with non-impaired humans. There are some very unique disabilities, like Congenital Analgesia, where the person feels no sensory pain. I also think someone who's had a lobotomy, probably doesn't feel as much pain either. Again, it really depends.

More on that here.


I find myself constantly being impressed by just how thoroughly you've researched these things. I apologize if I often go over things that you've already looked at. While I've read a lot of your essays, I'll admit I've more skimmed them for the gist, than given them all the kind of due diligence they deserve.

Getting concrete with numbers: How many insects would you let be shocked to death to prevent one human from being shocked to death? (I'm assuming that insects react about as aversively to shocking as humans do, relative to everything they care about.) How about mice vs. humans?


Right now, I'm not confident enough about my knowledge of the subject to really make such calculations with certainty. On the one hand, I did say I consider humans to be utility monsters, which would suggest the number could be arbitrarily large... but I hesitate to take that stand because it would potentially justify enormous amounts of suffering. So I guess I have to say, I don't know. It depends on whether or not a strict hierarchy of sentience can be justified, or whether the calculation function should weigh each sentient being according to their relative sentience level.

If it were the latter, we could assume that a human being is a million times as sentient as an insect. In that case the number could be a million insects to one human. And if a human being is a hundred times as sentient as a mouse, a hundred mice to one human. But again, I'm not even sure if this can be justified, any more than weighing them equally can be justified. My intuition is that a human should be worth more than a mouse or an insect, but I admit that could be a bias.

On the other hand, if we look at extreme edge cases, I really doubt we should give the same weight we give to humans, mice, or insects to a photodiode, which according to Integrated Information Theory would be the most minimally conscious thing possible. A photodiode doesn't feel pain or pleasure in its "experience" of discriminating between different levels of light. So I'm inclined to think that there are certain thresholds that need to be reached before we start granting "sentient" things moral worth. Thus, it may well matter more what structures are in the "brain" of these entities than the pure number of neurons. It's imaginable, for instance, to create an artificial neural network with billions of neurons that would rival a human brain in size, but with all those neurons used purely for image classification rather than any sort of pain/pleasure evaluation. Indeed, Google recently made a massive network that just classifies cats in YouTube videos. It had more neurons than a snail, but arguably it was less sentient, because all those neurons were devoted to far fewer and simpler structures.

Thus, while the number of neurons is a good approximation in practice, it's only because evolved brain complexity in terms of structures seems to correlate well with number of neurons.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

