Sentience and brain size

"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?" - Jeremy Bentham
Richard Pearce
Posts: 32
Joined: Mon Sep 20, 2010 3:10 pm

Re: Sentience and brain size

Post by Richard Pearce » Wed Sep 29, 2010 4:01 pm

'We might suspect that an organism's number of such firing nerve cells is roughly proportional to total brain size (?), in which case the implications for how much we care about various animals would be roughly the same' (Alan Dawrst). Do you have evidence to back up this suspicion? Or is your suspicion merely guesswork?
I would tread cautiously on the 'animals feel less pain than humans' route, as it has gathered momentum and is often repeated by people who fail to substantiate the claim.
Here I will say why I doubt that non-human mammals feel less pain and fear than we do. I am recalling this from memory, so forgive any inaccuracies (especially you, David Pearce). David says in either 'The Hedonistic Imperative' or 'The Post-Darwinian Transition' that an organism's capacity to feel pain, anxiety or pleasure relates to the size and density of neurons in the part of the brain responsible for feeling that emotion, in proportion to the organism's body size. He also says that non-human mammals have proportionately larger brain parts that sense fear and anxiety than humans have. If this is correct, then non-human mammals have the capacity to feel more fear and anxiety than humans.
Here I present another reason to doubt that non-human animals feel less pain than humans. Evan Thompson has written a fascinating essay 'Empathy and Consciousness' on mirror neurons and their relationship to empathy in sentient organisms. Here is a link to that essay:

http://www.imprint.co.uk/pdf/Thompson.pdf

His essay explains that our mirror neurons allow us to both learn physical activities by copying others and feel some of the pain that others feel. He says that studies suggest that:

1. the closer another person's likeness to oneself, the more accurately one's mirror neurons can represent that person's actions.
2. the greater another person's likeness to oneself, the more intensely one can feel the pain that they feel.

Does this resonate with anyone's experience? For example, does anyone here sympathise more with someone who looks very much like him or herself, whether male or female? I have met a few women who looked like me and shared some of my idiosyncrasies, and on seeing them I was drawn into feeling what seemed like their pain and pleasure. I felt a more intense bodily and emotional experience of the present, because I was suddenly feeling both my own feelings and, I imagine, theirs almost as intensely as they felt them.
From Thompson's evidence, it also makes sense that identical twins feel intense empathy. Their similarity allows their mirror neurons to create in the mind of one twin the pain, actions, pleasure, etc. of the other. What genetic benefit do mirror neurons confer? They would help organisms to favour exploiting organisms with whom they have a more distant genetic relationship.
Mirror neurons might also explain the regular occurrence of sexual partners looking similar to each other. If two sexual partners are similar, then they will have a child who looks similar to both of them, and they will both care for it intensely. In caring intensely for the child, the male will be more likely to nurture the child instead of philandering. This is an advantage to the mother, whose child is reared by a caring father, and it is also an advantage to the father, who can be more sure that the child is his (either consciously sure in the case of humans, or sure in genetic terms if we personify his genes in evolutionary speak).
So our lack of mirror neurons that accurately represent the pain of non-human animals should make us wary of guessing that they feel less pain than we do. The make-up of our mirror-neuron network makes us feel much of the pain of fellow humans and less of the pain of non-humans.

Richard Pearce
Posts: 32
Joined: Mon Sep 20, 2010 3:10 pm

Re: Sentience and brain size

Post by Richard Pearce » Thu Sep 30, 2010 8:25 am

'I would tread cautiously on the 'animals feel less pain than humans' route, as it has gathered momentum and is often repeated by people who fail to substantiate the claim' (Richard Pearce).
Sorry for the way I expressed that, Alan. I typed it quickly and did not realise it would sound almost threatening. Oops. Please take my sentiment as gentler than the wording above suggests.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Sun Oct 10, 2010 10:55 pm

Thanks for the comments, Richard!
Richard Pearce wrote: Do you have evidence to back up this suspicion? Or is your suspicion merely guesswork?
Since writing that comment, I've realized that the question of "how much pain an organism feels" has no objective answer but depends on how much we want to care about the organism. Here's a section from my "How the Hard Problem Matters" (see the original source for hyperlinks):
•Is degree of pain proportional to the amount of neural matter involved in generating it? Answer: Do we want to care about algorithms run on a greater amount of hardware more than those same algorithms run on less hardware? I think I probably do, especially in light of Nick Bostrom's thought experiments, but I'm not entirely sure -- my intuitions are still fuzzy.

Proportioning concern based on brain size may be a rough heuristic, but I think we should be wary of extending it too far. For instance, suppose certain insects do run algorithms that self-model their own reaction to pain signals in a similar way to how this happens in humans. (See, for instance, pp. 77-81 of Jeffrey A. Lockwood, “The Moral Standing of Insects and the Ethics of Extinction” for a summary of evidence suggesting that some insects may be conscious.) If bees have only 950,000 neurons while humans have 100 billion, should we count human suffering exactly 105,263 times as much as comparable bee suffering? I would argue not, for a few reasons.

◦The relevant figures should not be the number or mass of neurons in the whole brain but only in those parts of the brain that “run the given pieces of pain code.” I would guess that human brains contain a lot more “executable code” corresponding to non-pain brain functions that bees lack than vice versa, so that the proportion of neurons involved in pain production in insects is likely higher.
◦The kludgey human brain, presumably containing significant amounts of “legacy code,” is probably a lot “bulkier” than the more highly optimized bee cognitive architecture. This is no doubt partly because evolution constrained bee brains to run on small amounts of hardware and with low power requirements, in contrast to what massive human brains can do, powered by an endothermic metabolism. Think of the design differences between an operating system for, say, a hearing aid versus a supercomputer. If we care more about the number of instances of an algorithm that are run than about, e.g., the number of CPU instructions executed, the difference between bees and humans shrinks further. (The toy calculation after this list makes these weightings concrete.)
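To make this concrete, here is a toy calculation (a minimal Python sketch) comparing three ways of weighting bee vs. human suffering: whole-brain neuron counts, a hypothetical pain-relevant fraction of each brain, and simply counting one instance of the pain algorithm per organism. The pain-relevant fractions are numbers I have invented purely for illustration, not empirical figures.

# Toy comparison of three weighting schemes discussed above.
# The pain-relevant fractions are invented purely for illustration.
BEE_NEURONS = 950_000
HUMAN_NEURONS = 100_000_000_000

# (a) Naive heuristic: weight suffering by total neuron count.
naive_ratio = HUMAN_NEURONS / BEE_NEURONS                      # ~105,263

# (b) Count only the neurons hypothetically "running the pain code".
pain_fraction = {"human": 0.02, "bee": 0.10}                   # made-up fractions
adjusted_ratio = (HUMAN_NEURONS * pain_fraction["human"]) / (
    BEE_NEURONS * pain_fraction["bee"])                        # ~21,053

# (c) Count instances of the pain algorithm, one per organism.
instance_ratio = 1 / 1                                         # collapses to 1

print(f"(a) whole-brain ratio:      {naive_ratio:,.0f}")
print(f"(b) pain-fraction ratio:    {adjusted_ratio:,.0f}")
print(f"(c) per-organism instances: {instance_ratio:.0f}")

Under (b), the ratio shrinks by whatever factor the pain-relevant fractions differ; under (c), it disappears entirely. The point is only to show how sensitive the conclusion is to which measure we decide to care about.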

These points raise some important general questions: How much extra weight (if any) should we give to brains that contain lots of extra features that aren't used? For instance, if we cared about the number of hearing-aid audio-processing algorithms run, would it matter if the same high-level algorithm were executed on a device using the ADRO operating system versus a high-performance computer running Microsoft Vista? What about an algorithm that uses quicksort vs. one using bubblesort? Obviously these are just computer analogies to what are probably very different wetware operations in biological brains, but the underlying concepts remain. (I should add that current computers don't run self-modeling algorithms anywhere near similar enough to those of conscious animals for me to extend them moral concern.)
Richard Pearce wrote: He also says that non-human mammals have proportionately larger brain parts that sense fear and anxiety than humans have. If this is correct, then non-human mammals have the capacity to feel more fear and anxiety than humans.
A great point -- one I made in the above quotation as well. Of course, I still think the relevant question is probably about absolute rather than relative amounts of neural tissue, but my emotions are still fuzzy here.
Richard Pearce wrote: Does this resonate with anyone's experience? For example, does anyone here sympathise more with someone who looks very much like him or herself, whether male or female? I have met a few women who looked like me and shared some of my idiosyncrasies, and on seeing them I was drawn into feeling what seemed like their pain and pleasure. I felt a more intense bodily and emotional experience of the present, because I was suddenly feeling both my own feelings and, I imagine, theirs almost as intensely as they felt them.
I agree with the mirror-neuron point in general. Indeed, that's the reason I care about, say, rats but not rocks.

Personally, I can't think of examples of animals that I care more about because of similarity to myself (for instance, I would be just as appalled to see Hitler tortured as I would to see a friend tortured), but perhaps this reflects years of cognitive override on my part. Hard to say. I do think I'm rather unusual in this regard.
Richard Pearce wrote: So our lack of mirror neurons that accurately represent the pain of non-human animals should make us wary of guessing that they feel less pain than we do.
Indeed! I couldn't agree more.

EmbraceUnity
Posts: 58
Joined: Thu Jul 09, 2009 12:52 am
Location: USA
Contact:

Re: Sentience and brain size

Post by EmbraceUnity » Thu Feb 09, 2012 8:20 am

What does everyone think of Metcalfe's Law in relation to this question? Is the brain a network? If not, explain how and why it is different, or what causes you to doubt that it is.

https://en.wikipedia.org/wiki/Metcalfe%27s_law
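For concreteness, Metcalfe's law values a network roughly in proportion to the square of the number of nodes, i.e., the number of possible pairwise connections n(n-1)/2. Here is a minimal Python sketch of what that scaling would imply if (purely hypothetically) it were applied to the neuron counts quoted earlier in this thread, compared with simple linear weighting:

# Illustration only: Metcalfe-style (~n^2) scaling vs. linear scaling,
# applied naively to neuron counts quoted earlier in this thread.
def metcalfe_value(n: int) -> int:
    """Number of possible pairwise connections among n nodes."""
    return n * (n - 1) // 2

HUMAN_NEURONS = 100_000_000_000   # ~1e11, figure used earlier in the thread
BEE_NEURONS = 950_000             # figure used earlier in the thread

linear_ratio = HUMAN_NEURONS / BEE_NEURONS
metcalfe_ratio = metcalfe_value(HUMAN_NEURONS) / metcalfe_value(BEE_NEURONS)

print(f"linear (neuron-count) ratio: {linear_ratio:,.0f}")    # ~1.1e5
print(f"Metcalfe (pairwise) ratio:   {metcalfe_ratio:.2e}")   # ~1.1e10

Whether the brain is the kind of network to which this scaling meaningfully applies is exactly the question.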

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Sentience and brain size

Post by DanielLC » Thu Feb 09, 2012 6:46 pm

It's an economic law based on the idea that the value of having a node is proportional to the number of nodes it connects to. Why would it relate to this?

Also, the brain isn't a network like the ones that law is talking about. Each neuron is indirectly connected to every other neuron, but those indirect connections don't matter much. A given neuron can't just pass information to an arbitrary other neuron.
Consequentialism: The belief that doing the right thing makes the world a better place.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Sun Feb 12, 2012 2:34 am

Thanks for the reference, EmbraceUnity.

Certainly the brain is a kind of network, but as DanielLC said, "economic value" for a network is different from the "ethical value" of happiness. The latter is determined by what the neurons are doing, i.e., are they implementing the processes for pleasure-glossing of experience? If they aren't, then whatever connections they have are irrelevant. If they are, then again the number of connections doesn't matter, so long as they have the connections that they need. And the connections in this case provide only instrumental value, not intrinsic value.

Well, at least that's what my intuition says at first glance. However, by the same logic, the amount of neural tissue itself shouldn't matter either, in which case brain size has no relation to sentience. Is that true? I'm not sure, as this thread has shown. So I suppose one could argue for some importance of network connections as one factor determining how much it matters when a brain of a given size suffers.

User avatar
Ruairi
Posts: 385
Joined: Tue May 10, 2011 12:39 pm
Location: Ireland

Re: Sentience and brain size

Post by Ruairi » Sun Feb 12, 2012 11:53 pm

Maybe he means that a more connected brain is more sentient?

Or sorry, maybe you just covered that: is the last part of your post arguing for the first or second view you present?

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Tue Feb 14, 2012 12:23 pm

Ruairi wrote:is the last part of your post arguing for the first or second view you present?
I guess I meant that if you think brain size in general matters at all, then you might care about the number of connections as at least one part of a holistic assessment of which brains are more vs. less sentient. However, I'm not sure if I agree that brain size in general does matter at all (I'm still undecided). I lean toward biting the bullet of a thought experiment that says that if you take the same amount of neural tissue and divide it into two different brains, then you thereby double the total happiness/suffering that is experienced.

At the very least, almost everyone agrees that there's no "law of conservation of sentience" similar to the "law of conservation of mass/energy" because if you arrange formerly non-sentient matter in the right way, it becomes sentient. I say "almost everyone" because panpsychists might remonstrate. :)

User avatar
Pablo Stafforini
Posts: 174
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford
Contact:

Re: Sentience and brain size

Post by Pablo Stafforini » Tue Mar 06, 2012 10:20 am

At the very least, almost everyone agrees that there's no "law of conservation of sentience" similar to the "law of conservation of mass/energy" because if you arrange formerly non-sentient matter in the right way, it becomes sentient. I say "almost everyone" because panpsychists might remonstrate.
Even some panpsychists would agree with that statement. David Pearce, for example, believes that there is more pain in an ordinary painful episode than would be in an aggregate of its constituent pain "micro-qualia". (At least that is how I interpret his position. An alternative interpretation could be that painfulness is not a building block of conscious experience, but that it "emerges" when the constituent micro-qualia are organized in certain ways. The problem with this position, however, is that now this emergence of macro-qualia from micro-qualia becomes as hard to explain as the emergence of mind from matter that posed the original hard problem of consciousness, and that panpsychism was supposed to solve.)

User avatar
Pablo Stafforini
Posts: 174
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford
Contact:

Re: Sentience and brain size

Post by Pablo Stafforini » Tue Mar 06, 2012 11:28 am

Coincidentally, I just stumbled upon a post by Jesper Östman that raises the same objection I hinted at in my parenthetical comment above. Here is the relevant part:
A unified qualia field of, e.g., a red and a blue qualia (two bits of mind-dust) isn't identical to the mere sum of the red and blue qualia. Since there is no identity, the field is a new emergent thing, distinct from the red qualia and the blue qualia. But now the existence of these other things doesn't do anything to explain the character of this new thing. Sure, it would be a law that a unified red-blue field would arise when we have a red qualia, a blue qualia and some further physical conditions. However, this law doesn't seem less arbitrary than a law saying that the purely physical correlates of the qualia and the further physical conditions give rise to a unified red-blue field.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Wed Mar 07, 2012 3:58 am

Thanks, Pablo! I don't know if I fully understood Jesper's point when he said it originally, but in this context it makes perfect sense.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Sat Jun 23, 2012 9:15 pm

From Wikipedia's "Pain in invertebrates":
One suggested reason for rejecting a pain experience in invertebrates is that invertebrate brains are too small. However, brain size does not necessarily equate to complexity of function.[8] Moreover, weight for body-weight, the cephalopod brain is in the same size bracket as the vertebrate brain, smaller than that of birds and mammals, but as big or bigger than most fish brains.[9][10]
The article quotes Charles Darwin:
It is certain that there may be extraordinary activity with an extremely small absolute mass of nervous matter; thus the wonderfully diversified instincts, mental powers, and affections of ants are notorious, yet their cerebral ganglia are not so large as the quarter of a small pin’s head. Under this point of view, the brain of an ant is one of the most marvellous atoms of matter in the world, perhaps more so than the brain of man.

Jonatas
Posts: 4
Joined: Wed Jul 21, 2010 9:35 pm

Re: Sentience and brain size

Post by Jonatas » Sat Nov 17, 2012 6:07 pm

I have some observations about some objections.

1. While men have larger brains, women have a higher neuronal density (their neurons are more tightly packed into a smaller area) and higher connectivity, resulting in similar functionality. I think that a brain's functionality is largely limited by available energy rather than by area.

2. Intelligence is moderately correlated with absolute brain size (about 0.4) among humans, but the causation of intelligence is much more complex than this. There is also the Encephalization Quotient, but it doesn't help much. One could instead count the number of neurons left for cognitive purposes after subtracting those involved with bodily functions, and take into account their degree of connectivity, myelination, etc. Whales and elephants have larger brains, but humans have more neurons than either of them (human brains, like female brains within our species, are much more tightly packed than those of larger animals).

http://www.subjectpool.com/ed_teach/y3p ... igence.pdf

http://www.ploscompbiol.org/article/inf ... bi.1000395

According to Wikipedia, the cingulate cortex seems to be highly involved with feelings of suffering and "unpleasantness" in humans, so number of cortical neurons may actually be a good proxy for capacity for bad feelings.

http://en.wikipedia.org/wiki/Cingulate_cortex

http://en.wikipedia.org/wiki/Suffering# ... psychology

I think that feeling intensity should be proportional to the number of neurons involved in causing bad and good feelings, perhaps together with some relationship to the number of neurons involved in consciousness and the bandwidth between the two, if that is a separate process. To cause superhuman intensities of good feelings, it seems necessary to increase the number of neurons (or related artificial substrates) and possibly the efficiency of their organization. As pointed out, there seems to be a relationship whereby a greater quantity of impulses increases the intensity of feelings in humans. I based the estimated correlation on human suffering's dependence on the cingulate cortex, a region whose size should correlate with the absolute number of cortical neurons, and on the integration of neuronal networks as a degree of consciousness. However, this is still quite imprecise, because people and animals vary in their sensitivity to pain, for example according to their genes. Giulio Tononi's theory of consciousness as integrated information seems to support the quantitative hypothesis.

http://www.biomedcentral.com/1471-2202/5/42/

http://www.youtube.com/watch?v=AgQgfb-HkQk

The first study I linked to in this post has a table estimating the number of cortical neurons in some animals. For instance:

Humans - 11,500,000,000
African elephants - 11,000,000,000
Chimpanzees - 6,200,000,000
Horses - 1,200,000,000
Dogs - 160,000,000
Rats - 15,000,000
Mice - 4,000,000

There are different estimates for the number of cortical neurons in humans. One study estimates the average in humans to be higher, 16,000,000,000. ( http://www.frontiersin.org/Human_Neuros ... .2009/full ). Another estimates the number of cortical neurons in average human brains to be 13,500,000,000. ( http://www.sciencedirect.com/science/ar ... 1305000823 ). According to a third study, the number of cortical neurons in humans ranges from 15 to 31 billion and averages about 21 billion. ( http://www.ncbi.nlm.nih.gov/pubmed/9215725 ).

The difference from mice to humans is quite steep, a 2,875-fold increase, perhaps more than one would intuitively think. Furthermore, brain connectivity and myelination should be taken into account, such that intelligence may actually be a better proxy for feeling capacity than number of neurons.

To be cautious, we may still give a non-negligible probability to the hypothesis that the intensity of feelings has been evolutionarily constrained to a low level, similar between different species, regardless of the number of neurons involved, in order not to overdo its role. But this case seems very improbable to me.

When estimating the capacity of different animals for bad feelings in decision theory, the resulting values of the two hypotheses (relationship and non-relationship) can be combined, weighted by their probabilities. I would assign about a 95% chance to the relationship hypothesis and a 5% chance to the non-relationship hypothesis.

So, for instance, in decision theory I would take the bad feelings of mice to be worth approximately 20 times less than those of humans (0.05). If a linear relationship between number of neurons and feelings were assumed as certain, their feelings would instead be estimated as about 2,875 times less intense.

(0.95 x 4,000,000) + (0.05 x 11,500,000,000) = 578,800,000
578,800,000 / 11,500,000,000 = 0.05
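To spell the weighting out, here is a minimal Python sketch of the calculation above; the probabilities are my assumed priors, and the function name is just for illustration.

# Expected-value weighting described above: 95% to the hypothesis that
# feeling intensity scales linearly with cortical neuron count, 5% to the
# hypothesis that it is the same across species.
P_LINEAR = 0.95
HUMAN_CORTICAL = 11_500_000_000

def relative_intensity(cortical_neurons: int) -> float:
    """Expected feeling intensity relative to a human (human = 1.0)."""
    linear_term = P_LINEAR * (cortical_neurons / HUMAN_CORTICAL)
    flat_term = (1 - P_LINEAR) * 1.0   # equal to humans under the non-relationship hypothesis
    return linear_term + flat_term

mouse = relative_intensity(4_000_000)
print(f"mouse relative weight: {mouse:.3f}")   # ~0.050, i.e., ~20 times less than a human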
Last edited by Jonatas on Sat Nov 17, 2012 11:08 pm, edited 4 times in total.

Jonatas
Posts: 4
Joined: Wed Jul 21, 2010 9:35 pm

Re: Sentience and brain size

Post by Jonatas » Sat Nov 17, 2012 9:12 pm

This article estimates the number of cortical neurons for pigs as 432,000,000. So in decision theory, giving a probability of 95% to the linear correlation hypothesis, pigs' feelings could be considered 12 times less valuable than those of humans, or 26 times less, assuming the linear correlation hypothesis as certain.

http://jeb.biologists.org/content/209/8/1454.abstract

Cows' brains weigh about 440g, compared to about 532g for horses, which have 1,200,000,000 cortical neurons, so cows probably have around 1,000,000,000 cortical neurons. This would mean, in decision theory with a probability of 95% given to the linear correlation hypothesis, that cows' feelings could be considered about 7.5 times less valuable than those of humans, or 11.5 times less valuable if the linear correlation hypothesis is assumed certain.

http://faculty.washington.edu/chudler/facts.html

For birds and fishes, the comparison is probably not as straightforward as between different mammals, because their brain structures and neurons are different. However, one can still make rough estimates.

The brain of a chicken is estimated to weigh about 4g. In comparison, a rat's brain weighs about 2g and has 15,000,000 cortical neurons, giving chickens a figure of about 30,000,000 cortical neurons. However, birds have smaller neurons and a higher brain efficiency per unit weight, so it could be hypothesized that chickens have some 50,000,000 cortical neurons. So, in decision theory, giving a probability of 95% to the linear correlation hypothesis, chickens' feelings could be valued about 18 times less than those of humans, or about 230 times less, assuming a linear relationship between number of cortical neurons and feeling intensity as certain.

I couldn't find data on the number of neurons in fishes' brains, but a fish with a body weight of 1kg should have a central nervous system of about 1g. The brain of a 30g mouse weighs about 0.5g. However, while the telencephalon of a fish occupies an area of about 20% of its central nervous system, the proportion of a mouse's cortex is much higher, about 50%. So I would estimate that a fish of 1kg may have a number of "cortical" neurons comparable to that of a mouse of 30g, and the relation to humans in terms of feelings would be similar to that of mice. The brain of a goldfish weighs about 0.01g, so it should have a number of neurons about 100 times lower than that of a 1kg fish.

The fruit fly seems to have about 100,000 neurons in its whole central nervous system. In comparison, humans have about 100,000,000,000. This would mean that fruit flies could be taken in decision theory to have bad feelings 20 times less intense than those of humans. This figure is driven almost entirely by the 5% chance given to the non-relationship hypothesis (which would suppose that fruit flies feel the same as humans, which seems ludicrous) and is essentially its minimum; under the linear hypothesis alone, the estimate would be that fruit flies have bad feelings 1,000,000 times less intense than those of humans. Perhaps a different estimation in decision theory should be taken for animals below the "20 times less" threshold, because flies couldn't possibly feel the same as humans. I think that there may be a minimum threshold for bad feelings to appear at all, which fruit flies and other insects do not reach. Cockroaches, as one of the bigger insects, have 1,000,000 neurons, 10 times more than fruit flies, but still 100,000 times fewer than humans.
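Applying the same expected-value weighting as in my previous post to the rough figures above (a self-contained Python sketch; the cow and chicken counts are my rough estimates, not measured values):

# Same 95%/5% expected-value weighting, applied to the rough cortical-neuron
# estimates above (human baseline 11.5e9 cortical neurons).
P_LINEAR, HUMAN_CORTICAL = 0.95, 11_500_000_000

def relative_intensity(n: int) -> float:
    return P_LINEAR * n / HUMAN_CORTICAL + (1 - P_LINEAR)

for name, n in {"pig": 432_000_000, "cow": 1_000_000_000,
                "chicken": 50_000_000, "mouse": 4_000_000}.items():
    w = relative_intensity(n)
    print(f"{name:8s} weight {w:.3f}  (~{1 / w:.1f} times less than a human)")
# pig ~12x, cow ~7.5x, chicken ~18x, mouse ~20x less

# Fruit flies, compared on whole-CNS neuron counts (100,000 vs. 1e11):
fly_w = 0.95 * 100_000 / 100_000_000_000 + 0.05
print(f"fruit fly weight {fly_w:.4f}  (~{1 / fly_w:.1f} times less than a human)")
# ~0.0500, i.e., ~20 times less, driven almost entirely by the 5% flat term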

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Thu Nov 22, 2012 9:34 am

Very interesting, Jonatas. Thanks for all the great info! I have more to learn on this topic.

I didn't realize that females have tighter and more connected brains than males, nor that the same is true for humans vs. cetaceans.

It is indeed a strawman to suggest that those who think our concern for suffering should scale with brain complexity believe that the raw number of (cortical or otherwise) neurons is an adequate proxy. However, number of neurons is easier to look up than more advanced measures, and it does at least capture claims that, say, the suffering of small fish matters intrinsically less than that of humans, a stance on which I remain ambivalent.
Jonatas wrote: As pointed out, there seems to be a relationship whereby a greater quantity of impulses increases the intensity of feelings in humans.
Yes, but it seems to me that this is essentially "normalized" per organism. This is the main intuition behind why I feel brain size may not matter ethically.

Harry the human is doing his morning jumping jacks, while listening to music. Suddenly he feels a pain in his knee. The pain comes from nociceptive firings of 500 afferent neurons. At the same time, Harry is enjoying his music and the chemicals that exercise is releasing into his body, so his brain simultaneously generates 500 "I like this" messages. Harry is unsure whether to keep exercising. But after a minute, the nociceptive firings decrease to 50 neurons, so he decides his knee doesn't really hurt anymore. He continues his jumping routine.

Meanwhile, Sam the snail is sexually aroused by an object and is moving toward it. His nervous system generates 5 "I like this" messages. But when he reaches the object, an experimenter applies a shock that generates 50 "ouch" messages. This is the same as the number of "ouch" messages that Harry felt from his knee at the end of the previous example, yet in this case, because the comparison is against only 5 "I like this" messages, Sam wastes no time in recoiling from the object.

Now, we can still debate whether the moral significance of 50 of Harry's "ouch" messages equals that of 50 of Sam's, but I'm pointing out that, to the organisms themselves, they're like night and day. Sam hated the shock much more than Harry hated his diminished knee pain. Sam might describe his experience as one of his most painful in recent memory; Harry might describe his as "something I barely noticed."
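One crude way to formalize this: suppose each organism's felt urgency tracks the proportion of "ouch" messages among all of its affective messages at that moment, rather than the absolute count. (The ratio measure in this Python sketch is a deliberate toy simplification of my intuition, not a claim about how brains actually work.)

# Toy model of the Harry/Sam story: felt urgency as the fraction of an
# organism's current affective signals that are negative.
def felt_urgency(ouch: int, like: int) -> float:
    """Fraction of affective messages that are negative ("ouch") messages."""
    return ouch / (ouch + like)

harry_early = felt_urgency(ouch=500, like=500)   # 0.50 -> torn about continuing
harry_late = felt_urgency(ouch=50, like=500)     # ~0.09 -> "barely noticed"
sam = felt_urgency(ouch=50, like=5)              # ~0.91 -> recoils immediately

print(harry_early, harry_late, sam)

The same 50 "ouch" messages sit at opposite ends of the scale once they're normalized against what else each organism is feeling.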

<EDIT, 12 Sept 2013>
Carl Shulman makes the following comment:
Harry could still make choices (eat this food or not, go here or there) if the intensity of his various pleasures and pains were dialed down by a factor of 10. The main behavioral disruption would be the loss of gradations (some levels of relative importance would have to be dropped or merged, e.g. the smallest pains and pleasures dropping to imperceptibility/non-existence).

But he would be able to remember the more intense experiences when he got 10x the signals and say that those were stronger, richer, more intense, more morally noteworthy.
I find his point about remembering the more intense experiences interesting. I'm not sold on it, but I wouldn't rule it out either.
</EDIT, 12 Sept 2013>
Jonatas wrote: To be cautious, we may still give a non-negligible probability to the hypothesis that the intensity of feelings has been evolutionarily constrained to a low level, similar between different species, regardless of the number of neurons involved, in order not to overdo its role. But this case seems very improbable to me.
At the end of the day, there is no "right answer" to this question. There's no such thing as a "true" intensity of suffering that organisms tap into. That said, we can (and I do) still have moral uncertainty about how we want to apportion our concern depending on various size, connectivity, etc. characteristics of relevant brain regions.

Thanks for all the expected-value estimates for the ratios between various animals and humans. Fascinating! My own probability that the relationship is not linear is much higher than 5% -- maybe ~50%.
Jonatas wrote: The brain of a goldfish weighs about 0.01g
I think you mean 0.01 kg.
Jonatas wrote: Perhaps a different estimation in decision theory should be taken for animals below the "20 times less" threshold, because flies couldn't possibly feel the same as humans.
Haha, I think it's not completely obvious. :) That said, there are two separate issues: (1) Whether insects possess the relevant structures for conscious pain at all. (2) If so, how does the scale of those structures compare with the scale in humans? My probability for (1) is maybe ~40%? My probability for (2) is ~50%; yours is ~5% unless you want to decrease it here.
Jonatas wrote: Cockroaches, as one of the bigger insects
And one of the most likely to be able to experience suffering IMO.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Mon Nov 26, 2012 10:54 am

Jonatas mentioned to me Giulio Tononi's theory of consciousness as something he might write about in an upcoming reply. I hadn't heard of this and so thought I would spend 5 minutes looking up a brief summary. I got a little carried away and spent more like 50 minutes reading about it, but now I have a few comments. Of course, I claim no expertise on this topic, so take everything I say with some grains of sodium chloride.

Christof Koch wrote a summary of Tononi's theory for Scientific American Mind titled "A Theory of Consciousness." As usual, that's sensationalism by the popular press to get people to read the article. (Don't get me wrong -- I love the popular-science press. But that doesn't mean they don't regularly overblow the significance of their stories.)

Tononi's Phi formula is not an explanation of how consciousness works. It's a measure on network systems that describes their information content and connectedness. Phi is a more sophisticated measure than, say, "number of neurons" or "number of synapses," but it's no more fundamental than those. The "number of neurons" of an organism is surely relevant to consciousness, but it's hardly a theory of consciousness. :)

Phi is presumed to be highest for those brain regions that seem most crucial for consciousness in humans, which is great. However, it's not clear that Phi is identical with the features of a mind that we care about. After all, almost any system has nonzero Phi. Do we want to care about almost any system to a nonzero degree? There may be particular, specific things that happen in brains during emotional experiences that are the only features we wish to value. Perhaps these usually accompany high Phi, but they may be a small subset of all possible systems which have high Phi. Work remains to articulate exactly what those features are that we care about. Such work would be aided by a deeper understanding of the mechanisms of conscious experience, in addition to this aggregate measure that seems to be generally correlated with consciousness. (Of course, that aggregate measure is a great tool, but it's far from the whole story.)

In any event, as far as the discussion at hand goes: If we are seeking a measure that captures the intuition that more complex brains deserve more weight, it's possible Phi (or some function of Phi) would be closer than "number of neurons" or "number of synapses" or whatever. However, the moral question still remains whether we want to use that measure, or whether we'd rather treat brains equally based on their belonging to a separate organism. Tononi's measure doesn't have implications for that question (except insofar as learning more about these ideas may mold our underlying moral intuitions).

In his article, Koch conflates Phi with consciousness, but it would be sophistical reasoning to say, "I'm defining Phi as consciousness. Therefore, consciousness is not binary but varies depending on brain complexity. Therefore, this moral question on this Felicifia thread is resolved." The "consciousness" that we care about morally may differ from the "consciousness" that someone defines some measure to be.

Anyway, I really do like this stuff from Tononi, and I like Koch, so thanks, Jonatas, for teaching me about something new. :)

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Tue Nov 27, 2012 10:23 am

Follow-up note: Why don't I like the panpsychist view? Why can't it be the case that Phi actually is the consciousness that we care about, with no extra complications? Well, it could be, and I won't rule it out. But it seems to me that panpsychism doesn't explain anything. Consciousness is not a reified thing; it's not a physical property of the universe that just exists intrinsically. Rather, it's an algorithm that's implemented in specific steps. As Eliezer Yudkowsky has said, if you can't write me the source code for a process, you don't really understand it. Consciousness involves specific things that brains do.

Now, maybe the consciousness algorithm is really general, like basic propagation of any kind of information in any kind of network. If so, then panpsychism would once again hold, because that algorithm is run by all kinds of physical systems. But my hunch is that the algorithm is more specific, and indeed, if we so choose, we can declare that the types of consciousness that we care about are more specific.

It's not incoherent to care about something really simple. We could, for example, decide that consciousness is the number of times that electrons jump energy levels in atoms. If we did so, then indeed consciousness would scale with brain size (and body size). But that measure doesn't capture what moves us when we see an animal writhing in pain. Similarly, it may be that a Phi measure that declares hydrogen atoms as marginally conscious also does not capture what moves us. Or maybe it will once our understanding of neuroscience improves and our intuitions are more refined. We'll see.

Postscript, 18 Mar 2013:
A good way to think about consciousness is to ask, "What process is going on in my brain that makes me think to myself, 'Wow, I'm conscious!'?" The process that gives rise to that feeling and that exclamation is probably pretty close to what we're getting at when we want to point to what we find consciousness to be. But it's more plausible to suppose that this process is a specific series of steps in the brain than that it's a generic information-processing system that spills over into the more specific perception of being aware of one's emotions and sensations. How exactly would the spilling over happen? Well, take whatever that spilling-over process would be, and ask why that can't happen on its own, without the more fundamental form of consciousness kicking off the spilling process. This is very similar to the argument against dualism: Why do we need a separate consciousness stuff if the material brain already does all the work? Well, why do we need a fundamental, panpsychist form of consciousness (Phi or whatever) if the specific steps of thinking that you're conscious could be done anyway?

User avatar
Pablo Stafforini
Posts: 174
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford
Contact:

Re: Sentience and brain size

Post by Pablo Stafforini » Sun Dec 02, 2012 5:26 am

Brian Tomasik wrote:it seems to me that panpsychism doesn't explain anything.
Well, if one is a dualist, panpsychism would explain the puzzling phenomenon that consciousness seems to exist only in certain regions of spacetime. For the panpsychist, there is really no puzzle, since consciousness exists everywhere.
Brian Tomasik wrote:Consciousness is not a reified thing; it's not a physical property of the universe that just exists intrinsically. Rather, it's an algorithm that's implemented in specific steps.
Consciousness may be a "physical property of the universe that just exists intrinsically" and also "an algorithm that's implemented in specific steps." This is precisely the view of David Chalmers. He believes that consciousness is an irreducible aspect of the natural world, but he also believes that there is a systematic connection between the character of a conscious state and the associated pattern of functional organization. As he writes:
If consciousness arises from the physical, in virtue of what sort of physical properties does it arise? Presumably these will be properties that brains can instantiate, but it is not obvious just which properties are the right ones. Some have suggested biochemical properties; some have suggested quantum properties; many have professed uncertainty. A natural suggestion is that consciousness arises in virtue of the functional organization of the brain. On this view, the chemical and indeed the quantum substrate of the brain is irrelevant to the production of consciousness. What counts is the brain's abstract causal organization, an organization that might be realized in many different physical substrates.
Brian Tomasik wrote:It's not incoherent to care about something really simple. We could, for example, decide that consciousness is the number of times that electrons jump energy levels in atoms.
I think we should keep the moral question of what states we ultimately care for distinct from the empirical question of what physical states realize consciousness. Of course, we could simply define 'consciousness' as that which we ultimately care for. But I think that would just introduce more confusion into the discussion. It seems preferable to use the term 'consciousness' in its standard sense (or the more precise sense given to it by philosophers and cognitive scientists), and leave open the question of whether the physical states that realize this property are states for which we ultimately care.

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: Sentience and brain size

Post by Brian Tomasik » Sun Dec 02, 2012 2:46 pm

Cool -- thanks, Pablo!
Pablo Stafforini wrote: Well, if one is a dualist, panpsychism would explain the puzzling phenomenon that consciousness seems to exist only in certain regions of spacetime. For the panpsychist, there is really no puzzle, since consciousness exists everywhere.
Makes sense. But I am not a dualist.
Pablo Stafforini wrote: Consciousness may be a "physical property of the universe that just exists intrinsically" and also "an algorithm that's implemented in specific steps."
I don't understand why one would claim both at the same time. Why on Mars would there be two independent properties of something that are inextricably linked? With the caveat that I don't understand Chalmers's view in detail, I feel like it's time to hire Occam the hedge trimmer.
Pablo Stafforini wrote: It seems preferable to use the term 'consciousness' in its standard sense (or the more precise sense given to it by philosophers and cognitive scientists), and leave open the question of whether the physical states that realize this property are states for which we ultimately care.
Sure. There are already too many meanings of consciousness. It can mean being awake, being aware of one's surroundings, having the "feeling of what it's like" experience, what we care about ethically, etc. I'm fine with whatever definitions we settle on, so long as we avoid confusing the scientific meanings with the ethical ones.

User avatar
Pablo Stafforini
Posts: 174
Joined: Thu Dec 31, 2009 2:07 am
Location: Oxford
Contact:

Re: Sentience and brain size

Post by Pablo Stafforini » Sun Dec 02, 2012 9:53 pm

Brian Tomasik wrote:I don't understand why one would claim both at the same time. Why on Mars would there be two independent properties of something that are inextricably linked? With the caveat that I don't understand Chalmers's view in detail, I feel like it's time to hire Occam the hedge trimmer.
The problem is that you cannot use Occam's razor to deny the existence of something with which you are directly acquainted. Occam's razor says that you shouldn't multiply entities beyond necessity; but in this case recognizing the existence of conscious properties is necessary to do justice to appearances.

Of course, you could argue that we can recognize that consciousness exists, and then claim that consciousness is reducible. Yet Chalmers's position is that no reductive explanation of consciousness can be given (he supports this claim with a battery of sophisticated arguments).

Note, by the way, that denying that there are "two independent properties of something that are inextricably linked" doesn't automatically lead to materialism. One may instead regard physical properties as redundant, and embrace the view that reality is ultimately mental.
Brian Tomasik wrote:Sure. There are already too many meanings of consciousness. It can mean being awake, being aware of one's surroundings, having the "feeling of what it's like" experience, what we care about ethically, etc. I'm fine with whatever definitions we settle on, so long as we avoid confusing the scientific meanings with the ethical ones.
Just to be picky: I don't think there is any "ethical" meaning of 'consciousness'; there is no sense in which 'consciousness' means "what we care about ethically". Moreover, if 'consciousness' had that meaning, it would then be vacuous to say that we care ethically about consciousness. In order for ethical statements of that sort to be non-trivial, the term 'consciousness' in those statements needs to be used in a non-moral sense.
