Consciousness and tableness

For ethics in the real world - bioethics, law, effective altruist outreach, etc.
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Consciousness and tableness

Post by Brian Tomasik » Wed Jun 06, 2012 5:56 am

One of my friends and I have been having an excellent Facebook discussion that I wanted to share more broadly. I've reproduced the most interesting part.

Friend's statement:
And I still believe Eliezer's position does not make sense, Brian. I also believe you changed the subject (or I didn't make it clear enough). You made the point that "we can care about anything we like" and that every object of our care will seem "equally absurd" from an abstract and objective point of view. I gave arguments against the latter claim but I may grant you the former. (Although I have some reservations there as well, but let's set them aside for the moment.) But how does that address my original point? I wasn't talking about ethics or what we do or should care about at all. My exclusive focus was the metaphysics of consciousness, i.e. the question of what it is in virtue of which it is true that a certain object is conscious. My answer is: qualia. But suppose the answer was: third-person observable internal processes ABC. If we then discovered other beings with the exact same behavioral outputs (i.e. pain-behavior) but with different internal processes XYZ, then it would follow that these other beings cannot be conscious. But that's absurd and arbitrary, for we could just as well have defined the truth-maker of consciousness attributions in terms of the internal processes XYZ (which would imply that the beings containing ABC cannot be conscious). The proposed solution might be a disjunctive definition of consciousness: internal processes ABC or XYZ. But now we can already see what's going on here: We're accepting *behavior* as our ultimate metaphysical criterion of consciousness after all! For we can ask: Why just ABC or XYZ and not DEF and GHI as well? (Suppose DEF is what goes on in stars and GHI what goes on in my cell phone.) Well: behavior!

Now let's jump to ethics. Let's set aside the foundational issues and the question of realism and just note that 1) we're both believers in qualia and 2) we're hedonistic (and negative) utilitarians. *Given that*, we *cannot* say that we can "care about what we like" and we cannot make "closeness to our own minds" our basic and ultimate criterion. The only relevant question is this: Where are there qualia, are they negative/painful, and what can we do to make them positive or to end them. That's what we need to find out and that's our terminal/intrinsic value. "Closeness to my own mind" may be helpful epistemically, inductively (and thus only instrumentally/extrinsically) but it *cannot* be what ultimately matters if we assume qualia and hedonistic utilitarianism. So if I somehow acquired the *certain knowledge* that all beings that are behaviorally and internally-observably similar to me are actually zombies and that there are qualia (constituting subjects of experience) residing in stars and cell phones, then "closeness to me" would become *totally* irrelevant, even instrumentally/extrinsically.
My reply:
"I wasn't talking about ethics or what we do or should care about at all. My exclusive focus was the metaphysics of consciousness, i.e. the question of what it is in virtue of which it is true that a certain object is conscious."

Yeah, well in my mind, these are the same question, because there is no metaphysics of consciousness. Consciousness is whatever we define it to be, and the main motivation for delimiting its boundaries is because we care about things that are conscious and don't care about things that aren't.

"Consciousness" is like "tableness" -- it's a concept, a cluster in thingspace, not a separate metaphysical entity. Is a board with only three legs a table? What if it has a hole in the middle? What if someone primarily uses it to sleep on? What if it's cut in half? What if it's made out of cell phones glued together? What if it's just a drawing on a piece of paper? These questions don't have "real" answers. They depend on how we want to delimit the boundaries of what a "table" is. The same is true for consciousness algorithms.

"We're accepting *behavior* as our ultimate metaphysical criterion of consciousness after all!"

Not necessarily. You could say that because a table can be made of wood or metal, "tableness" should be defined by the disjunction of wood or metal. But then, as in your argument about behavior, the ultimate definition of what's a table turns out to be just whether people use it to put their dinner plates on. But what if people are having a picnic outside and they put their plates on a rock? Is the rock really a table? Some would say no, just based on the "internals" of the object.

We can get all the same confusion that people have over consciousness when we talk about any old concept. There's nothing specially mysterious about consciousness, except the fact that humans don't know very much about it yet.

"*Given that*, we *cannot* say that we can 'care about what we like' and we cannot make 'closeness to our own minds' our basic and ultimate criterion."

We have to make these choices somehow. Hedonistic utilitarianism is underspecified without saying which things count as subjective experience. Suppose we were table-minimizers (aiming to minimize the expected number of tables in the universe). Would we decide to destroy rocks because people might put their picnic plates on them? Or would we say that we know this is a table and we'll focus on eliminating things based on how similar they are to that? Both approaches might be reasonable; there's no objective answer to what's a table and what isn't.

"So if I somehow acquired the *certain knowledge* that all beings that are behaviorally and internally-observably similar to me are actually zombies and that there are qualia (constituting subjects of experience) residing in stars and cell phones, then 'closeness to me' would become *totally* irrelevant, even instrumentally/extrinsically."

There could certainly be discoveries and thought experiments that would shift our intuitions such that we no longer care about brain-like things and instead care about stars and cell phones. But these would be changes in our feelings, not a discovery of a metaphysical property of the world. When we have a Gestalt shift, it's our brain's attitude that changes, not the photons coming off of the page.

Adriano_Mannino
Posts: 3
Joined: Wed Jun 06, 2012 1:11 pm

Re: Consciousness and tableness

Post by Adriano_Mannino » Wed Jun 06, 2012 2:33 pm

Hi Alan, thanks for sharing our discussion.

- Your point, I take it, is that a question is not real if the terms used in its formulation are not clearly (or not sufficiently sharply) defined. Sure, that's trivial.

- It baffles me that you would claim those were the same question. Surely the question of what consciousness is and how it is to be explained is different from the question of whether consciousness is ethically relevant (axiologically, i.e. for the theory of value, and/or deontically, i.e. for the theory of duties or what we have reason to care about and do).

- When I speak of consciousness/qualia/subjective experience, I know what I am talking about by an internal deictic definition. I know what pain-qualia are because I (sometimes) have them - and *this* is what I mean by "being in pain". Algorithms don't enter the definition. I'd know perfectly well what pain is even if I didn't possess even the rudiments of the concept of an algorithm. (Sure, algorithms may play a crucial role in the empirical elaboration of the nature and the causal explanation of pain, but whether they do is an empirical, a posteriori question; it could turn out that they don't, e.g. if it turned out that we were empty or composed 100% of water inside.)

- If there's nothing "specially mysterious" about consciousness, then there is no "hard problem" - there's just the arbitrary third-person definition that we have to fix (as with tableness) and the third-person empirical research to be carried out. But you seem to agree that there is a "hard problem"! So, Mr. Dawrst, which side are you on? (And by the way, we need to have a similar discussion about negative vs. classical utilitarianism too. Your "non-pinprick NU" is incoherent, as is the going back and forth between classical and negative, of course. Lukas (--> "ExtendedCircle") and I have settled for strict NU, and we have three lines of argument for it, one intuitive-coherentist, one theoretical, one empirical. But we're always open to persuasion, we still struggle a bit with the asymmetry that NU presupposes, though we believe we can build a decent case. Let's have this discussion in another thread and invite Pablo too.)

- <<We have to make these choices somehow. Hedonistic utilitarianism is underspecified without saying which things count as subjective experience.>> - Of course, but again, the third-person criteria that we may come to fix are *empirical (and, epistemically: inductive) correlates* of what hedonistic utilitarians care about (agreeable/disagreeable qualia), they are not definitional.

- <<There could certainly be discoveries and thought experiments that would shift our intuitions such that we no longer care about brain-like things and instead care about stars and cell phones. But these would be changes in our feelings, not a discovery of a metaphysical property of the world.>> - This nonplusses me again. Are you saying there is no fact of the matter as to whether stars or cell phones are subjects of experience/bearers of pleasurable or painful qualia?

- Regarding the question of (non-)realism: Isn't it *trivially true* that an Animal Paradise would objectively make for a better world than an Animal Hell regardless of whether there were any intelligent agents around that could morally care (or not care) about it? If positive/negative qualia exist, they do so independently of whether you believe in them or care about them.

Arepo
Site Admin
Posts: 1097
Joined: Sun Oct 05, 2008 10:49 am

Re: Consciousness and tableness

Post by Arepo » Wed Jun 06, 2012 3:15 pm

Welcome to the forum, Adriano :)

I don't have much to contribute here, but I'm looking forward to seeing your defence of NU.
"These were my only good shoes."
"You ought to have put on an old pair, if you wished to go a-diving," said Professor Graham, who had not studied moral philosophy in vain.

Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Consciousness and tableness

Post by Brian Tomasik » Wed Jun 06, 2012 3:19 pm

Adriano_Mannino wrote: thanks for sharing our discussion.
Thanks for having it.
Adriano_Mannino wrote: Surely the question of what consciousness is and how it is to be explained is different from the question of whether consciousness is ethically relevant
Well, I meant that the primary use of a theory of consciousness is for making the ethical decision about whether we care about an organism's emotions. I suppose there might be other uses in cognitive science too, but that's because consciousness is an overloaded term that can mean awareness, wakefulness, or many other things. We should really call these by different names. Maybe call the ethics one "consciousness," call awareness "fronsciousness," call wakefulness "stonsciousness," etc.
Adriano_Mannino wrote: I know what pain-qualia are because I (sometimes) have them - and *this* is what I mean by "being in pain". Algorithms don't enter the definition.
I know what Microsoft Excel is because my computer is running it - and *this* is what I mean by Microsoft Excel. Algorithms don't enter the definition. (Sorry, this doesn't address your point; I just had to say that.)
Adriano_Mannino wrote: I'd know perfectly well what pain is even if I didn't possess even the rudiments of the concept of an algorithm. (Sure, algorithms may play a crucial role in the empirical elaboration of the nature and the causal explanation of pain, but whether they do is an empirical, a posteriori question; it could turn out that they don't, e.g. if it turned out that we were empty or composed 100% of water inside.)
I see where you're coming from. Sure, there's more than one way to refer to what we mean by consciousness. But this doesn't help us when we try to assess whether others are conscious. We know this is a table, but is this?

And I'm not sure if I could comprehend what it would look like for consciousness not to be an algorithm, although I admit that "I can't imagine" philosophy arguments aren't always persuasive. (BTW, I also can't imagine philosophical zombies if I actually think about what that would imply.)
Adriano_Mannino wrote: there's just the arbitrary third-person definition that we have to fix (as with tableness) and the third-person empirical research to be carried out.
Yes. Of course, the first is not necessarily prior to the second. Learning more about how consciousness is implemented may change our intuitions about what we want to regard as conscious.
Adriano_Mannino wrote: But you seem to agree that there is a "hard problem"! So, Mr. Dawrst, which side are you on?
Haha, that's because I used to think there was a hard problem until I changed my mind. Hence the big division in my essay. Maybe I should change to a different title. Suggestions?
Adriano_Mannino wrote: Your "non-pinprick NU" is incoherent
Non-pinprick NU would say there are negative experiences that would outweigh any amount of happiness, but those negative experiences have to be sufficiently bad (like burning at the stake for 2 minutes rather than just a pinprick). This is fully coherent logically speaking. Maybe you mean that it can be torn apart by other intuitions we might hold.
Adriano_Mannino wrote: as is the going back and forth between classical and negative, of course
Yes. :) I make no claims that I'm not inconsistent depending on my mood. Right now I lean toward regular utilitarianism with a very high pain:pleasure exchange rate.
Adriano_Mannino wrote: the third-person criteria that we may come to fix are *empirical (and, epistemically: inductive) correlates* of what hedonistic utilitarians care about (agreeable/disagreeable qualia), they are not definitional.
If we're willing to use deictic definitions, then we can refer to our own subjective experience as "what I experience" or some such. But ultimately we're referring to something concrete, and that something concrete is not metaphysical qualia but just an algorithm/process running on a substrate. I'm not an expert in philosophy of language, but it feels like just saying "what I experience" isn't specific enough to do anything with. Yes, I sort of know what you mean, but we have to tease out the details.
Adriano_Mannino wrote: Are you saying there is no fact of the matter as to whether stars or cell phones are subjects of experience/bearers of pleasurable or painful qualia?
Yes, just as there is no objective fact of the matter about whether my shoe is a table. There's a societal definition that you violate on pain of being misunderstood. And there are things that we care deeply about regarding "happiness/suffering," which we observe and respond to and want to learn more about. But that's all.
Adriano_Mannino wrote: Regarding the question of (non-)realism: Isn't it *trivially true* that an Animal Paradise would objectively make for a better world than an Animal Hell regardless of whether there were any intelligent agents around that could morally care (or not care) about it?
I don't buy the concept of "objectively make for a better world." Isn't it *trivially true* that screegle fibbla ostulax? :)

Adriano_Mannino
Posts: 3
Joined: Wed Jun 06, 2012 1:11 pm

Re: Consciousness and tableness

Post by Adriano_Mannino » Wed Jun 06, 2012 8:37 pm

- If what you say is true and if the ultimate specification of what "matters" is just an issue of semantics or arbitrary definition, then it's totally ridiculous to worry about it. The rational response would be to stop worrying (as we have done in other cases where the issues have turned out to be merely semantic/verbal in nature), and if we can't stop (psychologically), we should take a pill that would enable us to stop. (Would you take that pill?) And there's certainly no reason to design/influence future generations so as to care about consciousness rather than tables.

- Yeah, there's nothing wrong with that analogous sentence about Excel. As you said: You can single out whichever aspect you want in order to define it. My point was that it's absurd to morally care about algorithms per se; there's nothing intrinsically valuable about them. It's also utterly anti-utilitarian (it would be an instance of the "moral mysticism" that Felicifia is trying to get rid of), for it may turn out, empirically, that happiness and suffering have *nothing to do* with algorithms. From which it follows that algorithms *cannot* be part of the primary definition of happiness and suffering. You have not responded to this argument.

- If you think philosophical zombies cannot be coherently imagined, then - by definition - there is no "hard problem". (And as I said, if it's ultimately just third-person accessible semantics plus empirical research, then there is no room for any "hard problem" either.) But you seem to agree there is one. So once again, I don't quite get what your position is and what I'm supposed to be attacking. :) Anyway, I disagree that zombies cannot be coherently imagined. Where do you see a problem? And don't link me to the LW stuff on the matter, I think it's flawed. Which implications do you think are not coherently imaginable?

- <<Learning more about how consciousness is implemented may change our intuitions about what we want to regard as conscious.>> - I disagree. What it may and does change is the *probability* we must rationally assign to certain objects being conscious given our third-person knowledge about them. But it doesn't affect my "intuitions" about what consciousness is *at all*. And "intuition" is a misleading term here, for what I have is first-person knowledge of consciousness/qualia, e.g. of what it's like to (and what it semantically *means to*) be in a disagreeable/painful mental state.

- Ah, sorry, I only just read the passage where you say you've come to reject the existence of a "hard problem". :)

- I don't know whether "non-pinprick NU" is coherent, but it's certainly totally arbitrary (no argument can be given for drawing the line at 2min of burning at the stake), which is reason enough to reject it. If the line is drawn at 2min of burning at the stake, it seems that 1min could be outweighed by a sufficient amount of happiness. But then why can't 2, 3, 4min... too? (Asymptotic accounts lead to inconsistencies as well, cf. http://homepage.mac.com/anorcross/paper ... tology.pdf)

- If it all depends on your mood, then you have no reason to lean either way. (So you would create *any* amount of suffering right now if it came in a package with a sufficiently large amount of happiness?)

- <<But ultimately we're referring to something concrete, and that something concrete is not metaphysical qualia but just an algorithm/process running on a substrate. I'm not an expert in philosophy of language, but it feels like just saying "what I experience" isn't specific enough to do anything with.>> I don't know what you mean. My own qualia (e.g. pain experiences) are the most immediate, concrete and certain things I know of. And what do you mean by the suggestion that there's something "metaphysical" about them? (Cf. "Why physicalism entails panpsychism", by Galen Strawson - the paper is available online, google it.) Why shouldn't I be able to linguistically refer to what I experience, e.g. a stinging and pulsating headache? And when I say "stinging and pulsating headache", what I mean is the (subjective experience of a) *stinging and pulsating headache*, not some neuronal activity nor any algorithm nor what have you. Sure, it may turn out, empirically and a posteriori, that the headache is caused by or is even physically identical to some neuronal activity or algorithm, just as it may or may not turn out that Clark Kent is identical to Superman or water to H2O. And if I care about water, I care about water, not about H2O. And if, on Twin Earth, water turns out to be, chemically, XYZ, then it's still water and I still care about it (and I don't care about it less because it's not H2O, for I care about water and the macro-properties that go into its definition and nothing else).

- <<Yes, just as there is no objective fact of the matter about whether my shoe is a table.>> Yeah, this is funny. It is also totally beside the point. You keep reiterating the triviality that when definitions aren't clear, then questions aren't real. And I keep telling you: My definition e.g. of "pain" or "disagreeable feeling" is perfectly clear. I've experienced a wide variety of them and things of this first-person/subjective kind are exactly what I mean. They may turn out to be caused by or even identical to algorithms - or they may not. Intrinsically, I don't care about algorithms *at all* (how utterly pointless and anti-utilitarian would that be!), I care about subjective experiences and how agreeable/disagreeable they are, for they are what makes anything matter (see below).

- <<I don't buy the concept of "objectively make for a better world." Isn't it *trivially true* that screegle fibbla ostulax?>> - The difference being that "my" concept can be explained and ontologically grounded. Very roughly: There seems to be no fact about the different states that a stone, say, can be in that could meaningfully be described as intrinsically good or bad. But it's obviously possible for different stone-states to be instrumentally good or bad - which indicates that "good/bad" can (descriptively and intelligibly) only mean "good/bad for someone", the different states of this "someone" therefore being the source of ultimate/terminal/intrinsic value (goodness/badness). The notion of a "someone" is in turn to be cashed out in terms of consciousness/subjective experience/qualia. This makes it possible to assert, as an objective fact, that Animal Paradise beats Animal Hell in goodness/badness terms. (Equivalently, we could ask not whether things can be objectively good/bad, but whether there's anything that *matters*, objectively. And here again, if something matters, it must matter *to someone* etc.)

Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: Consciousness and tableness

Post by Brian Tomasik » Fri Jun 08, 2012 3:33 pm

Adriano_Mannino wrote: If what you say is true and if the ultimate specification of what "matters" is just an issue of semantics or arbitrary definition, then it's totally ridiculous to worry about it.
It's not just a definitional dispute. There is real content behind what we care about and what we don't. All I'm saying is that we can use the word "conscious" to mean "the property belonging to the things we want to care about that makes us want to care about them" so that we have a compact way of saying what we're referring to.
Adriano_Mannino wrote: Would you take that pill?
No, because I think these things do matter. Indeed, I think consciousness is the only thing in the universe that matters.
Adriano_Mannino wrote: My point was that it's absurd to morally care about algorithms per se, there's nothing intrinsically valuable about them. It's also utterly anti-utilitarian (it would be an instance of the "moral mysticism" that Felicifia is trying to get rid of), for it may turn out, empirically, that happiness and suffering have *nothing to do* with algorithms.
My claim is that when we just say "I care about happiness/suffering," we aren't being specific enough. Suppose we program a Lego Turing machine to run operations mimicking/simulating what happens in the brain when a person suffers. Do you think it would be bad to run that Lego Turing machine? How do you decide? Well, you decide based on properties of the things you care about. I'm suggesting that one of the key properties of things we know we care about is the algorithm that it runs (probably more than the substrate -- carbon vs. silicon, etc. -- or the speed).

How else do you propose to figure out whether you want to regard a Lego Turing machine as being conscious?
Adriano_Mannino wrote: But you seem to agree there is one.
I revised the title of the essay on my site to reduce confusion. :)
Adriano_Mannino wrote: And don't link me to the LW stuff on the matter, I think it's flawed. Which implications do you think are not coherently imaginable?
Ha, here's the LessWrong link. :) (No, you don't have to read it.)

The fact that we think we can imagine something doesn't mean that thing is physically possible. Indeed, it might not even be logically possible. It might seem intuitively that one should be able to build a rank-order voting system that satisfies all the criteria of Arrow's impossibility theorem, but it's not.

In particular, zombies are impossible if everything that makes up subjective experience consists of physical operations by quarks and the like. The hypothesis that subjective experience is only made of quarks and the like comes from Occam's razor. Even if consciousness involves additional physics that we don't yet know about, it's still physics, so by assumption, it applies to the zombies too.
Adriano_Mannino wrote: If the line is drawn at 2min of burning at the stake, it seems that 1min could be outweighed by a sufficient amount of happiness.
This is the reason I'm probably not a negative utilitarian. However, if I were one, I would prefer 2min NU over pinprick NU on purely intuitive grounds.
Adriano_Mannino wrote: If it all depends on your mood, then you have no reason to lean either way.
I don't follow. All of our moral sentiments always depend on our mood.
Adriano_Mannino wrote: So you would create *any* amount of suffering right now if it came in a package with a sufficiently large amount of happiness?
Yes.
Adriano_Mannino wrote: Why shouldn't I be able to linguistically refer to what I experience, e.g. a stinging and pulsating headache?
You can. But when it comes to the question about Lego Turing machines, you can't just refer to what you're experiencing to answer it. If you want to care about things "like what I'm experiencing," you need to specify what "like" means -- what's the similarity measure?
Adriano_Mannino wrote: And if, on Twin Earth, water turns out to be, chemically, XYZ, then it's still water and I still care about it (and I don't care about it less because it's not H2O, for I care about water and the macro-properties that go into its definition and nothing else).
Yes, exactly. Maybe this will help us see eye to eye. Water is another concept like consciousness and tableness that we have to define the boundaries of. You're concerned about macro-properties rather than chemical composition. For consciousness, I'm more concerned about macro-properties than composition, too.
Adriano_Mannino wrote: And I keep telling you: My definition e.g. of "pain" or "disagreeable feeling" is perfectly clear.
And I keep telling you: It's not specific enough for us to decide if we care about Lego Turing machines. Ultimately, your experience does refer to something third-person, and that's the thing we want to tease out and generalize.

Adriano_Mannino
Posts: 3
Joined: Wed Jun 06, 2012 1:11 pm

Re: Consciousness and tableness

Post by Adriano_Mannino » Sat Jun 09, 2012 10:44 am

- Although you might not admit it, you are in fact claiming that consciousness does not exist (as a phenomenon out there), because the truth-maker of existence claims about consciousness now absurdly depends on what you care about:
<<All I'm saying is that we can use the word "conscious" to mean "the property belonging to the things we want to care about that makes us want to care about them">>: Suppose some "we" cared about pointy rocks due to their property of pointiness. It would then follow that consciousness = pointiness. You can make anything "conscious" by fiat. If that's the case, then consciousness doesn't really exist.

- <<I think consciousness is the only thing in the universe that matters>>: Combine this with your moral non-realism (what matters = what I care about) and with what I quoted above, and what you end up with is a bloody tautology! The thing/property I care about is what I care about! Congrats. :)

- How else do I propose to figure out whether something is conscious? - You're changing the subject. We were talking about what consciousness is (primary definition), not about the methods there are to figure out whether something is conscious. Consciousness is present if it feels like something to be a certain thing, i.e. if a thing has a first-person perspective, however rudimentary. We cannot detect it directly from the third-person perspective, so we have to resort to indirect methods. The inductive principle ("similar cause, similar effect", very roughly) allows me to probabilistically generalize from the only case for which I have direct and certain knowledge that consciousness is present, which is my own. Behavioral, functional (--> algorithms), material and evolutionary properties will play a role here. But the point is that the role they play is *epistemic*. None of them are to be equated with consciousness, they are just pieces of *evidence* for the presence of consciousness.

- The basic argument for the postulated "direct third-person inaccessibility" of consciousness is, of course, the following: My pain is as real as anything could be, but when you look at my behavior or inside my body, you'll find particles in motion and you cannot detect this property directly. Nothing forces you to conclude that it feels like something to be me or that there's a pain feeling there if you examine me (third-person perspective). Yet it is the *surest* thing in the world that my pain feeling exists! (It's more certain than the existence of the external world, for instance. And it's not even clear how I *could* be in error about the presence/absence of a painful feeling.)

- Zombies: What's at issue here is *logical possibility*. You're right that we can falsely believe that something is consistent and then be proven wrong (--> Arrow's theorem). But you have done *nothing* to prove that zombies are logically impossible. The question of whether there is a "hard problem" depends on logical, not physical possibility. Take the analogy of water: Given all the material micro-properties of water, we can *logically deduce* its macro- or surface-properties. So it's logically impossible for water to be H2O but not fluid at room temperature, but it seems perfectly logically possible for an object to be (algorithmically) interacting neurons and *not* conscious. Even knowing everything material about the brain, everything that's accessible from the third-person perspective, nothing will tell you that there *must* be something it is like to be that object. Applying Occam's Razor and leaving consciousness out of the ontological picture might seem like a natural epistemic move now ("eliminative materialism") - but it's totally inadequate (even ridiculous) given that the existence of my own consciousness is the surest thing in the world.

- The only way to make zombies logically impossible without denying obvious facts is, I think, by assuming panpsychism (cf. Galen Strawson). If some form of (very rudimentary) consciousness is ontologically basic ("micro-qualia"), then it becomes possible (at least in principle) to logically deduce our property of being conscious/subjects of experience/having macro-qualia from what we are on the micro-level. (Well, I'm not being precise enough: Actually, zombies would still be logically possible. What would be logically impossible, though, is that they have the same micro-structure as we do.)

- <<So you would create *any* amount of suffering right now if it came in a package with a sufficiently large amount of happiness? - Yes.>> - Totally nuts! :) I don't see how the creation of happiness (where there was *nothing* before) could be ethically good in itself. (Creating happiness where there would otherwise be less agreeable conscious states is a different matter, of course - here the benefit is perfectly clear.) You cannot benefit non-existents by bringing them into existence. Strictly speaking, you also cannot harm them by the mere fact of bringing them into existence, but they are of course harmed if they suffer and should be turned off immediately (after 0 seconds in the limit case). Or do you really believe it would be *immoral* not to turn on some simulated minds that you built if it was guaranteed that they would be happy? If you could turn on enough of them, would you forgo saving children from starvation and torture? That strikes me as totally crazy on an intuitive level, and the theoretical reason is that nobody can mind not being turned on, whereas there certainly is someone that minds not being saved from torture. Or would you really feel any ethical motivation to fight for the simulated minds being turned on and for more of them being built? Fight for the rights of the unborn! - at the expense of those who actually exist and suffer.

- <<All of our moral sentiments always depend on our mood.>> - No. Or maybe yes, just as your acceptance of some mathematical proof or physical theory might depend on your mood. But it doesn't depend on anyone's mood that there's something that *matters* for me, that there are things that are *good/bad* for me, and for you, and for children, pigs, and dogs, but probably not for rocks. This is an objective fact.

Hutch
Posts: 40
Joined: Sun Jun 10, 2012 9:58 am
Location: Boston

Re: Consciousness and tableness

Post by Hutch » Sun Jun 10, 2012 10:51 am

This is a really interesting discussion about consciousness, but I think that the discussion of different types of utilitarianism (e.g. NU vs. classical) deserves its own thread. I've started one here, and started it off with my thoughts (sorry about the length of the post there...)


Re: Consciousness and tableness

Post by Brian Tomasik » Sun Jun 10, 2012 10:53 am

Adriano_Mannino wrote: Suppose some "we" cared about pointy rocks due to their property of pointiness. It would then follow that consciousness = pointiness.
Yes, those rocks would be conscious. However, for example, they would not be "fronscious" (aware) nor "stonscious" (awake) nor many other things.
Adriano_Mannino wrote: You can make anything 'conscious' by fiat. If that's the case, then consciousness doesn't really exist.
We can define any word to refer to any property of the world we want. That doesn't mean the property doesn't exist. "What's in a name? that which we call a rose / By any other name would smell as sweet [...]."
Adriano_Mannino wrote: The thing/property I care about is what I care about! Congrats. :)
Why thank you. :) Yes, that's right -- it was a tautological sentence.
Adriano_Mannino wrote: We were talking about what consciousness is (primary definition), not about the methods there are to figure out whether something is conscious.
Perhaps I phrased it poorly. What I meant was that our primary definition isn't sufficiently precise to delineate the set of things we're referring to even in theory. It's like saying that something is "enormous" when it's at least as big as the Empire State Building. But wait, how exactly is that comparison done? Is it by mass? By volume? By height? Do we count the air inside the building? If so, what do we do when it's a warm day and the density of air is lower than usual? Is the metric expansion of space-time important? How about relativistic length contraction? All of these details need to be filled in before the definition can be made precise.
Adriano_Mannino wrote: Behavioral, functional (--> algorithms), material and evolutionary properties will play a role here.
Yep, I agree with all of this.
Adriano_Mannino wrote: But the point is that the role they play is *epistemic*. None of them are to be equated with consciousness, they are just pieces of *evidence* for the presence of consciousness.
No, because there is no "real truth" about whether there's something it's like to be a Lego Turing machine. A Lego Turing machine just does things, and we may or may not regard those things as counting as subjective experience. Same for you or me or a bat.
Adriano_Mannino wrote: Nothing forces you to conclude that it feels like something to be me or that there's a pain feeling there if you examine me (third-person perspective).
If we could read your source code, or if we could trace all of your particle movements and understand their functions at a higher level, then yes indeed, we would see that there's something that it's like to be you. We would see your responses to aversive stimuli, your encoding of experiences into memory, your plans, your fears, your hopes, your 'liking' responses resonating through the ventral pallidum, your reflection upon your own emotions, your calculations of how to act next, and your verbal brain regions assimilating these operations into a statement that "Hey, it feels like something to be me." We would see everything.
Adriano_Mannino wrote: (--> Arrow's theorem)
Nice visual aid. :)
Adriano_Mannino wrote: Given all the material micro-properties of water, we can *logically deduce* its macro- or surface-properties. So it's logically impossible for water to be H2O but not fluid at room temperature, but it seems perfectly logically possible for an object to be (algorithmically) interacting neurons and *not* conscious.
We have not fleshed out a sufficiently precise definition of "conscious" to answer this. However, for any given set of values, there is *some* precise definition of what's conscious and what isn't. At that point, consciousness is as concretely defined as "fluid at room temperature," and then zombies become as logically impossible as H2O that's frozen at 20 Celsius.

I'm not totally sure what you're thinking counts as a zombie, either. I'm assuming it's something that's physically identical to a conscious creature in all realms of physics. So even if we discovered some new branch of physics where consciousness lives that's totally separate from quarks and leptons (which I find highly unlikely due to Occam but isn't logically impossible), I'm assuming zombies would be identical in this respect too.

Basically, you can't get outside of physics. :)
Adriano_Mannino wrote: Actually, zombies would still be logically possible. What would be logically impossible, though, is that they have the same micro-structure as we.
Cool -- then I think we agree here?
Adriano_Mannino wrote: You cannot benefit non-existents by bringing them into existence.
Why not? They exist in the space-time of our deterministic block universe just as much as anyone else who's already alive. In any event, I think happiness is just plain good all on its own. It doesn't have to "benefit someone" per se.
Adriano_Mannino wrote: Or do you really believe it would be *immoral* not to turn on some simulated minds that you built if it was guaranteed that they would be happy?
Yes.
Adriano_Mannino wrote: If you could turn on enough of them, would you forgo saving children from starvation and torture?
On my non-NU days, yes, if it were a sufficiently large number of minds that could be turned on.
Adriano_Mannino wrote: Or would you really feel any ethical motivation to fight for the simulated minds being turned on and for more of them being built? Fight for the rights of the unborn! - at the expense of those who actually exist and suffer.
In theory, yes. In practice, alas, we are unlikely to find ourselves in a situation where there isn't more important suffering to prevent. My pain:pleasure exchange rate is a finite constant, but it's a very high constant.

DanielLC
Posts: 707
Joined: Fri Oct 10, 2008 4:29 pm

Re: Consciousness and tableness

Post by DanielLC » Wed Jun 13, 2012 9:54 pm

If consciousness isn't a thing, what are the Born probabilities?
Consequentialism: The belief that doing the right thing makes the world a better place.


Re: Consciousness and tableness

Post by Brian Tomasik » Thu Jun 14, 2012 7:43 am

DanielLC, can you clarify the puzzle? And what does it mean that consciousness is or is not "a thing"?


Re: Consciousness and tableness

Post by DanielLC » Thu Jun 14, 2012 8:04 pm

The probability of our observing an outcome is proportional to the square of the amplitude. What is it the probability of? I think "Where Experience Confuses Physicists" explains it better than I could. You should probably read "Where Physics Meets Experience" first.
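As a numeric aside (my illustration, not part of the original post): the Born rule assigns each measurement outcome a probability equal to the squared magnitude of its complex amplitude, with the state vector normalized so the probabilities sum to one. A minimal sketch:

```python
# Hypothetical illustration of the Born rule: P(outcome i) = |amplitude_i|^2
# after normalizing the state vector.

def born_probabilities(amplitudes):
    """Return observation probabilities for a list of complex amplitudes."""
    norm_sq = sum(abs(a) ** 2 for a in amplitudes)  # total squared magnitude
    return [abs(a) ** 2 / norm_sq for a in amplitudes]

# A qubit in the equal-superposition state (1/sqrt(2))|0> + (i/sqrt(2))|1>:
probs = born_probabilities([2 ** -0.5, 1j * 2 ** -0.5])
print(probs)  # each outcome has probability 0.5
```

The philosophical puzzle DanielLC points to is not the arithmetic, which is trivial, but what these numbers are probabilities *of* — i.e., whose experience they weight.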

Also, if it turns out that the Copenhagen interpretation is true, then why do we find ourselves now?


Re: Consciousness and tableness

Post by Brian Tomasik » Fri Jun 15, 2012 10:38 am

I probably won't read the articles any time soon, but thanks a lot for the references! This would be fun to return to at some point.

Without having read them, I would say that "finding yourself experiencing something" is a completely sensible state of affairs, regardless of the debate between me and Adriano.
