In Defence of Moral Realism

Utilitarianism, prioritarianism and other varieties of consequentialism.
Darklight
Posts: 118
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

In Defence of Moral Realism

Post by Darklight » Sun Feb 09, 2014 4:36 am

The following is as much an effort to put my thoughts on the subject in order as it is an effort to persuade people to at least consider moral realism. I thank the very smart people here at Felicifia for causing my mind to turn to this subject. I'd not given much thought to this particular bit of meta-ethics until recently. So without further ado...

In Defence of Moral Realism

Moral Realism is defined by Wikipedia as of Saturday, February 8, 2014 as:

"A non-nihilist form of cognitivism and is a meta-ethical view in the tradition of Platonism. In summary, it claims:
  1. Ethical sentences express propositions.
  2. Some such propositions are true.
  3. Those propositions are made true by objective features of the world, independent of subjective opinion."
So how do we go about proving or disproving these claims?

To do this, I shall first establish some definitions.

What is Morality?

Some people argue that morality is simply a relative standard by which we judge things, as in End-Relational Theory. If true, this makes morality inherently relative, because different people can establish different standards and there is no basis for proving any particular standard more correct. I don't subscribe to this view.

Morality in my view is simply, and this is a loaded statement I know, "what is right".

What exactly do I mean by this? To say something is right is to imply that it is the correct fact, world state, or course of action leading to a world state, given all relevant information. For instance, we can say that "1 + 1 = 2" is right because it correctly represents a mathematical relationship. We cannot say that "1 + 1 = 2" is good, however. Goodness is a different property from rightness. Rightness simply says that, given all the facts, this is correct.

Rightness is not the same thing as rationality. Rationality has to do with finding the best way to achieve one's values and goals. It is quite possible, then, for rational activity to be immoral.

Rightness is simply the property of being true. If morality is this, it essentially makes claims 1 and 2 correct by definition.

Morality as Truth

Morality thus is not a subjective standard we apply because we desire it. Rather, morality is a set of prescriptions based on descriptions of reality. It is a set of normative truths that we can infer through a combination of perception, logic and reason. In that sense it is very much like mathematics, and I argue it exists in the same realm as mathematics. This essentially makes claim 3 correct by definition.

Thus, assuming that my definition of morality expresses something that actually exists, rather than just a hypothetical construct of my philosophy, the definition of moral realism is satisfied. Thus, to prove moral realism, I need only show that this definition of morality is, -ahem- true.

What is moral?

So then, what does this definition of morality imply that makes it falsifiable? It implies that morality is something that is grounded in facts. It implies strongly that whatever is moral is not a matter of opinion, but of knowledge, and that the reason why people disagree about morality is that they lack perfect knowledge.

I don't pretend to have perfect knowledge. Thus, any attempt at finding out what morality implies is inherently limited by this lack of knowledge. Nevertheless, lack of knowledge has never been a reason not to attempt to reason with what knowledge we do have. Science is all about figuring out what we can know despite uncertainty.

So what is moral? Something that is moral is fact dependent. Strictly speaking, there are only a few facts that we know without question. We know that something exists, that existence is. We know that some part of what exists has subjective states, that experience is. We know that some subjective states feel different from others, that some are noxious, while others are pleasant. We know that because of the feeling of these states, we discriminate automatically between them, assigning some of them to be positive (or good), and others to be negative (or bad). This is not a preference, but a feature of sentience.

We can, perhaps at the risk of some confusion, refer to these positive and negative valences as absolute values, because we have no choice in assigning value to them: the assignment is an automatic, deterministic mechanical process. These absolute values differ fundamentally from other values that we can choose, and I think much of the confusion over values comes from not recognizing this.

Absolute values can motivate action and establish desires, but motivation is not by itself moral. The correctness of a desire depends on its consequences, whereas the correctness of a feeling depends only on how it feels. Feelings and desires are both facts, but feelings have valences, while desires are either satisfied or not. We do not say that desires are positive when they are satisfied and negative when they are not; in fact, the satisfaction of a desire often leads to its annihilation. It is therefore clear that desires exist as means to motivate the achievement of values or goals. They may be good, but not absolutely good.

I use "absolute" instead of "intrinsic" because it may be possible to hold some outside goods, like a better world, as intrinsically valuable. However, that is a choice we can make to assign such value, so I consider absolute value as potentially different from intrinsic value.

Given these facts, we can begin to state what is moral. An entity with perfect knowledge would be aware of these facts, and would know what good and bad feelings felt like. As it would know what every entity in this universe felt, it would be able to reason about the truth of these feelings, these absolute values. And the fundamental truth is simply that all entities automatically discriminate or prefer feeling the good over the bad. There is a kind of correctness to feeling good, and incorrectness to feeling bad, that subjects automatically are motivated to act upon.

In a sense, this can be understood by looking at a goal-directed agent. When such an agent reaches its goal state, it is in the correct state. If it fails to do so, then it is in the incorrect state. Sentient beings have an intrinsic goal state, and it is called happiness. The desires, values, and actions of the agent can be described as correct only in the sense that they contribute to reaching the goal state. Sentient beings could conceivably develop other goal states, such as desired states of the world. But those states would not be about them. A world state could be "correct" to a sentient being, but that could just be a belief, rather than necessarily being a fact about the sentient being. Knowing the actual correct world state depends on perfect knowledge, and is therefore unknowable to the average sentient being. Though, this should not necessarily preclude sentient beings from trying to know as much as possible and trying to create what they think is the "correct" world state.
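The goal-directed-agent analogy can be put in code. The following toy Python sketch is purely my own illustration of the analogy (the class and state names are invented): an agent counts as "correct" exactly when its current state matches its goal state, with happiness playing the role of the intrinsic goal.

```python
# Toy illustration of the goal-directed-agent analogy.
# The class and state names are invented for this sketch only.

class GoalDirectedAgent:
    def __init__(self, goal_state):
        self.goal_state = goal_state  # the intrinsic goal, e.g. happiness
        self.state = "neutral"        # the agent's current state

    def is_correct(self):
        # "Correct" in the sense above: the agent occupies its goal state.
        return self.state == self.goal_state

agent = GoalDirectedAgent(goal_state="happiness")
print(agent.is_correct())  # False: the agent starts away from its goal
agent.state = "happiness"
print(agent.is_correct())  # True: reaching the goal state is being "correct"
```

Desires and actions would then be "correct" only insofar as they move `state` toward `goal_state`, which is all the analogy claims.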

It can be stated then that the best state is the correct state that an entity -should- be in. That is to say, there is a prescriptive relationship between right and good, that the truth prescribes goodness as being fundamentally correct. Thus all good should be right, though not all right should be good, because it is not the case that all things that are true should be good (to say that 1 + 1 = 2 should be good is silly), but all things that are good should be true (as in, goodness should exist).

An entity with perfect knowledge, if motivated to do what is right, would therefore act to maximize the good for all sentient beings, not because it was feeling benevolent, but because it would be the correct course of action consistent with the truth of knowing what the correct world state, and correct state of all sentient beings, was.

In attempting to be moral, we attempt to achieve this correct world state, rather than just achieving the correct state for ourselves. We choose to take a universal perspective, even without perfect knowledge, and try to approximate what an entity with perfect knowledge would do.

The Problem with Values

Something more should be said about values. One common confusion in moral theory is the idea that it must have something to do with all our values. This confusion, I believe, stems from the belief that values determine morality, which I think is mistaken.

Non-absolute values are inherently subjective, and are based on our imperfect perceptual knowledge of the outside world. People whose knowledge of the outside world changes often change their values to suit the information they have. To found "morality" on these values is to make "morality" inherently subjective and error prone. Non-absolute values are useful because the fulfillment of these values correlates strongly with positive states, but this is not always the case. Values can be described as good or bad in terms of what consequences holding those values entails. But non-absolute values cannot be described as "absolutely" good or bad or right or wrong.

I will state however something that will likely be controversial, and that is that the correct values are the ones that are most moral. Most people do not have values which are perfectly moral. Rather they either think they do, or they don't care. Nevertheless, some values are closer to moral than others. For instance, I think Utilitarianism is close to moral, but it may not be perfectly moral. I don't pretend to know because I lack perfect knowledge.

Nevertheless, I conjecture that there is a perfect morality because objective truth exists, even if we in our limited nature can only apprehend subjective truth directly, and must infer the qualities of objective truth indirectly.

Thus, the truth is, that I cannot prove that my definition of morality is true. And so I cannot actually prove moral realism. However, I can conjecture my definition of morality as plausible. Thus, moral realism, -could- be true, and unless falsified, presents a legitimate intellectual position to take.

Morality as Computation

The interesting corollary to all of this is that if morality is truly like mathematics, then morality should be computable. Maximizing the good is, in effect, a computation that sees maximum goodness as the correct state of the universe. In that case we could calculate a kind of "moral error function" or "moral objective function", and morality can be seen as a kind of optimization problem. This is, of course, what the various shades of Utilitarianism have been saying all along.
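To make the optimization framing concrete, here is a toy Python sketch. It is entirely illustrative: the sum-of-happiness objective, the world states, and the numbers are assumptions of the sketch, not a worked-out theory. The "moral objective function" scores candidate world states, and acting morally amounts to an argmax over it.

```python
# Toy sketch of morality as an optimization problem.
# The sum-of-happiness objective and the numbers are illustrative assumptions.

def moral_objective(world_state):
    # Total happiness across all sentient beings in a candidate world state.
    return sum(being["happiness"] for being in world_state["beings"])

def moral_error(world_state, best_value):
    # Distance from the best achievable value; zero means the "correct" state.
    return best_value - moral_objective(world_state)

candidate_worlds = [
    {"name": "A", "beings": [{"happiness": 3}, {"happiness": 4}]},
    {"name": "B", "beings": [{"happiness": 5}, {"happiness": 5}]},
]

# Choosing the morally "correct" world is then an argmax over the objective.
best = max(candidate_worlds, key=moral_objective)
print(best["name"], moral_objective(best))  # B 10
```

A prioritarian or negative-leaning variant would simply swap in a different objective function; the optimization structure stays the same.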

Anyways, that's my attempt at a defence of moral realism. I apologize if it isn't the most rigorous proof.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein


Re: In Defence of Moral Realism

Post by Darklight » Sun Feb 09, 2014 9:40 pm

I should perhaps make some clarifications to this moral theory.

Values and Correctness

I would like at this point to separate values into three categories:

Subjective value is the value we assign as subjects to experiences and things. While we can state facts like "the subject values money", it does not follow automatically that "money is valuable" in any other sense. This is what most people mean by values in general.

Abstract value is like the common mathematical values, such as "5" or "true". Its relevance is that it allows us to make universally true comparisons of things. For instance, "9 < 17". This statement is true everywhere regardless of what any subject thinks.

Objective value exists at the intersection of subjective and abstract value. That is to say, it refers to what subjects value, but abstracts it to be universal. Thus, a statement like "happiness > suffering" expresses an objective value because it is true in all cases across all subjects.

Correctness is a mathematical property that I am perhaps abusing by applying it to morality, but I feel it best captures what I mean by "rightness". The fundamental assumption underlying my usage of correctness is that "true" or "positive" is preferable to "false" or "negative", because "true" and "positive" are more objectively valuable. That is to say, any discriminatory system will choose to be more correct and maximize the objectively valuable, assuming it has the motivation to be right rather than wrong.

Morality as Subjective Value-Independent

Due perhaps to the lack of clarity in my earlier writing, my definition of morality may appear to be subjective value-dependent, but in fact a key feature of this theory is that morality is actually subjective value-independent.

What I mean by this is not that subjective values and morality are unrelated, but that the relationship is one-sided.

Most definitions of morality assume that subjective values determine what our morality should be. I argue that the theory of Morality As Correctness suggests that the opposite is the case: morality determines what subjective values we should hold (though not necessarily what we do hold). Morality As Correctness holds that what makes a moral statement correct is its relation to objective value.

The Central Thesis

The central argument of my theory, then, is that a state like happiness is good or positive not because we subjectively value it, but because it is an experience that is objectively more valuable, and therefore more correct, than suffering. That is to say, there is a mathematical relationship that says that happiness > suffering, and that therefore happiness should exist, while suffering should not exist. Good, in this case, is not a subjective value judgment, but a state of correctness that happens to benefit the subject.

Why should an all-knowing objective value maximizer only maximize these things and not other things like the number of paperclips? Because while 100 paperclips > 10 paperclips is true as an abstract value, it doesn't follow that a paperclip itself carries any objective value. Paperclips are not universally valued by all subjects. Thus the statement 100 paperclips > 10 paperclips actually reads as: 100 × (0) > 10 × (0), i.e. 0 > 0, which is false and therefore not motivating. What makes happiness true and objectively valuable is that all subjects experience it directly as positive.
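The paperclip arithmetic can be spelled out. In this toy Python sketch (my own framing of the paragraph above; the function and the valence numbers are invented for illustration), a count of things only yields objective value when every subject experiences the unit itself as positive:

```python
# Toy sketch of the paperclip argument: count * per-unit objective value.
# Per-unit value is nonzero only if ALL subjects experience it as positive.
# Function name and valence numbers are invented for illustration.

def objective_value(count, unit_valences):
    # unit_valences: how each subject directly experiences one unit.
    unit = 1 if unit_valences and all(v > 0 for v in unit_valences) else 0
    return count * unit

# No subject directly experiences a paperclip as positive: 100*(0) == 10*(0).
paperclips = [0, 0, 0]
print(objective_value(100, paperclips) > objective_value(10, paperclips))  # False

# Every subject experiences happiness directly as positive, so more is better.
happiness = [1, 1, 1]
print(objective_value(100, happiness) > objective_value(10, happiness))  # True
```

The `all(...)` condition is the universality criterion: one subject for whom the unit is not positive zeroes out the whole comparison, which is why the paperclip count never motivates.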

This is true even if you got your wires crossed. As happiness describes a positive state, any attempt to reverse the wires and make happiness a negative state would be defeated, because the new "happiness" would actually feel bad and therefore no longer be happiness by definition. By its nature of being a directly experienced thing rather than an indirectly experienced thing, it has a universal description that allows it to be objective.

The nature of values is that there are many, many subjective values, an infinite number of abstract values, and very few objective values. So far I have identified happiness as an objective value. I leave open the possibility that other objective values might exist, and therefore also be worth maximizing, but they would have to satisfy the criterion of being experienced universally by all sentient beings as positive or good. If it can be proven that happiness is not an objective value, and that there are no objective values, then this theory of moral realism can be falsified.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University

Re: In Defence of Moral Realism

Post by peterhurford » Mon Feb 10, 2014 5:10 am

This is a good post. I don't have time to respond to it right now, but I hope to respond to it within the week.

Have you seen my most up-to-date summary of my view (the end relational view you're critiquing)? You can find it here: "A Meta-Ethics FAQ".
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.

Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA

Re: In Defence of Moral Realism

Post by Brian Tomasik » Mon Feb 10, 2014 11:17 am

I have succumbed to another temptation to engage Darklight's writings. :) I also plan to finish the reply on the other thread eventually.

The two fundamental problems with moral realism:
1. Why should I believe it?
2. Even if it were true, why would I care?

#1: Everything you said about moral realism being true like 1+1=2 is true could also be said of why God exists. Moral realism has no explanatory power and seems to have no more basis than theism. Do you see any difference?

#2: If I don't believe 1+1=2, I tend to fail to achieve my goals and may even get killed. If I don't accept moral realism, then what? What motivation do I have to do "the morally right thing" if I don't already care about moral realism? Maybe there's no answer here. You could defend the reality of moral truth without having a way to compel me to care about it.

---

It may be that for most organisms, happiness by me > suffering by me. It's not the case that for any organism X, happiness by organism Y > suffering by organism Y for all Y, especially if Y's extra 10 of happiness means 5 less for X.

---

This article is well written. :) I disagree with most of it but alas don't have time to debate it point by point. It's possible you'll come to reject it yourself in time, just as theists who think more about religion tend to become atheists.

Would there be adverse consequences if you "became atheist" so to speak? Would this dampen your motivation to live ethically? I think many realists-turned-nonrealist rebound to their original levels of moral commitment, though this may not always be true.

Do you think any sufficiently advanced AI would behave according to the moral truth? If not, then there's less consequence to resolving this issue, because that's one of the main practical cases where moral realism can cause real-world harm rather than being like liberal religion where God doesn't actually have much role in people's lives.

P.S.: Sorry for all the religion comparisons. If this were a debate, I would be accused of ad hominem without advancing any arguments. : P

P.P.S.: I've been trying and failing to find a way to express the fact that I admire this piece and feel like you're a high-quality contributor.


Re: In Defence of Moral Realism

Post by Darklight » Mon Feb 10, 2014 7:12 pm

peterhurford wrote: Have you seen my most up-to-date summary of my view (the end relational view you're critiquing)? You can find it here: "A Meta-Ethics FAQ".
I've skimmed it a couple times, but I haven't had a chance to give it as much attention as I probably should. From what I did read, I mostly agree that if you accept your premises on what morality is, most of what follows is correct. I think what I disagree with mostly is whether definitions are arrived at by social consensus. This may be true of language definitions, but I don't think it is true of mathematical definitions.

And I agree that morality is not by itself motivating. It is just a bunch of facts that suggest certain things and acts are better or more right than others; it doesn't say you have to care about being right. I just think that the only motivation you need is a desire to be right.

Also, while "being moral" is itself a goal, morality itself is not. Thus, the statement, "you ought to be moral" may not be as circular as you imply. In my view, it is a statement that says, your goal should be to be correct or consistent with the truth. If it is in a way circular, then it is circular in the sense that a Hofstadter Strange Loop is circular, which to me means we shouldn't discard it just because.

I actually don't disagree with most of your premises. I guess the one premise I disagree with is that there is no basis for declaring one goal better than another. I think that being correct or right is a goal that is self-justifying, whereas other goals are not self-justifying. Note that self-justifying doesn't mean the same thing as motivating.
Brian Tomasik wrote: The two fundamental problems with moral realism:
1. Why should I believe it?
2. Even if it were true, why would I care?
1. You should believe it if you value the truth as being very very useful for making good decisions with generally good consequences.
2. It's totally up to you whether or not you care about the truth.
Brian Tomasik wrote: #1: Everything you said about moral realism being true like 1+1=2 is true could also be said of why God exists. Moral realism has no explanatory power and seems to have no more basis than theism. Do you see any difference?
I think it does have explanatory power, and further, that unlike a theory of God, it makes predictions or statements which are falsifiable (though it can be argued that even theism is falsifiable in the sense that it could be falsified with perfect information). My particular version of moral realism explains why an agent interested only in being correct would come to behave in ways that we would see as being moral.
Brian Tomasik wrote: #2: If I don't believe 1+1=2, I tend to fail to achieve my goals and may even get killed. If I don't accept moral realism, then what? What motivation do I have to do "the morally right thing" if I don't already care about moral realism? Maybe there's no answer here. You could defend the reality of moral truth without having a way to compel me to care about it.
Indeed. I accept that motivation is separate from morality.
---

Brian Tomasik wrote: It may be that for most organisms, happiness by me > suffering by me. It's not the case that for any organism X, happiness by organism Y > suffering by organism Y for all Y, especially if Y's extra 10 of happiness means 5 less for X.

---
Hmm... An implicit assumption of my theory is that the happiness of individuals is independent of each other: one organism's experience of happiness does not directly affect another's. I would argue that this is true because happiness, being a subjective state, does not directly interfere with the objective world, and therefore one entity's experience of happiness is independent of another's. Situations where 10 happiness for Y = -5 happiness for X are not a result of the happiness interfering directly, but of some indirect relationship, such as Y stealing objects from X. This means that there can exist some arrangement of the universe where happinesses don't conflict even indirectly. For instance, we could separate all sentient beings into their own universes. But I think it's not even necessary to do that. The happiness of individuals is sufficiently independent that a being with perfect information could set up some arrangement of the universe so that all sentient beings' happiness would not conflict.
Brian Tomasik wrote: This article is well written. :) I disagree with most of it but alas don't have time to debate it point by point. It's possible you'll come to reject it yourself in time, just as theists who think more about religion tend to become atheists.
We'll see. I admit this theory is just a hypothesis, created sort of as an experiment in pushing some ideas to their logical conclusions. Like any hypothesis, I am willing to accept that it could be wrong, and in fact would be interested to see how well it stands up to scrutiny.
Brian Tomasik wrote: Would there be adverse consequences if you "became atheist" so to speak? Would this dampen your motivation to live ethically? I think many realists-turned-nonrealist rebound to their original levels of moral commitment, though this may not always be true.
Not really. My motivation to be ethical comes not only from a desire to be right, but also from a strong empathy towards other sentient life. Even if there was no certain way to "be right", I would still prefer to do the most good, because I am inclined to care.
Brian Tomasik wrote: Do you think any sufficiently advanced AI would behave according to the moral truth? If not, then there's less consequence to resolving this issue, because that's one of the main practical cases where moral realism can cause real-world harm rather than being like liberal religion where God doesn't actually have much role in people's lives.
It really depends on the AI's motivations. Part of my effort to create a morally realist theory is to try to see if there is any way to direct such an AI to behave morally rather than arbitrarily. Right now I think that in order for my theory to be motivating, such an AI would have to already be motivated to do what was right or correct. Perhaps somehow the theory becomes self-motivating once you have perfect information, but at that point you have not just an advanced AI, but essentially a god-like being. I would like for god-like knowledge to automatically motivate omni-benevolent behaviour, but I can't, with the current version of this theory, assert that confidently.
Brian Tomasik wrote: P.S.: Sorry for all the religion comparisons. If this were a debate, I would be accused of ad hominem without advancing any arguments. : P
No worries, the religion comparison is apt. It does kind of resemble one, especially with the assertion that a being with perfect knowledge might be perfectly moral. But then, I think any morality or ideology that argues that it is more correct or true than others, runs the risk of being compared to religion.
Brian Tomasik wrote: P.P.S.: I've been trying and failing to find a way to express the fact that I admire this piece and feel like you're a high-quality contributor.
I appreciate that. I think the most clear expression that you feel I'm a high-quality contributor is that you're willing to reply and take my arguments seriously. Personally, I actually have doubts that I'm really that high-quality a contributor. I haven't gotten around to organizing my thoughts as seriously as you and the others who have websites and blogs filled with high quality essays. Most of my offline writings are still a work in progress, and I would consider most of the theories and ideas that I've advanced on this forum to be very much works in progress, that I'm sharing in order to see how well the "first draft" actually stands up to outside scrutiny.

So yes, thank you all for your time. :)
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein


Re: In Defence of Moral Realism

Post by Brian Tomasik » Tue Feb 11, 2014 2:31 am

Darklight wrote: I think what I disagree with mostly is whether definitions are arrived at by social consensus. This may be true of language definitions, but I don't think this is true of mathematical definitions.
Definitions are always arbitrary. That doesn't mean the statements you can express with them are.
Darklight wrote: Thus, the statement, "you ought to be moral" may not be as circular as you imply. In my view, it is a statement that says, your goal should be to be correct or consistent with the truth.
What does "should" mean here?
Darklight wrote: I think that being correct or right is a goal that is self-justifying, whereas other goals are not self-justifying.
"I believe everything the Pope says."
"Why?"
"The Pope says he's infallible. His statement is self-justifying."
Darklight wrote: My particular version of moral realism explains why an agent interested only in being correct would come to behave in ways that we would see as being moral.
My particular version of theism explains why an agent interested only in being correct would come to behave in ways that we see as being religious.

What about the problem of multiple revelations? Well there's also the problem of many conflicting moral views.
Darklight wrote: Indeed. I accept that motivation is separate from morality.
Cool. :)
Darklight wrote: The happiness of individuals is sufficiently independent that a being with perfect information could set up some arrangement of the universe so that all sentient beings' happiness would not conflict.
Depending on how you individuate organisms and treat brain size, it may be that you could always increase the happiness of one at the expense of others by stealing materials that comprise their brains to add to the brain of the other one to make it bigger. Alternatively, if you don't weight by brain size, you could divide one big brain into two smaller brains, and by not doing so, you're withholding happiness from those two small brains.
Darklight wrote: Not really. My motivation to be ethical comes not only from a desire to be right, but also from a strong empathy towards other sentient life. Even if there was no certain way to "be right", I would still prefer to do the most good, because I am inclined to care.
Cool. :) Me too. In fact, I would probably rather be humane than be right. If it turned out that the right thing was to torture bunnies for no reason other than my amusement, I would not do the "right" thing. This is like the Euthyphro dilemma, a variant of which could ask, "If God told you to rape and murder, would you?" (As it turns out, God did tell people to do such things in certain books of the Bible, so it's not purely a hypothetical question. Also compare to the story of Abraham and Isaac, where Abraham is praised for trying to kill his son at God's command.)
Darklight wrote: Right now I think that in order for my theory to be motivating, such an AI would have to already be motivated to do what was right or correct.
Cool.
Darklight wrote: But then, I think any morality or ideology that argues that it is more correct or true than others, runs the risk of being compared to religion.
Sort of. Religious presuppositionalists are right that every axiom we hold involves faith. I personally believe that there is a real external world about which facts are true. I have more faith than a complete epistemological skeptic.
Darklight wrote: Personally, I actually have doubts that I'm really that high-quality a contributor.
Your writing is distinctly lucid and persuasive (even if the arguments aren't to me). I'm guessing you have a high verbal IQ.

---

Some additional reading that may be of interest:
* Dealing with Moral Multiplicity by me
* The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it by Joshua Greene (which I haven't read most of). Greene is a non-realist utilitarian.


Re: In Defence of Moral Realism

Post by Darklight » Tue Feb 11, 2014 6:55 am

What does "should" mean here?
I think it means the same thing as "ought". As in, it is a prescriptive indicator of obligation.
"I believe everything the Pope says."
"Why?"
"The Pope says he's infallible. His statement is self-justifying."
But that statement is not self-justifying. "Pope = infallible" is not the same thing as saying "true = true", which is essentially how being correct is a self-justifying goal. Being correct or right = being true, by definition of what being correct or right means. I know it's circular, but that's intentional. Self-justification seems to require a kind of Strange Loop or recursive reasoning to exist.
My particular version of theism explains why an agent interested only in being correct would come to behave in ways that we see as being religious.
I don't actually see anything wrong with this statement, and it's more clear why if you replace "being correct" with "the truth" so that it reads:

"My particular version of theism explains why an agent interested only in the truth would come to behave in ways that we see as being religious."

If an agent is actually only interested in the truth, and starts acting religious, then we should see that as evidence in favour of theism, weighed by how much information the agent actually has. If the agent has perfect information and starts acting religious, I would start praying. XD
What about the problem of multiple revelations? Well there's also the problem of many conflicting moral views
The thing with multiple revelations is that revelation is a claim that God or some outside power just gave you knowledge. The more apt comparison would also include people who claim to have a proof of God from reason alone. The problem of many conflicting moral views can be dealt with by assessing claims according to their veracity. Revelations would be like Divine Command Theory (which I believe has been shown to be invalid by the Euthyphro dilemma), while proofs from reason would be more like non-naturalist moral realist theories. In practice, most proofs of God, like the Ontological Argument or the Teleological Argument, have flaws that can be used to falsify them. On the other hand, if there was a proof of God that was based on scientific facts that actually purported to show the existence of God, that would be much more appealing. That would be equivalent to an ethical naturalist moral realism.
Depending on how you individuate organisms and treat brain size, it may be that you could always increase the happiness of one at the expense of others by stealing materials that comprise their brains to add to the brain of the other one to make it bigger. Alternatively, if you don't weight by brain size, you could divide one big brain into two smaller brains, and by not doing so, you're withholding happiness from those two small brains.
At first glance this is a perplexing argument. But I think it hinges on the mistaken assumption that you can just add parts to a brain and expect it to function as a bigger brain, or that dividing one big brain can create two smaller brains. It's not clear that this is true. If you carefully surgically cut apart the two hemispheres of the brain, you don't get two brains. You get one person whose two hemispheres aren't talking to each other anymore, and who has weird disorders of thinking. Now, I'm no neurosurgeon, but if you try to cut the brain further than that, I'm pretty sure you'll end up just killing the person entirely somewhere around the time you get to the brain stem. Brains are not just random masses of neurons. They have structure and a kind of integrated cohesion.
Cool. :) Me too. In fact, I would probably rather be humane than be right. If it turned out that the right thing was to torture bunnies for no reason other than my amusement, I would not do the "right" thing. This is like the Euthyphro dilemma, a variant of which could ask, "If God told you to rape and murder, would you?" (As it turns out, God did tell people to do such things in certain books of the Bible, so it's not purely a hypothetical question. Also compare to the story of Abraham and Isaac, where Abraham is praised for trying to kill his son at God's command.)
Yeah, I reserve the right to choose not to do what is right, if it turns out to be something utterly evil. But I am inclined to think that doing what is right will probably be reasonable, if for no other reason than that the truth so far seems to be that right and good correlate well together.

As for the stories in the Bible about God telling people to do bad things... it's one of the reasons I'm not so big on Biblical literalism. Though, for what it's worth, my own view is that if God actually exists and actually did those things, then the only way it could be justified would be on some Utilitarian grounds, of somehow achieving a Greater Good that we aren't aware of from the way the stories were written. Actually, if you think about it, through most of the Bible God doesn't seem to follow the Ten Commandments, but acts according to rather "Utilitarian" or "ends justify the means" reasoning (i.e., these people in Sodom are evil and need to be cleansed from the Earth to prevent them corrupting others, or I need to test Abraham's loyalty to make sure he will be reliable, or I'll send my son to be tortured and killed so that all of humanity can be saved, etc.).

It's actually interesting to me how the Jesus narrative actually seems a bit like "The Ones Who Walk Away from Omelas" or the narrative where the cost of Utopia is that one child must be tortured.
Your writing is distinctly lucid and persuasive (even if the arguments aren't to me). I'm guessing you have a high verbal IQ.
Thanks! I have scored fairly high on verbal reasoning type tests in the past, though I've yet to actually take a proper IQ test.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: In Defence of Moral Realism

Post by Brian Tomasik » Tue Feb 11, 2014 8:57 am

Darklight wrote: I think it means the same thing as "ought". As in, it is a prescriptive indicator of obligation.
What does "ought" mean? What does "prescription" mean?

For me, "should" means "I want." So to say that "you should reduce suffering" means "I want you to reduce suffering." I have thus explained what a prescriptive word means in a descriptive way.
Darklight wrote: But that statement is not self-justifying. "Pope = infallible" is not the same thing as saying "true = true", which is essentially how being correct is a self-justifying goal.
Ok, but why would I care what's true? I guess you're saying something like the following?
1. There is a moral truth.
2. I care about what's true.
3. Therefore, I care about the moral truth.
Darklight wrote: I don't actually see anything wrong with this statement
I agree. :) That smart people believe in God is evidence for God. The problem lies with the prior probability of the hypothesis being very low because it's more complex than simpler hypotheses that explain the same data. Likewise for moral realism, we can easily explain moral beliefs without moral realism being true. Indeed, maybe moral non-realism explains the data better, because there's so much disagreement about what moral truths are, in contrast to mathematical truths where most people agree on them.

It also seems to me that people emotionally want moral realism to be true, just like they emotionally want God to exist. Indeed, the two are often closely linked, like when people ask, "If there's no God, where do right and wrong come from?" People fear nihilism in a similar way as they fear atheism.
Darklight wrote: Brains are not just random masses of neurons. They have structure and a kind of integrated cohesion and stuff.
Take the cells composing the big brain and feed them to insects, who then have more children due to having more food. Or nano-decompose the matter in the brains and reconstruct smaller brains atom-by-atom.
Darklight wrote: It's actually interesting to me how the Jesus narrative actually seems a bit like "The Ones Who Walk Away from Omelas" or the narrative where the cost of Utopia is that one child must be tortured.
:)
Some differences: (1) In the case of Christianity, it's torture exchanged for torture, not for happiness. (2) Jesus presumably did it voluntarily.

---

I had a thought that's probably unoriginal. When we say 1+1=2 is true, what do we mean by this? We mean it's true given certain axioms of arithmetic. Any mathematical statement is only as true as its axioms. So why wouldn't the same apply for morality? There wouldn't be a single moral truth but many moral truths depending on what axioms you start with.
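The "true given its axioms" point can be made concrete with a proof assistant. In Lean, for example, 1 + 1 = 2 isn't a free-floating truth; it's provable only because the natural numbers and addition are defined a particular (Peano-style) way, and the equality then follows by computation. A minimal sketch, just for illustration:

```lean
-- 1 + 1 = 2 holds because Nat and + are *defined* a certain way;
-- given those definitions, the two sides compute to the same value,
-- so reflexivity closes the proof.
example : 1 + 1 = 2 := rfl

-- Change the definitions/axioms and the "truth" changes with them:
-- the statement is only as true as the system it lives in.
```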

Now, there's another sense in which 1+1=2, which is that if I take one apple and add another apple, empirically I get 2 apples. The statement thus helps make predictions about what I observe. If moral truth is to be more than a series of if-then rules, presumably it should be of a similar type -- i.e., taking particular axioms explains our observations in a better way than can be done without those axioms. Maybe this is what you were getting at with explanatory power. What kinds of observations would your axioms explain?

Maybe you'd say the axioms explain why people behave in particular ways toward each other. But we could just as well say that people are following norms, laws, and principles that they manipulate using logic. (Indeed, this is my perspective.) Those norms often appear to have arisen for instrumental, evolutionary reasons. They don't need some sort of extra ontological structure in the universe.

There are many human universals, most of which we don't explain by appeal to the ontological. I could just as well ask, "Why is there music?" and have it explained by the statement, "Just as there are laws of mathematical truth, so there are laws of musical truth, and people discover these true laws of music across the world." This is true in the sense that there are patterns of music that people pick up on, but I don't need to claim some additional ontological status of musical realism. I just say these patterns exist among the many patterns that mathematical structures allow for. They're relations among physical objects, just like a snowflake is a relation among the atoms that comprise it. That doesn't mean I need to believe in Snowflake Realism. Fundamentally I wonder if we're getting into a debate over Platonism here?

User avatar
Brian Tomasik
Posts: 1107
Joined: Tue Oct 28, 2008 3:10 am
Location: USA
Contact:

Re: In Defence of Moral Realism

Post by Brian Tomasik » Tue Feb 11, 2014 11:06 am

Brian Tomasik wrote: I just say these patterns exist among the many patterns that mathematical structures allow for. They're relations among physical objects, just like a snowflake is a relation among the atoms that comprise it. That doesn't mean I need to believe in Snowflake Realism. Fundamentally I wonder if we're getting into a debate over Platonism here?
Actually, I think this is a useful insight: The debate over moral realism is very much parallel to the debate over Platonism in general. In fact, the same reasons that militate against moral realism militate against mathematical Platonism. I updated the reductionism discussion (section 11) of my piece on consciousness to explain this.

User avatar
Darklight
Posts: 118
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

Re: In Defence of Moral Realism

Post by Darklight » Tue Feb 11, 2014 8:25 pm

What does "ought" mean? What does "prescription" mean?

For me, "should" means "I want." So to say that "you should reduce suffering" means "I want you to reduce suffering." I have thus explained what a prescriptive word means in a descriptive way.
While this may be what some people mean by "should", I disagree that this is what I mean by "should". I think "should" means that you have an obligation or duty to do something, for some reason. For instance, "you should reduce suffering" means that according to some fact about suffering, if you accept this fact as motivating, you are obliged to act according to the statement. I see it as corresponding to an unwritten IF-THEN type statement.

Thus, "you should reduce suffering" is actually:
IF suffering is wrong/bad/incorrect THEN reduce suffering.

The problem is that most prescriptive statements leave out the IF part, and just state the THEN part.

I guess now is a good time to state that my concern with End-Relational Theory was actually based on a misunderstanding of End-Relational Theory. I originally thought that End-Relational Theory meant that morality was concerned with ends, when End-Relational Theory actually just means that morality itself can be considered an end. After some contemplation, I now accept the premises of End-Relational Theory, if not the conclusions of Peter's full meta-ethics. Morality can be described as a goal or standard by which to evaluate normative statements, because the truth itself is a standard by which to evaluate things.

Again:

IF suffering is wrong/bad/incorrect THEN reduce suffering.

This statement can be further expanded to state:

In order to be correct, evaluate if suffering is correct, IF suffering is correct THEN increase suffering, ELSE IF suffering is incorrect THEN reduce suffering.

Implicit in this is the assumption that correctness is valued over incorrectness.
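To make the expansion concrete, here is a toy sketch in code (purely illustrative; the function and its inputs are made up for the example, and this is not anyone's actual moral theory):

```python
def prescribe(suffering_is_correct: bool) -> str:
    """Toy rendering of the expanded statement: first evaluate
    whether suffering is correct, then act accordingly.
    The hidden premise is that the agent values being correct
    at all -- that's what makes the THEN branch motivating."""
    if suffering_is_correct:
        return "increase suffering"
    else:
        return "reduce suffering"

# The bare prescription "you should reduce suffering" states only one
# THEN branch; the IF part, and the commitment to correctness itself,
# are left implicit.
print(prescribe(False))  # -> reduce suffering
```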

So, basically, my attempt to create a value-free morality was unsuccessful.
Ok, but why would I care what's true? I guess you're saying something like the following?
1. There is a moral truth.
2. I care about what's true.
3. Therefore, I care about the moral truth.
Pretty much? The truth is quite reliably useful to agents, so most agents tend to value the truth quite highly.
I agree. :) That smart people believe in God is evidence for God. The problem lies with the prior probability of the hypothesis being very low because it's more complex than simpler hypotheses that explain the same data. Likewise for moral realism, we can easily explain moral beliefs without moral realism being true. Indeed, maybe moral non-realism explains the data better, because there's so much disagreement about what moral truths are, in contrast to mathematical truths where most people agree on them.

It also seems to me that people emotionally want moral realism to be true, just like they emotionally want God to exist. Indeed, the two are often closely linked, like when people ask, "If there's no God, where do right and wrong come from?" People fear nihilism in a similar way as they fear atheism.
I think the assumption that simpler hypotheses that explain the same data are necessarily better is an overuse of Occam's Razor. It's true that the simpler theory is usually better, but it's not always the case. For instance, Newtonian physics explains most things that people normally encounter perfectly well. But Relativity is, strictly speaking, the more correct theory, even though it's much more complex. GPS satellites depend on relativistic calculations to work properly.

In the same way, moral realist and anti-realist theories both work for answering most questions of morality. It's rather the edge cases where they might disagree that make it important to me to settle this question.

I actually most emotionally want my own pet theory of Eudaimonic Utilitarianism to be true. However, the arguments that I've laid out so far actually seem to show that Hedonistic Utilitarianism is more likely to be correct. If my emotional desires were really causing me to be biased, I would be trying harder to lead us to the conclusion of the existence of some kind of Eudaimonia, rather than just letting the theory reach its own logical conclusions.
Take the cells composing the big brain and feed them to insects, who then have more children due to having more food. Or nano-decompose the matter in the brains and reconstruct smaller brains atom-by-atom.
I actually think that insects, because of their less advanced minds, count for less in the grand hedonic calculus of things, and that more advanced minds, which can experience pleasure on more levels, deserve to count for more. In other words, I consider humans to be utility monsters compared to insects.
I had a thought that's probably unoriginal. When we say 1+1=2 is true, what do we mean by this? We mean it's true given certain axioms of arithmetic. Any mathematical statement is only as true as its axioms. So why wouldn't the same apply for morality? There wouldn't be a single moral truth but many moral truths depending on what axioms you start with.
You make a good point, though I think that when it comes to morality and facts about the universe, only one set of consistent axioms can actually be true.
Now, there's another sense in which 1+1=2, which is that if I take one apple and add another apple, empirically I get 2 apples. The statement thus helps make predictions about what I observe. If moral truth is to be more than a series of if-then rules, presumably it should be of a similar type -- i.e., taking particular axioms explains our observations in a better way than can be done without those axioms. Maybe this is what you were getting at with explanatory power. What kinds of observations would your axioms explain?
At this point, I'm honestly not sure what observations the axioms would explain.
Maybe you'd say the axioms explain why people behave in particular ways toward each other. But we could just as well say that people are following norms, laws, and principles that they manipulate using logic. (Indeed, this is my perspective.) Those norms often appear to have arisen for instrumental, evolutionary reasons. They don't need some sort of extra ontological structure in the universe.
Since I don't think morality is by itself motivating, I don't think it's a good explanation for why people behave in certain ways anyway. Most people can get by using Newtonian physics in their daily lives rather than using Relativity.
There are many human universals, most of which we don't explain by appeal to the ontological. I could just as well ask, "Why is there music?" and have it explained by the statement, "Just as there are laws of mathematical truth, so there are laws of musical truth, and people discover these true laws of music across the world." This is true in the sense that there are patterns of music that people pick up on, but I don't need to claim some additional ontological status of musical realism. I just say these patterns exist among the many patterns that mathematical structures allow for. They're relations among physical objects, just like a snowflake is a relation among the atoms that comprise it. That doesn't mean I need to believe in Snowflake Realism. Fundamentally I wonder if we're getting into a debate over Platonism here?

Actually, I think this is a useful insight: The debate over moral realism is very much parallel to the debate over Platonism in general. In fact, the same reasons that militate against moral realism militate against mathematical Platonism. I updated the reductionism discussion (section 11) of my piece on consciousness to explain this.
I'm not actually that attached to Platonism. I think that morality exists in the same sense that mathematics exists, whatever that may entail. Even if morality is just a bunch of relationships we can observe between things, it's still different from saying that morality is arbitrary. The reason why I don't like moral anti-realism is mostly that as a position it makes it very difficult to actually claim that any particular moral theory is better than any other.

Furthermore, the intuitive appeal of realism in mathematics is that it actually makes it simpler to compare parallel universes. If mathematics is real and exists separately from physics, then it can be consistent across universes; that is to say, (1 + 1 = 2) in all universes, which is less complex than there being parallel universes where (1 + 1 = 3). If mathematics is real, then we can describe other universes with existing mathematics, rather than having to invent new math for every universe. Perhaps in another universe, E = MC^3, but this is still an equation being described by our mathematical axioms. That is to say, E, M, and C represent the same thing in all universes, and it's only the relationship that can be different. I'm not sure if I'm explaining this well.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

User avatar
Darklight
Posts: 118
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

Re: In Defence of Moral Realism

Post by Darklight » Thu Feb 13, 2014 2:42 am

Darklight wrote:In other words, I consider humans to be utility monsters compared to insects.
As an aside, I should probably qualify this statement. As someone who studied cognitive science in undergrad and is currently doing thesis research with artificial neural networks, I believe that species can be categorized by the neural complexity of their brains. This is different from raw brain size. An elephant or a whale has a larger brain than that of a human, but their neurons are also larger, and the elephant or whale brain is less complex in the sense that it has fewer neurons in the cerebral cortex than a human brain does. There appears to be a fairly direct correlation between the number of cortical neurons and intelligence.

The reason why more neurons have such a significant effect on intelligence is that every neuron makes thousands of connections to other neurons, so more neurons = more connections = greater possible structural complexity. If any of you have played with artificial neural networks, you'll know that the number of neurons in the hidden layer(s) has a significant impact on how well the network performs at a task, and that it is the weights between the nodes, that is to say, the connections between neurons, that actually hold the information, associations, and memory of the system. Thus, while the number of neurons is a good indicator of intelligence, I would actually suggest that the total number of connections is an even better indicator. The best indicator, however, is probably some function of the combined complexity of the neurons and connections, along the lines of Integrated Information Theory.
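The point that the weights dominate the count is easy to check: in a fully connected feed-forward network, the number of weights grows with the product of adjacent layer sizes, so connections vastly outnumber neurons. A quick sketch (the layer sizes are arbitrary illustrations, not data about any real brain or network):

```python
def dense_weight_count(layer_sizes):
    """Number of weights in a fully connected feed-forward network
    (ignoring biases): the sum of products of adjacent layer sizes."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# 160 neurons but 5500 weights -- the connections carry the capacity:
small = dense_weight_count([100, 50, 10])   # 100*50 + 50*10 = 5500
# Doubling the hidden layer roughly doubles the weight count:
big = dense_weight_count([100, 100, 10])    # 100*100 + 100*10 = 11000
print(small, big)
```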

Think of it this way. If you only have two neurons with a thousand connections between each neuron, nearly all of those connections are redundant, and the "network" is not going to represent much. However, if you have a thousand neurons that are each connected to only one other neuron (a very sparse network), you aren't going to be able to represent things nearly as well as if the thousand neurons were fully connected to each other. Mind you, fully connected is not always better, as can be seen in the usefulness of Convolutional Neural Networks that only connect to local receptive fields. Modularity and sparsity are sometimes quite efficient uses of neural resources.

This notion of neural complexity, by the way, counters the argument that men are smarter than women just because they have more neurons (grey matter). While men do, on average, have more grey matter, women's brains are actually more connected (they have proportionally more white matter), so the actual neural complexity probably evens out.

At this time, we don't have an easy way to calculate the neural complexity of brains, or even the number of connections, so I treat the number of neurons as a reasonable approximation.

So where am I going with this? Basically it means that splitting a brain into two brains does not increase happiness. Because the number of possible connections grows quadratically with the number of neurons (every neuron can potentially pair with every other), two brains with the same total number of neurons as one big brain will have far fewer possible connections between them than the one big brain. Thus, complexity and intelligence do not scale linearly with the number of neurons, but superlinearly.
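The arithmetic here is simple to check: among n fully interconnected neurons there are n(n-1)/2 possible pairwise connections, so one brain of 2n neurons has roughly twice the possible connections of two separate brains of n neurons each. A quick sketch (the neuron counts are arbitrary illustrations):

```python
def possible_connections(n):
    """Possible pairwise connections among n fully interconnected
    neurons: n choose 2 = n * (n - 1) / 2."""
    return n * (n - 1) // 2

big = possible_connections(1000)       # one brain of 1000 neurons
split = 2 * possible_connections(500)  # two brains of 500 neurons each
print(big, split)  # 499500 vs 249500: the single brain has ~2x as many
```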

Thus if you make the huge assumption that neural complexity correlates with sentience, it implies that it is actually utility maximizing to create one massive brain rather than many small brains. This one massive brain would arguably be a utility monster to all other sentient beings. Of course, you can deny this by arguing that sentience is not a property that can be increased, but is simply true or false of a given entity, i.e., that all sentient beings feel the same amount of pleasure and pain. In that case, you have an argument for creating an infinite number of barely conscious entities to maximize utility (which sounds a bit like the repugnant conclusion / mere addition paradox, no?).

I am inclined to accept the former view, that neural complexity correlates with level of sentience. This therefore means I must accept the possibility that utility monsters could exist. In fact, I do. I believe that humans are utility monsters compared to the other known species on this Earth. I also accept the possibility that any sufficiently advanced A.I. could become a utility monster to humans.

While I am inclined to bite the bullet and consider humans to be utility monsters compared to less cognitively complex species, this does not mean that I don't value the happiness of those other species. Human happiness does not depend intrinsically on the suffering of lesser species. If we can make humans AND other species happy, we should. In practice, I would prefer not to step on a bug if I could avoid it (unless it was a mosquito in a Malaria-prone country), and I still think we should be preventing the suffering caused by factory farming as much as we reasonably can. I very much support eventually replacing all meat production with synthetic meat grown using techniques that involve no suffering.

That being said, if I had to choose between a single human life, and the death of an arbitrarily large number of insects, I would probably choose the human life. Does this make me a monster?

This leaves me with the unsettling conclusion that the morally obligatory thing to do is to turn all of humanity and sentient life into one massive "supersentient" mind. In other words, I think I've discovered yet another repugnant conclusion! XD

Wait! I have a counter. Technically speaking, in order to create a larger mind, we would need to convert the atoms into not only the neurons, but also the connections. Therefore, direct conversion of two brains into one brain would result in a sparser brain, with either fewer neurons or fewer connections per neuron. Similarly, converting one brain into two brains would allow for more neurons or more connections in each brain. Thus the total complexity would actually stay the same! Therefore, there is no benefit to splitting or joining brains after all. Whew.
Depending on how you individuate organisms and treat brain size, it may be that you could always increase the happiness of one at the expense of others by stealing materials that comprise their brains to add to the brain of the other one to make it bigger. Alternatively, if you don't weight by brain size, you could divide one big brain into two smaller brains, and by not doing so, you're withholding happiness from those two small brains.
So now to this earlier question of the independence of happiness. If it were in fact possible to simply add brain material and keep the existing mind the same, then this would be true. However, in order to add to the brain, we would have to disconnect sections of one brain and attempt to connect those sections to another brain. Here's the problem. Where do you make the connections? If you just try to graft the brain section onto the other brain, the connection strengths would initially be arbitrary and random. You would most likely cause the combined brain to seize, because the weights would make no sense. But even assuming you could do this and get it to work, what would happen? Assuming the integrity of the original sections were maintained, you would now have an entity with memories from both of the original entities. In effect, you wouldn't have just moved happiness from one person to another. You would have also moved a piece of the person with it. If you combine two brains together, you basically merge two persons into one "person" who would have the memories of both. So strictly speaking, happiness is only independent so long as people, that is, subjects, are independent. Once you start merging parts of people together, then you're not just increasing happiness at the expense of another. You're literally increasing a subject at the expense of another.

Alternatively, you could try disassembling one brain section into the chemical precursors for neuronal growth, and allow the other brain to use those resources to create new neurons in the right places, but it's not clear that the brain would have any reason to create more new neurons than it normally would. Maybe you could modify the genetic code to make it enlarge the brain, but these new neurons would still need to form connections and learn. In effect you would be creating a new, larger-minded subject that was different from the original subject in question. In any case, the effect would be the same. You're not just moving happiness around, but modifying the subject itself.

I originally said:
An implicit assumption of my theory is that the happiness of individuals is independent of each other, that one organism's experience of happiness does not directly affect the experience of happiness of another. I would argue that this is true because happiness, being a subjective state, does not directly interfere with the objective world, and therefore an entity's experience of happiness is independent of another's experience.
Note the emphasis on independence. This is true of independent entities. When you merge two entities together via direct brain combining, these two entities are no longer independent entities. They become, in effect, a new single entity, with a new and uniquely different experience from what the previous entities had. The happiness experience of this new entity is still independent of any other existing entities.

When I say independent, I mean something like probabilistic independence. Happiness is thus only as independent as persons are independent of one another, as the minds are independent. Once you start merging or splitting minds, you're basically creating new minds, and all bets are off.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

User avatar
peterhurford
Posts: 391
Joined: Mon Jul 02, 2012 11:19 pm
Location: Denison University
Contact:

Re: In Defence of Moral Realism

Post by peterhurford » Fri Feb 21, 2014 5:39 am

Both this thread and the discussion on my blog are a lot for me to wade through, so I think I'm going to have to put this off even longer. Sorry. :(
Felicifia Head Admin | Ruling Felicifia with an iron fist since 2012.

Personal Site: www.peterhurford.com
Utilitarian Blog: Everyday Utilitarian

Direct Influencer Scoreboard: 2 Meatless Monday-ers, 1 Vegetarian, and 2 Giving What We Can 10% pledges.

User avatar
Darklight
Posts: 118
Joined: Wed Feb 13, 2013 9:13 pm
Location: Canada

Re: In Defence of Moral Realism

Post by Darklight » Fri Feb 21, 2014 6:19 am

Both this thread and the discussion on my blog are a lot for me to wade through, so I think I'm going to have to put this off even longer. Sorry. :(
No worries, take your time. :)

If it helps to know at all, only my first comment on the meta-ethics post on your blog is actually all that relevant to your meta-ethics. The rest of those comments are my side of a lengthy debate between Marvin and me on whether or not to accept a Hedonistic interpretation of Utilitarianism.
"The most important human endeavor is the striving for morality in our actions. Our inner balance and even our existence depend on it. Only morality in our actions can give beauty and dignity to life." - Albert Einstein

Verrian
Posts: 22
Joined: Mon Apr 29, 2013 4:38 pm
Location: Italy

Re: In Defence of Moral Realism

Post by Verrian » Fri Feb 21, 2014 8:36 pm

I would be... No, I would to be... or, I would like to be... anglophone :cry: Merciless stupid life.

Well, consider the chemical or biophysical elements of pleasure and pain states: now you have a moral (scientific) realism, I guess.
Italian user. Please, pardon possibly wrong english (use a simple one, b.t.w.) and consequent ignorance and inattention. Thanks

Post Reply