It would help if you could explain clearly and unambiguously what you mean by moral realism.

Turns out I'm a moral realist. I'm reading Sam Harris' The Moral Landscape: How Science Can Determine Human Values. I find it compelling.
Any objections to moral realism?
Criticisms
Several criticisms have been raised against moral realism. The first is that, while realism can explain how to resolve moral conflicts, it does not explain how these conflicts arose in the first place.[16] The moral realist may appeal to basic human psychology, arguing that people possess various motivations that combine in complex ways, or else are simply mistaken about what is objectively right.
Others are critical of moral realism because it postulates the existence of a kind of "moral fact" which is nonmaterial and does not appear to be accessible to the scientific method.[17] Moral truths cannot be observed in the same way as material facts (which are objective), so it seems odd to count them in the same category.[18] However, such an argument could equally be used to deny that psychology is a science; conversely, the acceptance of psychology as a cognitive science vitiates this argument (though that need not indicate any weakness in the argument, since Feynman made the same claim in "Cargo Cult Science," starting from a different hypothesis). One emotivist counterargument (although emotivism is usually non-cognitivist) alleges that "wrong" actions produce measurable results in the form of negative emotional reactions, either within the individual transgressor, within the person or people most directly affected by the act, or within a (preferably wide) consensus of direct or indirect observers.
Another counterargument comes from moral realism's ethical naturalism: understanding morality as a science addresses many of these issues.
Moral intuitions are founded upon a conditional?

Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.
I object to the idea that there is any way to derive a normative statement from a factual one.
Hume covered this, and Sam Harris doesn't really advance the discussion much. Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.
Sure.
Hume covered this, and Sam Harris doesn't really advance the discussion much. Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.
If we want people to be happy ...
That's it. Some people will argue that virtues (honesty, loyalty, etc.) increase happiness. Some will claim that the road to happiness is following orders from an invisible eccentric.
What would be the point of any moral system that didn't tend to increase happiness/well-being? No point at all. With a moral system like that, there would be no reason for people to want to be moral.
Moral claims (though I'm told this isn't a very robust system of morality) tend to fall into two categories. Either you give up some personal happiness in order to achieve a greater increase in group happiness (do not steal), or you give up some current happiness in order to achieve a greater long-term happiness (brush your teeth).
Moral discussion doesn't involve a jump from is to ought. To have a moral discussion, you start at ought. Aircraft design doesn't start by asking, "Should man want to fly?" It already assumes that. The discussion is about how to achieve it. Cooking shows don't open with the question, "Should we want food to taste good and be nutritious?" They already assume that. The shows are about how to achieve it. Moral discussion doesn't start by asking, "Should we desire happiness?" It already assumes that. The discussion is about how to achieve it.
That, increasing happiness, is the realm of moral discussion.
Not everybody agrees with that goal. The exceptions are sociopaths, and we are happy to label them that, and continue with our discussion.
Moral intuitions are founded upon a conditional?

Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.
Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?
Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?
I talk about increasing happiness as a kind of shorthand. Sam Harris talks about increasing well-being. Dan Barker talks about flourishing. Reducing suffering is certainly important; it's on the right road. As is increasing happiness.
Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?
I talk about increasing happiness as a kind of shorthand. Sam Harris talks about increasing well-being. Dan Barker talks about flourishing. Reducing suffering is certainly important; it's on the right road. As is increasing happiness.
To play devil's advocate for a moment: what if I disagree?
You may call me a sociopath and ignore me, but that's different from refuting me with evidence, as is normally the case in disagreements about facts.
As I said, following different starting principles to their logical conclusions may produce radically different moral precepts.
It seems these precepts can only be considered truths if their starting principles are taken for granted.
They're intuitions! They're not founded on statements at all. Do you think a chimp who gets ticked off when he gets a smaller reward for doing a trick than he just saw the chimp next to him get for the same trick is basing that reaction on a statement?!? Or do you think he isn't intuitively feeling it's unfair?

Moral intuitions are founded upon a conditional?
They are founded upon value-laden statements <snip>
No they aren't. Buried in that imperative is the opinion that you should not hurt small children whether you want to cause unnecessary pain or not. I'm not offering you advice, fer chrissakes! I'm telling you if you don't conform to my moral judgment you're being a dick.

When you say "you should not hurt small children," buried in that imperative are a bunch of assumptions that can be imagined as antecedents of a conditional clause. Just off the top of my head, they could be something like:
"If you want to avoid causing pain unless absolutely necessary, then..."
"If you want the future leaders of society to be generally free of emotional hangups, then..."
"If you want to live in a society where as few people as possible suffer from anxiety or depression, then..."
But that is SO not what "you should not hurt small children" means. People don't say it because they have a problem with unnecessary pain. People say it because they feel hyperprotective of children, and if you hurt one they'll feel a strong urge to dish out some unnecessary pain on you. Moral imperatives are categorical. You're trying to reinterpret them as hypothetical because that's something you know how to analyze; but when you do that they aren't moral imperatives any more.

Implicit in each one of these possible antecedents is the empirical statement that posits a correlation between the two variables. For instance, the first antecedent, coupled with the consequent "you should not hurt small children," can be unpacked to represent the empirical statement:
"Hurting small children is likely to result in unnecessary pain." THAT'S the objective part, THAT'S the part we can do Sam Harris' brainwave experiments about, THAT'S the part that can be proven or disproven based on observations.
Not to me. I view increasing happiness as secondary to reducing suffering. These starting points imply materially different moral outcomes. Which one of us is right?
I talk about increasing happiness as a kind of shorthand. Sam Harris talks about increasing well-being. Dan Barker talks about flourishing. Reducing suffering is certainly important; it's on the right road. As is increasing happiness.
To play devil's advocate for a moment: what if I disagree?
About what specifically? About wanting people to be happy?
You may call me a sociopath and ignore me, but that's different from refuting me with evidence, as is normally the case in disagreements about facts.
Most of us want people to be happy. You hypothetically don't. That's not a disagreement about facts. The rest of us are discussing morality, how to achieve a higher level of happiness, and you are talking about something else.
Go back to the aerodynamics example (assuming that happened in this thread): A bunch of us are trying to design a faster, safer, cheaper, more fuel-efficient airplane, and you say, "Eh, I'd rather travel by train," and wander off to do your own thing. No facts are in dispute. No refutation is needed.
As I said, following different starting principles to their logical conclusions may produce radically different moral precepts.
Agreed. For instance, some will disagree with my assumption that morality is about increasing happiness. But their moral intuitions can be challenged, shown to conflict with each other. (Mine too, doubtless.) When you work it out, get down to bedrock, achieve an internally consistent system, it's going to be about increasing happiness.
Yes, that was an opinion.
Here's an example of working through conflicting intuitions. Suppose Joe thinks morality consists of obeying Jehovah, and I think it consists of increasing happiness. We can ask how Joe would feel if he knew that obeying Jehovah would cause universal misery. And we can ask me how I'd feel if I knew that increasing happiness would violate all of Jehovah's commandments. Presumably Joe would be conflicted. (In fact, he's likely to respond, "But that wouldn't happen...") Whereas I wouldn't be conflicted at all. (Who cares about the commandments of an invisible eccentric if they don't make people happy?)
It seems these precepts can only be considered truths if their starting principles are taken for granted.
Your preferring to travel by train doesn't make our lift-to-drag ratio any less true.
They're intuitions! They're not founded on statements at all. Do you think a chimp who gets ticked off when he gets a smaller reward for doing a trick than he just saw the chimp next to him get for the same trick is basing that reaction on a statement?!? Or do you think he isn't intuitively feeling it's unfair?

Moral intuitions are founded upon a conditional?
They are founded upon value-laden statements <snip>
No they aren't. Buried in that imperative is the opinion that you should not hurt small children whether you want to cause unnecessary pain or not. I'm not offering you advice, fer chrissakes! I'm telling you if you don't conform to my moral judgment you're being a dick.

When you say "you should not hurt small children," buried in that imperative are a bunch of assumptions that can be imagined as antecedents of a conditional clause. Just off the top of my head, they could be something like:
"If you want to avoid causing pain unless absolutely necessary, then..."
"If you want the future leaders of society to be generally free of emotional hangups, then..."
"If you want to live in a society where as few people as possible suffer from anxiety or depression, then..."
But that is SO not what "you should not hurt small children" means. People don't say it because they have a problem with unnecessary pain. People say it because they feel hyperprotective of children, and if you hurt one they'll feel a strong urge to dish out some unnecessary pain on you. Moral imperatives are categorical. You're trying to reinterpret them as hypothetical because that's something you know how to analyze; but when you do that they aren't moral imperatives any more.

Implicit in each one of these possible antecedents is the empirical statement that posits a correlation between the two variables. For instance, the first antecedent, coupled with the consequent "you should not hurt small children," can be unpacked to represent the empirical statement:
"Hurting small children is likely to result in unnecessary pain." THAT'S the objective part, THAT'S the part we can do Sam Harris' brainwave experiments about, THAT'S the part that can be proven or disproven based on observations.
In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. I don't disagree that the origins of morality are rooted in survival instincts that have evolved to propagate our genes. I just don't think that's a relevant determinant of what we ought to do. To me, the central defining feature of morality is going beyond our Darwinian predispositions, taking advantage of the ones that serve a shared purpose and subjugating those that are contrary to it. But the important part is that there's a purpose, which requires dialogue, a hashing out of options, etc. (all of which is part of having a society that enables the free exchange of ideas). Morality at its best is indeed advice, particularly when informed by evidence. Again, I don't disagree that there is a strong emotional component to it, or that we evolved to be nice to little kids. I just recognize that we can't stop at that point, because it's not enough to meet the demands of our evolved brains, which are capable of reasoning and planning; our genes are not.
In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. I don't disagree that the origins of morality are rooted in survival instincts that have evolved to propagate our genes. I just don't think that's a relevant determinant of what we ought to do. To me, the central defining feature of morality is going beyond our Darwinian predispositions, taking advantage of the ones that serve a shared purpose and subjugating those that are contrary to it. But the important part is that there's a purpose, which requires dialogue, a hashing out of options, etc. (all of which is part of having a society that enables the free exchange of ideas). Morality at its best is indeed advice, particularly when informed by evidence. Again, I don't disagree that there is a strong emotional component to it, or that we evolved to be nice to little kids. I just recognize that we can't stop at that point, because it's not enough to meet the demands of our evolved brains, which are capable of reasoning and planning; our genes are not.
Unfortunately for your argument, the same set of genes, twin one, can find itself in an entirely different set of circumstances, twin two. Each twin will derive a life philosophy, part of which will be a set of rules, soft or hard, for dealing with conditions. To suggest that the communist-located twin will have a set of rules similar to the libertarian-located twin's strains the imagination. How one resolves desires depends on the situation in which one finds oneself. That holds not just for people located in different places, living different lifestyles, but also for different times within a single lifestyle.
For instance, where a young person resolves primarily his or her sexual desires in the context of a strong glandular environment, an old person resolves primarily his or her desire to continue living, mostly absent any glandular component. Same person, different moralities.
Sure, conditions in the external world influence one's choices within their, let's say, motivational contexts, but they don't determine which set of values one is going to base one's morality upon. That, it seems to me, is driven by internal demands expressed as tendencies and desires suited to one's human condition in life. So if we treat morality as an evolving thing, not getting more complete, just getting more appropriate for the here and now, we should wind up with narrow bands of goods or ought-tos suited to each age. Trying to treat morality as a group thing strains the mind, resulting in inappropriate suggestions for specific individuals at every age.
Then we're definitely talking about different things, because I am sure as hell not talking about survival instincts and gene propagation. I'm talking about moral intuitions; what are you talking about? You appear to have an idée fixe that morality has to be goal-oriented; you're keyword searching to find a clue as to what goal I must think morality is the pursuit of; and you picked survival instincts and gene propagation because I used the keyword "chimp". I said "chimp" not as a codeword for Darwinian evolution but because chimps are living breathing proof that you can have moral intuitions without language, which means moral intuitions are not founded on statements and conditionals.

They're intuitions! They're not founded on statements at all. ... People don't say it because they have a problem with unnecessary pain. People say it because they feel hyperprotective of children, and if you hurt one they'll feel a strong urge to dish out some unnecessary pain on you. Moral imperatives are categorical. You're trying to reinterpret them as hypothetical because that's something you know how to analyze; but when you do that they aren't moral imperatives any more.

They are founded upon value-laden statements <snip>
In this case, contrary to my reply to Wiploc, I think we are truly talking about different things. I don't disagree that the origins of morality are rooted in survival instincts that have evolved to propagate our genes. I just don't think that's a relevant determinant of what we ought to do.
"Nature, Mr. Allnut, is what we are put in this world to rise above." - The African Queen

To me, the central defining feature of morality is going beyond our Darwinian predispositions, taking advantage of the ones that serve a shared purpose and subjugating those that are contrary to it.
Why is that the important part? Why does it need a purpose at all? That sounds like a recipe for a "The end justifies the means" morality.

But the important part is that there's a purpose, which requires dialogue, a hashing out of options, etc.
At its best? At its best by what standard? A moral standard? Is "Don't hurt that kid if you want to avoid unnecessary suffering." morally better than "Don't hurt that kid."?

Morality at its best is indeed advice, particularly when informed by evidence.
I don't disagree; the question isn't whether to stop at that point, but where to go from there. Talking ourselves into believing categorical imperatives are hide-and-seek-playing hypothetical imperatives is a dead end.

Again, I don't disagree that there is a strong emotional component to it, or that we evolved to be nice to little kids. I just recognize that we can't stop at that point, because it's not enough to meet the demands of our evolved brains, which are capable of reasoning, planning; our genes are not.
Can you provide evidence for this claim?

Not everybody agrees with that goal. The exceptions are sociopaths