
Moral Realism

I don't agree that happiness is a goal. I think it's a value judgement applied to outcomes.

The reason why the guy advocating following Jehovah gets into trouble with conflicting imperatives, and the person following happiness doesn't, is because the guy following Jehovah is actually committing to a course of action, and the guy following happiness can simply label any desirable outcome as happiness after the fact. Happiness doesn't tell him how to behave or what to do - it provides almost no guidance whatsoever.

Try this as a thought experiment. A guy works hard all his life, and scrimps and saves and puts himself through countless discomforts and hardships, in order to ensure a prosperous life for his children. He feels good about this. Is he happy?

Now take the same situation, the same feelings, the same hardships etc. Except that the guy doesn't actually have any children - they're a delusion caused by a brain aneurysm when he was a teenager - and his entire motivation in life is actually illusory. He feels exactly the same way. But suddenly this isn't a morally desirable state any more.

The reason why various authors use terms like 'flourishing' is not because they basically mean happy but want to use a different word, but because they know they can't use a person's feelings or mental state as a basis for a moral system. So they use terms which don't mean happiness at all, but boil down to a value judgement of the end state.

So no, morality is not based on an objective fact about happiness, or on objective facts at all. It's based on value judgements. I've not read Sam's book on this subject, but I strongly suspect moral realism just boils down to internalising value judgments and then substituting factual observations for them to disguise the fundamentally subjective nature of the process.
 
Okay, I'm going to try this again, now that my brain is in a different state.

Sure.


Hume covered this, and Sam Harris doesn't really advance the discussion much. Moral intuitions are only debatable when the conditional upon which they are founded is something all parties agree about ahead of time.

If we want people to be happy ...

That's it. What would be the point of any moral system that didn't tend to increase happiness/well-being? No point at all. With a moral system like that, there would be no reason for people to want to be moral.
That doesn't make it "not a moral system", though. It just makes it a moral system that's memetically unfit for the human mind, just as a moral system devised by and for beings with a vastly different evolutionary background than ours would likely fail to appeal to humans. That reminds me, a while ago I read a more carefully worded but equally unconvincing post by Eliezer Yudkowsky where he expressed a position like yours.

Moral claims (though I'm told this isn't a very robust system of morality) tend to fall into two categories. Either you give up some personal happiness in order to achieve a greater increase in group happiness (do not steal), or you give up some current happiness in order to achieve a greater long-term happiness (brush your teeth).
Apparently I was so side-tracked earlier that I failed to notice this:

On second thought, if your second category constitutes moral claims (not "claims about something else") in your eyes, then there's no reason "sociopaths" need be categorically excluded after all. The relatively higher-functioning members of that category can certainly accept the value of deferred gratification. And even in the absence of emotional interdependency (I don't think "sociopaths" are in fact necessarily emotionally independent, but it's not clear to me to what extent that figures into the notion of "sociopathy" from which you're operating), merely being human entails a practical dependence upon the group, whereby, short of claims meant to promote literal self-sacrifice, your first category of claims can overlap with the latter. The happiness of the group may be of intrinsic value to the neurotypical brain which empathizes and identifies with the group, but to a certain extent it retains instrumental value to a more atypical brain, as long as it has sufficient capacity/inclination towards the rational contemplation and pursuit of long-term self-interest.

Moral discussion doesn't involve a jump from is to ought. To have a moral discussion, you start at ought.
Is: "We desire happiness"
Ought: "We ought to find ways to achieve happiness"
It's just the satisfaction of a desire. Why call the satisfaction of that desire morality, and the satisfaction of other desires something else?

It seems almost like a sort of PR maneuver, to capitalize on the name-recognition of the word "morality" (rather than discarding it as an obsolete artifact), like compatibilists with "free will".

On the other hand, maybe you're making a descriptive psychological/sociological/linguistic argument about what people do when they moralize. But then if you were doing that, I think you'd perhaps have to relinquish the more exclusive definition of morality and just say that the phenomenon of moralizing covers all of the things that people do with it, and that this business of human happiness is just the most popular aim of that realm of discussion. And I'm not sure how the idea of moral realism fits into this.

Moral discussion doesn't start by asking, "Should we desire happiness?"
I think we'd be better off if we didn't desire happiness, or anything else, for that matter. I'd rather we be free from this whole carrot-stick prison our brains have set up. So I'd say "no".
 
Unbeatable said:
That doesn't make it "not a moral system", though. It just makes it a moral system that's memetically unfit for the human mind, just as a moral system devised by and for beings with a vastly different evolutionary background than ours would likely fail to appeal to humans. That reminds me, a while ago I read a more carefully worded but equally unconvincing post by Eliezer Yudkowsky where he expressed a position like yours.
I have two questions:
1. Why did Wiploc's post remind you of Yudkowsky's? (i.e., what part of Yudkowsky's post do you find similar)
2. What do you find unconvincing about Yudkowsky's post?
 
I think that, for extreme individualism, sociopaths are as dignified as any other creature, e.g. bat, dog, human, cat, shrimp etc., in that he may have (I suppose) a sense of quality of life, or value in being, in that emotions - however weak - help constitute his life-world. But when morality is construed as social rather than just for one person, the sociopath probably fails the test. He may have "rational attraction to being" or a way to personal welfare for himself, but in the context of interpersonal or human group dynamics he is likely to be a threat to both RAB and welfare. So he may be rational unto himself, in a war of one against all, capable of pursuing self-interest just like any other. But unto anyone else, or the group, he is not as fit for play under normal rules as normal people are.


But which morality is The Morality™, the exclusively individualised or the social? I think that both meet a sufficient condition for morality, in that they revolve around conscious states of people. Yet the choice between the two is not trivial, and far from arbitrary. It is in the interests of most people to respect others, so for most people that is what they ought to do. For most people, even in a so-called "war of one against all", peace and harmony are generally superior. If society and its rules are viewed as a complex adaptive system or a multi-agent system, the sociopath is a type of "freak / deviant agent" (see "agent-based models": http://en.wikipedia.org/wiki/Agent-based_model ) that changes or disturbs the process of emergence in some way.
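To make the agent-based-model analogy concrete, here is a minimal, hypothetical Python sketch (my own toy illustration, not taken from the linked article or anything in this thread): nine agents contribute to a common pot that is multiplied and shared equally, while one "deviant" agent free-rides. The deviant comes out ahead of every cooperator while dragging down what the cooperators receive, which is the "disturbing the emergent outcome" worry in miniature.

def run_round(agents, multiplier=1.6):
    # Each "cooperator" pays 1 into a common pot; the "deviant" pays nothing.
    contributions = [1 if a == "cooperator" else 0 for a in agents]
    pot = sum(contributions) * multiplier  # pot grows, then is split evenly
    share = pot / len(agents)
    # Payoff = equal share of the pot minus whatever the agent paid in.
    return [share - c for c in contributions]

group = ["cooperator"] * 9 + ["deviant"]   # one free-riding agent in the group
payoffs = run_round(group)
print("cooperator payoff:", round(payoffs[0], 2))                                # 0.44
print("deviant payoff:", round(payoffs[-1], 2))                                  # 1.44
print("payoff if all cooperated:", round(run_round(["cooperator"] * 10)[0], 2))  # 0.6

Nothing hangs on the exact numbers; the point is only that whether the group's rules "work" is an emergent property of what kinds of agents are playing them.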

http://www.youtube.com/watch?v=JANTkSa4hmA
 
Turns out I'm a moral realist. I'm reading Sam Harris' The Moral Landscape: How Science Can Determine Human Values. I find it compelling.

Any objections to moral realism?

The philosophical reasons for rejecting moral realism are nearly as strong as the moral reasons for rejecting atheists' attempts to sell people a defined-down simulacrum of the concept.

An empirically adequate world model will contain no value judgments, only ascriptions of value judgments relativised to specific agents within the model. The efficient causes plus initial conditions of hurricanes suffice to capture every hurricane observation, and efficient causes are random with respect to whether gay-day-celebrators deserve it, just as efficiently caused mutations are random with respect to "the good of the organism".

Arif Ahmed once pithily (but accurately) summarized William Craig's argument for realism as amounting to "there must be objective moral truths because deep down you know there are." As much as I admire Sam Harris, I have to say his argument doesn't really rise above this level of brute emotive appeal, even though he dresses it up in culturally fashionable jargon.

It's the oldest trick in the book. It is the lynchpin argument in Plato's Republic. The speaker paints a picture of extreme aesthetic appeal, or extreme aesthetic repulsion, and hopes that the hearer won't notice she is merely giving an emotive, noncognitive response when she assents, "yes, that is the best possible Republic I've ever imagined", or "yes, I recoil from the thought of the worst possible suffering for all". But one's subjective, aesthetic valuations of some set of actual or hypothetical observations are, as I've said, random with respect to empirical adequacy. It is only because our phenomenology involves a projection of our emotive reactions onto our observations that in extreme situations we sometimes forget that the two can always be made to come apart.

Thus, when you reply to Pyramidhead with:

Most of us want people to be happy. You hypothetically don't. That's not a disagreement about facts. The rest of us are discussing morality, how to achieve a higher level of happiness, and you are talking about something else.

you have already internalized your own personal aesthetic preference for happiness for all as definitionally the best possible thing, illicitly dismissing the possibility that another's emotive reaction could be equally logically and empirically legitimate. Hence, the misguided conclusion that he is "not talking about morality". When in reality, you've forgotten that it was you who first relativised morality to a hypothetical rather than categorical imperative, and he is simply pointing out the consequences of this argument form.

The distinctive feature of objective morality is that its normative force applies to people regardless of their desires. Whereas the normative force of your hypothetical imperative to maximise happiness is contingent on others' sharing your values. Therefore, however nice it would be if we all tried to maximise happiness, you have not shown that there is an objective moral obligation to do so. Nor have you or Harris shown that someone who relativises their ultimate concern to something other than happiness has made any specifically empirical, lexical, or logical mistake -- which would have to be the case if morality is objective and two people disagree over some moral truth.
 
staircaseghost said:
The philosophical reasons for rejecting moral realism are nearly as strong as the moral reasons for rejecting atheists' attempts to sell people a defined-down simulacrum of the concept.
That seems to imply that those atheists that use the expression “moral realism” in the way you criticize (like, say, those who describe themselves as “Moral Naturalists”, like David Copp and some other philosophers) are behaving immorally and/or misusing the expression.
Could you please clarify your claim(s), and sketch those reasons?

staircaseghost said:
Arif Ahmed once pithily (but accurately) summarized William Craig's argument for realism as amounting to "there must be objective moral truths because deep down you know there are."
I've seen Craig's argument in support of what he calls “objective moral values and duties”.
After doing my best to decipher his (admittedly, obscure as usual) “explanations” of what he means by that expression, I don't think he means what you call “moral realism” - at least, taking his explanations of what he means in the context of his argument as authoritative regarding what he means, but it's his argument so I would be inclined to do that.
In fact, his appeal to intuitions does not go beyond establishing (if such intuitions are accepted) that 1. moral issues are matters of fact, not matters of opinion, in the usual sense of those expressions, and also that 2. some behaviors are actually morally good, morally wrong, etc.
But that is compatible with a number of other metaethical views, including what many philosophers (see above) call “moral realism”, or “moral objectivism”, or both – i.e., the usage that you reject in the paragraph I quoted earlier.
Only later, when arguing in support of the first premise of his metaethical argument (and specifically, against the compatibility of his second premise with unguided evolution), he presents a couple of hypothetical scenarios (one involving an alien invasion, the other one involving a different evolutionary history on Earth; both similar on the relevant point) and “concludes” that in that case, there would be no objective moral values and duties. However, if one actually considers his explanation about what he means by “objective moral values and duties” - also, in the context of the same defense of his metaethical argument -, the alleged “conclusion” appears to be a non-sequitur.

At any rate, he does not seem to give any arguments – not even an appeal to intuition – in support of anything beyond 1. and 2. above, regardless of what he meant.

staircaseghost said:
The distinctive feature of objective morality is that its normative force applies to people regardless of their desires. Whereas the normative force of your hypothetical imperative to maximise happiness is contingent on others' sharing your values.
For the sake of clarifying the matter before miscommunication takes more time and lines than the substance of the debate (in case there will be a debate, but that happened several times in the past, so it would be unsurprising if it happened again), I would like to ask a few questions:

a. When you say “people”, given the context of your previous posts, I reckon that you're probably using the word in a way that is broad enough to encompass any aliens from another planet who are at least capable of linguistic communication of a complexity similar to that of human language. Is that interpretation correct?
b. When you say “regardless of their desires”, similarly given context I reckon you're probably using “desire” in a broad manner as well, including any set of preferences – or values, or whatever one calls them - an agent may have; for example, it would include what in a narrower sense one calls “desire” (like a desire to have sex, or to eat a banana), but also aversions (like, say, an aversion to being wounded, to endure pain, or to snakes, etc.), even if in a narrow sense that is not called a “desire”. Is that interpretation correct?
c. When you say that the “normative force” of morality applies to a person, do you at least imply that it would be irrational for that person to behave immorally?
d. If so, would you say that it's practically irrational, or irrational in some other sense of “irrational”?
At any rate, I would like to ask for clarification of the “normative force” part.

staircaseghost said:
Therefore, however nice it would be if we all tried to maximise happiness, you have not shown that there is an objective moral obligation to do so.
I don't think Wiploc has shown that every person has a moral obligation to maximize happiness.
However, I do not get the impression that Wiploc was attempting to show that there was an objective moral obligation to do so, in the sense in which you seem to be using the expression “objective moral obligation”.
 
The philosophical reasons for rejecting moral realism are nearly as strong as the moral reasons for rejecting atheists' attempts to sell people a defined-down simulacrum of the concept.

I think you're suggesting that my theory is a lite version of another theory. I'm not familiar with that other theory.




An empirically adequate world model will contain no value judgments, only ascriptions of value judgments relativised to specific agents within the model.

I read that over and over, but can't make it out.




The efficient causes plus initial conditions of hurricanes suffice to capture every hurricane observation, and efficient causes are random with respect to whether gay-day-celebrators deserve it, just as efficiently caused mutations are random with respect to "the good of the organism".

You don't say.




Arif Ahmed once pithily (but accurately) summarized William Craig's argument for realism as amounting to "there must be objective moral truths because deep down you know there are."

I see it as a tactical move. He says that if moral realism is true, then god exists. His adversaries, therefore, are effectively invited to deny moral realism. If they do so, then they lose the debate right there, because Craig's packed audiences can have no sympathy for someone who doesn't believe in right and wrong.

If they don't make that mistake, then Craig pretends that his god-based morality has justification and force that other moralities lack. His opponents usually take that bait, and try to justify their morality. Craig is usually then free to attack their justification without having to defend his own, without even fielding his own.

The proper response, then, is to say, "You claim your morality is better justified and more forceful than mine? Go ahead, then, and demonstrate your case. I'll justify my morality every bit as well as you justify yours. I'll show that mine has just as much force as you show that yours has."

Then he'd have an actual debate.




...
Thus, when you reply to Pyramidhead with:

Most of us want people to be happy. You hypothetically don't. That's not a disagreement about facts. The rest of us are discussing morality, how to achieve a higher level of happiness, and you are talking about something else.

you have already internalized your own personal aesthetic preference for happiness for all as definitionally the best possible thing, illicitly dismissing the possibility that another's emotive reaction could be equally logically and empirically legitimate.

I'm inviting someone to offer another standard. If Joe says, "No, honesty is more important than happiness," then we'll have a discussion, seeing whose moral intuitions are more consistent with his professed standard.




Hence, the misguided conclusion that he is "not talking about morality". When in reality, you've forgotten that it was you who first relativised morality to a hypothetical rather than categorical imperative, and he is simply pointing out the consequences of this argument form.

Again, that seems too gibberishy to squeeze meaning out of.




The distinctive feature of objective morality is that its normative force applies to people regardless of their desires.

Note that I didn't introduce the word "objective" into this discussion. It is usually used as a way of equivocating. I'm not saying that you propose to equivocate.

That said, I don't see how my system is less objective than any other system.




Whereas the normative force of your hypothetical imperative to maximise happiness is contingent on others' sharing your values.

If you pass a law against murder, is it contingent on others sharing your desire not to murder?

I'm saying that increasing happiness is good, and decreasing it is bad. I happily admit that sociopaths aren't interested in such a moral system. But that doesn't mean I exempt them from the requirement to not torture babies for fun.

If a god made an objective moral rule that everybody always has to punch the person to their left, I would repudiate that god-based morality the same as a sociopath repudiates my happiness-based morality. God-based morality is no more objective than mine. It has no more normative force than mine.

Less, even, I believe. Because, and this is the key, everybody except the sociopath has interest in increasing happiness. We all have moral intuitions that lean in this direction. It seems to me that virtue-based morality has to be ultimately based on increasing happiness. Else why go along with it? Why try to be moral?



Therefore, however nice it would be if we all tried to maximise happiness, you have not shown that there is an objective moral obligation to do so.

I don't know why you stick "objective" into an otherwise perfectly good sentence.




Nor have you or Harris shown that someone who relativises their ultimate concern to something other than happiness has made any specifically empirical, lexical, or logical mistake -- which would have to be the case if morality is objective and two people disagree over some moral truth.

I don't know what you mean by either "objective" or "relativises."
 
Therefore, however nice it would be if we all tried to maximise happiness, you have not shown that there is an objective moral obligation to do so.

I don't know why you stick "objective" into an otherwise perfectly good sentence.
I'd guess it's because Staircaseghost assumes you're defending moral realism.

From Moral Realism:

Moral realism is a non-nihilist form of cognitivism. In summary, it claims:

1. Ethical sentences express propositions.
2. Some such propositions are true.
3. Those propositions are made true by objective features of the world, independent of subjective opinion.
 
That seems to imply that those atheists that use the expression “moral realism” in the way you criticize (like, say, those who describe themselves as “Moral Naturalists”, like David Copp and some other philosophers) are behaving immorally and/or misusing the expression.


Check those quantifiers. I do not say that all atheist use of the term is worthy of criticism. There is, however, a particular style of advocating for scare-quotes-moral-realism that is unique to atheists which alternates between irritating and pernicious.

<snipping the hermeneutics of my passage quoting someone else's hermeneutical summary of something yet another person said while making a different argument not currently under discussion>

a. When you say “people”, given the context of your previous posts, I reckon that you're probably using the word in a way that is broad enough to encompass any aliens from another planet who are at least capable of linguistic communication of a complexity similar to that of human language. Is that interpretation correct?
b. When you say “regardless of their desires”, similarly given context I reckon you're probably using “desire” in a broad manner as well, including any set of preferences – or values, or whatever one calls them - an agent may have; for example, it would include what in a narrower sense one calls “desire” (like a desire to have sex, or to eat a banana), but also aversions (like, say, an aversion to being wounded, to endure pain, or to snakes, etc.), even if in a narrow sense that is not called a “desire”. Is that interpretation correct?
c. When you say that the “normative force” of morality applies to a person, do you at least imply that it would be irrational for that person to behave immorally?
d. If so, would you say that it's practically irrational, or irrational in some other sense of “irrational”?
At any rate, I would like to ask for clarification of the “normative force” part.

People are moral agents. Desires are emotive dispositions to seek states of affairs out or to avoid them. It is a commonplace for morality to require that we act irrationally. Morality would be a lot less philosophically problematic if all we had to do was think through what would be "most rational" for us to do.

staircaseghost said:
Therefore, however nice it would be if we all tried to maximise happiness, you have not shown that there is an objective moral obligation to do so.
I don't think Wiploc has shown that every person has a moral obligation to maximize happiness.
However, I do not get the impression that Wiploc was attempting to show that there was an objective moral obligation to do so, in the sense in which you seem to be using the expression “objective moral obligation”.

My impression that he was attempting to show the existence of objective moral obligations is grounded in the fact that he started and continued to post in a thread defending a philosophical position asserting that such obligations exist.
 
I think you're suggesting that my theory is a lite version of another theory. I'm not familiar with that other theory.

Typical moves by nontheists claiming to present an objective morality are to either present something objective, but which on closer examination is clearly not morality, or something which is normative, but is not objective. E.g. "that which maximizes Darwinian fitness" and "do whatever fulfills the most and strongest of your desires", respectively. Hence, watered down realism.

Someone who does not hold the maximisation of others' happiness as the greatest good, if moral realism is true and that is indeed the ground of objective morality, would therefore have to be making some kind of mistake of pure logic or objective fact. But since you demurred when challenged by ????? on this, you leave the door open for every person to plug in their own subjective preferences as the unchallengeable normative pole stars of their own behavior. And hey, wait a minute, isn't that exactly the problem moral objectivity was supposed to solve?

An empirically adequate world model will contain no value judgments, only ascriptions of value judgments relativised to specific agents within the model.

I read that over and over, but can't make it out.

"I can account for everything I see without saying that anything happens 'for the best' or 'for the sake of the good', although internal to that account I will say that someone did something because they thought it was the right thing to do, and external to that account I can and do say that certain things are good or desirable."

So according to the general maxim that we ought not believe in things (9/11 conspiracies, ghosts) which are not needed to explain what we see, we ought not believe in any valuer-less values baked in to the fabric of the universe. As we once did, when we thought the planets moved in circles because "circles are the best shape", or that rocks fell down because the elements seek out their best place, or that living things have adaptations "for the good of the organism".

I see it as a tactical move. He says that if moral realism is true, then god exists. His adversaries, therefore, are effectively invited to deny moral realism. If they do so, then they lose the debate right there, because Craig's packed audiences can have no sympathy for someone who doesn't believe in right and wrong.

I agree. It is rhetorical and emotional blackmail, not a substantive philosophical argument. Textbook ad consequentiam fallacy. And he and Harris both appeal to it. And last time I checked, it is fallacious to appeal to fallacies.

I'm inviting someone to offer another standard. If Joe says, "No, honesty is more important than happiness," then we'll have a discussion, seeing whose moral intuitions are more consistent with his professed standard.

I think inviting discussion on different visions of the best way to live is a wonderful thing to do. But in the above post you are explicitly disinviting him from offering another standard. To say "[t]he rest of us are discussing morality... you are talking about something else" is to say that by definition any other standard he proposes is off-topic and irrelevant.

"In-N-Out has the best burgers in town, but I invite you to suggest a better place."

"Actually, I like the secret menu at Fatburger better."

"The rest of us are discussing the best burgers in town, In-N-Out burger. You are talking about something else."

The distinctive feature of objective morality is that its normative force applies to people regardless of their desires.

Note that I didn't introduce the word "objective" into this discussion. It is usually used as a way of equivocating. I'm not saying that you propose to equivocate.

That said, I don't see how my system is less objective than any other system.

If I started a thread called "Communism", and said, "I've just read Marx's Communist Manifesto, and I'm convinced communism is the way to go," I do not need to introduce the phrase "government interference in the economy" because my readers have the right to infer my advocacy of that based on my initial endorsement of the aforementioned well-known philosophical view.

Whereas the normative force of your hypothetical imperative to maximise happiness is contingent on others' sharing your values.

If you pass a law against murder, is it contingent on others sharing your desire not to murder?

Positive, black-letter law is contingent only on the relevant sovereignty following the relevant legislative procedures. That is why homosexuality is, in actual objective fact, against the law in Uganda, regardless of whether you or I think it's a good idea that it's against the law. It has legal force, but not moral force, in Uganda, but not in California.

However, whether I ought to spend hundreds of dollars on a high-tech tennis racket is contingent on whether I give fuck all about tennis. Which I don't, except sometimes vicariously when I'm reading David Foster Wallace.

If moral realism is true and there is an objective obligation to maximise happiness, then Pyramidhead ought to do this even if he holds some other goal as the summum bonum. That is, unlike tennis or Ugandan sex law outside its borders, the normative force of the imperative still applies to him. So, on pain of modus tollens, it is incumbent upon you to explain exactly why your admonition would apply to him.

I'm saying that increasing happiness is good, and decreasing it is bad. I happily admit that sociopaths aren't interested in such a moral system. But that doesn't mean I exempt them from the requirement to not torture babies for fun.

It is good not to exempt them, just as it is proper not to exempt non-sociopaths who don't think pleasure is the most important thing in life. But whence your authority to issue these commands? It is not rationality itself, since ex hypothesi people who have different standards would be irrational to act against them. But then, where is this Transcendent Ugandan Legislature in the non-human, objective world whose badge you flash when you tell others what to do?

Therefore, however nice it would be if we all tried to maximise happiness, you have not shown that there is an objective moral obligation to do so.

I don't know why you stick "objective" into an otherwise perfectly good sentence.

Because communists want to interfere in the economy.

Nor have you or Harris shown that someone who relativises their ultimate concern to something other than happiness has made any specifically empirical, lexical, or logical mistake -- which would have to be the case if morality is objective and two people disagree over some moral truth.

I don't know what you mean by either "objective" or "relativises."

What sporting goods purchases one ought to make are relativised to what sports one wants to play. But an objective moral rule is not relativised to whether someone shares your vision of the good. I am exempt from the normative force of recommendations I purchase a tennis racket, but if there are objective moral rules then I am not exempt from them by virtue of whatever subjective desires I happen to have. Therefore, if there are objective moral rules and two people disagree about which one is correct, at least one of them has made some kind of mistake. I've explained why I think this cannot be an empirical mistake, or a lexical mistake; and no one has argued in this thread that someone like Pyramidhead is committed to a formal logical mistake like "p and not p".

So I'm left wondering exactly what kind of mistake about matters of objective fact the moral realist thinks such people are making.
 
staircaseghost said:
Check those quantifiers. I do not say that all atheist use of the term is worthy of criticism. There is, however, a particular style of advocating for scare-quotes-moral-realism that is unique to atheists which alternates between irritating and pernicious.
I wasn't suggesting that you were suggesting all atheist use of the expression “moral realism” was morally wrong (if that's what you mean by “worthy of criticism”).
But let me clarify: What I got the impression of is that you were saying that all use of the expression “moral realism” in a particular way (i.e., not meeting some demands of your conception of it, though those are not entirely clear to me yet) was either immoral, or a misuse of the term.
Sorry if I got it wrong, but now you do appear to be implying the following things:

a. That the use of “moral realism” in the way it's used by, say, philosophers who describe themselves as “moral naturalists”, like Copp (or at least most of them), or by those who deny that moral naturalists are correct, but classify moral naturalism as a form of moral realism (e.g., Huemer) is at least a misuse of the expression “moral realism”.
b. That only atheists use it like that. (What about other non-theists, like agnostics?)
c. That advocating that realism in that sense is true is either irritating (to you? To some other people? That is unclear), or immoral (or maybe just harmful to some group of people that remain unidentified?).

If I got it wrong, please clarify what you're saying. If I got it right (in at least one of the three points), I would ask for some argument in support of your claims.

staircaseghost said:
me said:
a. When you say “people”, given the context of your previous posts, I reckon that you're probably using the word in a way that is broad enough to encompass any aliens from another planet who are at least capable of linguistic communication of a complexity similar to that of human language. Is that interpretation correct?
b. When you say “regardless of their desires”, similarly given context I reckon you're probably using “desire” in a broad manner as well, including any set of preferences – or values, or whatever one calls them - an agent may have; for example, it would include what in a narrower sense one calls “desire” (like a desire to have sex, or to eat a banana), but also aversions (like, say, an aversion to being wounded, to endure pain, or to snakes, etc.), even if in a narrow sense that is not called a “desire”. Is that interpretation correct?
c. When you say that the “normative force” of morality applies to a person, do you at least imply that it would be irrational for that person to behave immorally?
d. If so, would you say that it's practically irrational, or irrational in some other sense of “irrational”?
At any rate, I would like to ask for clarification of the “normative force” part.
People are moral agents. Desires are emotive dispositions to seek states of affairs out or to avoid them. It is a commonplace for morality to require that we act irrationally. Morality would be a lot less philosophically problematic if all we had to do was think through what would be "most rational" for us to do.
a. Okay, so people are moral agents, so babies aren't people, and I'm not sure whether you make any requirements about aliens – since I don't know whether your conception of “moral realism” requires that aliens like the ones I described be moral agents. So, I would like to ask whether it does.

c. As for “It is a commonplace for morality to require that we act irrationally”, if I get that right you're saying that it's always irrational to act immorally, according to moral realism. Is that interpretation correct? Or are you saying it's always irrational to behave immorally, not just according to moral realism?

So, for example, suppose a brutal psychopathic dictator who does not care about right or wrong decides to continue killing any peaceful opponents for speaking out against him, because he – correctly – reckons that if they manage to convince enough people, he will be ousted, tried, and eventually executed or imprisoned for life, and that that's against his interests.
Clearly, he's behaving immorally.

So, would you say that:
c.1. The dictator is behaving irrationally when he continues to behave in that fashion (whether he was rational in becoming a dictator is another matter; my question is about his actions given that he is in that position already)
c.2. If moral realism (as you understand the expression) were true, then (accepting he's behaving immorally) he's behaving irrationally.
c.3. Neither? Something else?

staircaseghost said:
My impression that he was attempting to show the existence of objective moral obligations is grounded in the fact that he started and continued to post in a thread defending a philosophical position asserting that such obligations exist.
Could you please point me to a post of Wiploc's in which he defends a philosophical position that asserts that objective moral obligations, in the sense in which you use the expression “objective moral obligations” exist?
 
Turns out I'm a moral realist. I'm reading Sam Harris' The Moral Landscape: How Science Can Determine Human Values. I find it compelling.

Any objections to moral realism?
I think that the basic idea that flourishing or welfare is better, so one ought to pursue it, is logically sound. The problem is with the utilitarianism. It is better for the majority or greater number, but not necessarily the better option for all. So if I have a duty to pursue my personal flourishing, and utilitarianism lessens this, then it is bad for me.

Better options are always for someone, or for people; they are connected to sentient agents in "life-worlds".

My point is that if anyone has a duty to pursue their individual welfare, and I think at least one person does, then surely it is that individual himself or herself? If utilitarianism undermines this, the idea of an objective better option (for the so-called "group", i.e. the elect and select) seems to come into theoretical conflict with the more basic foundation of the individual having a duty unto himself in the more fundamental sense. So it's an individualism vs. collectivism style conflict.

Sam specifically says that hungry aliens could justifiably eat humans according to his theory, if their welfare or flourishing were superior enough to warrant this. But this idea of self-sacrifice (taken that far) undermines the evolutionary foundation which created the phenomenon of flourishing and well-being, for the sentient agent, in the first place. Human flourishing is an expression of an evolutionary adaptation, you know that, and to negate one's own flourishing is to exit the ecosystem. IOW you can't promote true flourishing by promoting its negation, or at least this idea leads to certain inconsistencies.

The idea that some people have a duty to give up trying to be well seems a bit counterproductive. Perhaps for the chronically ill with severe pain, yes, because ought implies can, and if you can't - if there's NO chance at all of being well - there's no longer a duty to try. For the underdogs and outsiders, however, this 'spirituality' may cause social pressure to flunk morality altogether, but wrongly, because their so-called "flourishing" is deemed surplus to requirements by people with a bad attitude.

Sounds a bit like:

"Social Darwinists generally argue that the strong should see their wealth and power increase while the weak should see their wealth and power decrease." (Wikipedia).

Also IIRC Sam repeatedly says you can't source morality in evolution. But evolution is precisely the source of the potential for psychosomatic "well-being" in the first place, and rules over it, just as it is the source of having arms and legs and rules over them (i.e. it is a natural regulatory system or power which enables and disables such and such adaptations, physiology, morphology and behaviours in the first place). The book also contains non sequiturs like 'if evolution were the source of morality it would be our duty to breed as much as possible'. However, basic biology teaches that different species breed differently, and it's not always good to maximise numbers, on a cost-benefit analysis.
 
But I'm not a doctor, and don't even play one on TV, so I'm happy to concede that my usage is a layman's usage. Is that what this is about?
In part. This is about me getting side-tracked (by my pet peeve about people misusing terms like "sociopath" to pathologize/dismiss my perspective)

Well that wouldn't be fair. Would you share more about your perspective?



from the fact that my main objections are on a semantic level, to this whole business of you defining moral discussion as being about X and only X.

But that's the point, the key issue, the essence of this thread. I'm not blocking other opinions; I'm inviting discourse.

If you say that you think honesty is more important than happiness, then I can ask whether you would still believe that if we knew that honesty generally has a strong tendency to cause unhappiness. If you say that obeying a god is more important, then I'll ask if that isn't --- at least in part --- because we'd be happier if more of us obeyed god.

I'm trying to start a discussion, not to dismiss other viewpoints.



A more reasonable reaction would've been to just ignore people like you and Harris, as I'm not opposed to you lot trying to figure out how to increase human well-being, but it doesn't seem that we necessarily have anything to offer one another.

Maybe before you go you could offer another viewpoint. And maybe, if you don't want to discuss it yourself, I'll wind up discussing it with someone else.

I can't speak for Unbeatable, but I personally think that suffering, not happiness, is the currency of morality. It explains why most people feel a stronger sense of obligation to avoid harming somebody than to increase their happiness. As a quick example, most people would agree they have a responsibility not to smash my iPod, but very few people would presumably agree they have the same degree of responsibility to buy me the newest model of iPod. Maybe they would have some responsibility for the latter in the context of a prior agreement, or in special cases, but the duty not to smash what I already have (not to harm) is more universal. This is my personal opinion, of course, and I can't prove it's correct. But it certainly leads to a different conclusion than a happiness-maximizing moral philosophy. And to the extent that I believe myself to be a fairly compassionate person, I am probably not a sociopath. I think that's what Unbeatable may have been getting at; that there are ways of thinking about morality that don't boil down to happiness. At least in my perspective, happiness is instrumental, not an end in itself. It's a great feeling that dispels suffering for a period of time, and to that extent it should be promoted, but I put more emphasis on minimizing/preventing suffering, a goal that is not necessarily always served best by promoting happiness.

Reducing someone's suffering increases their happiness.
 
I did a quick scan of the moral realist wikipedia page and I think I get the gist of the philosophy.

All I really know about moral philosophy is that without living things, morality is nonsense. The idea that we should be moral is intrinsically tied to 'bettering' something's life, whether that means increasing happiness, reducing suffering, whatever; the terms are mostly synonymous. The point is you're making someone else's life easier as opposed to harder. I think most can agree that expending our own energy to make another life better is an objectively 'good' thing to do (the whole idea of doing good is tied up in acting outside of our own self-interest), but we can also agree that 'good' means different things in different contexts. So 'doing good' is not always a 'good' thing for our own needs, which is why we aren't always moral.

In a nutshell, that's what morality is: a push and pull between acting in our own interest and the interests of others. Whether or not you believe morality is a real thing, try stealing someone's purse or giving them 40 bucks, and see what happens.
 
StarryNight said:
[...]

Also IIRC Sam repeatedly says you can't source morality in evolution.

[...]

If true, then Sam is a few decades behind in his understanding of evolutionary biology. The whole point to the Selfish Gene is that we are free to be selfless because it is our genes that are selfish. By thinking of evolution in terms of populations instead of individuals, it is more than easy to see how the selfless behavior seen in social species could arise through evolutionary forces, and I would argue that our conception of morality has roots in such instincts.
 
StarryNight said:
[...]

Also IIRC Sam repeatedly says you can't source morality in evolution.

[...]

If true, then Sam is a few decades behind in his understanding of evolutionary biology. The whole point to the Selfish Gene is that we are free to be selfless because it is our genes that are selfish. By thinking of evolution in terms of populations instead of individuals, it is more than easy to see how the selfless behavior seen in social species could arise through evolutionary forces, and I would argue that our conception of morality has roots in such instincts.

I think he may mean normative ethics, but I checked the index of The Moral Landscape and all mentions of evolution are pretty negative in this area, IIRC. I think the obvious point that could have been made better is that welfare, well-being, flourishing etc. are all (whether morally relevant or not) neatly tied into an evolutionary perspective, in that "doing well" is as a rule meant to feel good.

The problem faced is that of going from a naturalised meta-ethics and trying to defend oneself against what are seen as unacceptable consequences for normative ethics, like entailing sociobiology, social Darwinism, eugenics etc. So it's an "if A then B; not B, therefore not A" style argument (denying the consequent - denying these implications, therefore denying naturalism).

But maybe that's a straw man, in that naturalised meta-ethics doesn't strictly imply that? My view is that culture is so plastic that "flourishing" can entail lots of diverse experiences and practices, but if we take a hint from Darwin, as a rule they are going to be tied to adaptive behavior. So we ought to try and adapt (ecologically, psychologically etc.) to the "system" we are thrown into. And that can be the basis for a rational critique of various forms of "flourishing" (whether they are truly adaptive, in that they are responsible in a morally self-aware fashion).

I don't like to equate morality with selflessness too much. I think that stems from religious ethics more than scientific ones.
 
If true, then Sam is a few decades behind in his understanding of evolutionary biology. The whole point to the Selfish Gene is that we are free to be selfless because it is our genes that are selfish. By thinking of evolution in terms of populations instead of individuals, it is more than easy to see how the selfless behavior seen in social species could arise through evolutionary forces, and I would argue that our conception of morality has roots in such instincts.

I think he may mean normative ethics, but I checked the index of The Moral Landscape and all mentions of evolution are pretty negative in this area, IIRC. I think the obvious point that could have been made better is that welfare, well-being, flourishing etc. are all (whether morally relevant or not) neatly tied into an evolutionary perspective, in that "doing well" is as a rule meant to feel good.

The problem faced is that of going from a naturalised meta-ethics and trying to defend oneself against what are seen as unacceptable consequences for normative ethics, like entailing sociobiology, social Darwinism, eugenics etc. So it's an "if A then B; not B, therefore not A" style argument (denying the consequent - denying these implications, therefore denying naturalism).

But maybe that's a straw man, in that naturalised meta-ethics doesn't strictly imply that? My view is that culture is so plastic that "flourishing" can entail lots of diverse experiences and practices, but if we take a hint from Darwin, as a rule they are going to be tied to adaptive behavior. So we ought to try and adapt (ecologically, psychologically etc.) to the "system" we are thrown into. And that can be the basis for a rational critique of various forms of "flourishing" (whether they are truly adaptive, in that they are responsible in a morally self-aware fashion).

Possibly. My main disagreement with Sam H (in this instance) is that the end result is not an ethical system, but a classification of ethics that conflates adaptive with good, and good with happy. Which is fine as far as it goes, but it's hardly an ethical system, and you can't base anything on such wishy-washy foundations. The first thing you'll come up against in any attempt to actually live as a moral realist is an instance where doing the right thing doesn't make someone happy (say, trying to get them off drugs), and then you realise that good isn't the same as happy, and that adaptive is a value judgement after the fact, and not a course of action.

Moral relativism largely avoids criticism by saying almost nothing useful.
 
Possibly. My main disagreement with Sam H (in this instance) is that the end result is not an ethical system, but a classification of ethics that conflates adaptive with good, and good with happy.
I don't think he argued that entirely; that's more my argument, too.
Which is fine as far as it goes, but it's hardly an ethical system, and you can't base anything on such wishy-washy foundations.
That's true. For me the proof of an ethical system is in its adaptive value, but what this system actually is can't be derived from basic principles. You need psychologists, economists, ecologists, politicians etc. But that doesn't make the foundations untrue; the truth is they're just not the kind of thing someone can derive a complex culture from on their own. A bit like culinary art: you can't derive a recipe from knowing someone's hungry, but the hunger is the foundation all the same, and it helps to know the dietary needs of the species and the individual. What's the alternative, from a scientific perspective?
The first thing you'll come up against in any attempt to actually live as a moral realist is an instance where doing the right thing doesn't make someone happy (say, trying to get them off drugs), and then you realise that good isn't the same as happy, and that adaptive is a value judgement after the fact, and not a course of action.
I don't equate good with happy all the time. Sometimes negatives are apt, like grief, sadness etc., in the right context. But I define these as sub-optimal, in that you can't beat a basically happy, healthy, well-adjusted lifestyle (even if definitions are going to be somewhat variable, just as "good meals" or "appropriate responses" are not defined clearly once and for all).


What is actually adaptive, in such a complex and chaotic world, is a bit of a guessing game, not an exact science. Do I watch the news or read the paper, run or cycle home, etc.? Yet we also know general principles.
 
Reducing someone's suffering increases their happiness.

Not necessarily. I can eliminate the suffering of a terminally ill patient without increasing their happiness.

And anyway, even if that were always the case, promoting happiness does not always reduce suffering. Total happiness can increase exponentially while total suffering remains the same; simply have a lot of babies and make sure they are raised in affluent conditions, while doing nothing to improve the lives of less fortunate people. Suffering and happiness are not opposite ends of a single spectrum, but distinct states with some interconnected properties.
 
Reducing someone's suffering increases their happiness.

Not necessarily. I can eliminate the suffering of a terminally ill patient without increasing their happiness.

And anyway, even if that were always the case, promoting happiness does not always reduce suffering. Total happiness can increase exponentially while total suffering remains the same; simply have a lot of babies and make sure they are raised in affluent conditions, while doing nothing to improve the lives of less fortunate people. Suffering and happiness are not opposite ends of a single spectrum, but distinct states with some interconnected properties.

Let's make happiness more concrete by replacing it with what you really mean: contentment, equilibrium, stability, balance, lack of pain. Happiness arises when those conditions are present. Suffering is basically a lack of balance, equilibrium and stability, plus the presence of pain, so they are indeed at opposite ends of a spectrum. Whether you spare someone pain or give them pleasure, you are in effect putting them in a more positive state.

If you forget sociological concepts and view humans as nothing but burners of energy, this idea ties right into whether we have an excess or a lack of energy. Giving someone energy, or taking it away, correlates with good or bad action.
 