

First, it's not only that he enjoys it. It's that he does it for fun, so that is the reason (not just a secondary reason).
Second, he deserves punishment, regardless of what he needs - what he needs is food, water, air, etc.
Third, your claim against retributivism is just that: a claim that goes against the ordinary human moral faculty. As is the case when you question any human faculty, the burden is on you. It would be irrational to dismiss our faculties for no good reason - reasons we can only assess by means of some of our faculties, obviously.
Fourth and foremost, this misses the point entirely. The point of the example was not that he deserves punishment (though he does), but that he behaves unethically. That is sufficient to show that Jarhyn's theory is false.



No, that is not a problem. The problem is with your qualifiers. But that the ordinary human moral faculty deems a behavior unethical is in fact sufficient evidence to conclude that it is so, just as is the case with other ordinary human faculties, and barring specific evidence to the contrary.
For that matter, if it seems blue to a normal human visual system under ordinary light conditions, that is pretty much sufficient evidence to reckon that it is blue. It's what one should rationally reckon, barring a lot of counterevidence. The same goes for the verdicts of other human faculties, in this case the moral faculty. It's you who has the burden of showing that his behavior is not unethical.

Again, your qualifiers only complicate matters. The 'really' qualifier does not seem to add anything; it's just an intensifier. As for the others, I already addressed them.


Now, if you were correct and the ordinary human moral faculty were not enough to justify our moral assessments, then nothing would be, and Jarhyn's theory would be unwarranted. The reason is that we do not have any tools for assessing whether a behavior is unethical other than the ordinary human moral faculty - our own, and that of other people - aided of course by other faculties (e.g., to make intuitive probabilistic assessments about the expected consequences of some behavior), but in the end, our moral faculty is the tool for making ethical assessments.

What about moral theories?

None is true. However, even if one were true, such theories can only be properly tested against the judgments of the human moral faculty (or against something already based on it), so even then, we would only be justified in believing one true if its predictions pass the test against the human moral faculty.

Incidentally, something like the above holds for color too: we may have cameras and computers that can detect blue stuff, but we only have them because they have been calibrated using human color vision (or tools already based on it).


ruby sparks said:
And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated.
You seem to have lost track of the conversation. Again, I was showing that Jarhyn's theory was false. In order to test a theory, I just need to compare its predictions with some known facts. So, the extreme examples are pretty adequate.

So what, you say?

So, "'all normal, decent, intelligent humans would think this wrong'", but the theory I am debunking entails it is not morally wrong.


ruby sparks said:
On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.
No, that is not true. It would be irrational to question one of our faculties without good evidence against it - evidence which, of course, we also assess on the basis of our human faculties!

We do not normally do that. For example, we do not tell people who say a traffic light was red that they have to show that the human visual system is a proper tool for figuring out whether something is red. Sure, there are arguments for a color error theory (they fail), but the burden is on the claimants.


ruby sparks said:
In fact, demonstrating that there is even such a thing as moral truth in the first place is a burden you might want to try to lift before you even get on to the other one.
No, not at all. That some behaviors are unethical is obvious by normal human assessments. The burden is on you to show otherwise in the first place.

However, in this context you miss the point again. Jarhyn's theory entails that there is such a thing as moral truth - ethical truth in his terminology. So, in order to argue against it, it is proper to assume there is (else, the theory is false on that account alone). This does not even depend on whether it is proper to reckon in general (i.e., when not arguing against a specific theory) that there is moral truth (it is, but that is really not the point here).


ruby sparks said:
I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn.
Are you serious?
Discussing it with Jarhyn is exactly what I was doing when you jumped in: you jumped on a post in which I was replying to Jarhyn's ethical theory. You took that post out of context. Of course Jarhyn agrees that there are ethical truths (read his posts!), and his theory entails that there are (read his posts!), so it would be proper on my part to assume there is ethical truth in the context of testing his theory even if it were not proper to reckon in general (i.e., in other contexts) that there is ethical truth.


Let me try another way. Suppose that someone claims that God (i.e., an omnipotent, omniscient, morally perfect agent) exists. In order to argue against that claim, it is proper to assume that there is an omniscient, omnipotent agent, and then argue that it is not at all morally perfect. Now, this case is different because there is moral truth, whereas there is no good reason to suspect there is an omniscient, omnipotent agent, but it is not different in the relevant sense, namely that it is okay to use as a hypothesis one or more of the implications of the theory one is criticizing.
 
me said:
c. It is not contradictory to have justice done by the government on those who engage in heinous crimes, but leave minor unethical behaviors out of it, and to the punishment regularly inflicted by humans on one another by means of condemning each other's behavior, or mocking each other, etc.
 
Jarhyn said:
King Charles is a person, in the same fashion as all of "everyone else".

The contradiction exists quite plainly in that special pleading: the fact that King Charles' rights contradict with everyone else's rights; person X does not have equal moral value to person Y for all X and Y.

First, there is no claim that anyone has "moral value", just that King Charles may do as he pleases and everyone else has an ethical duty to obey King Charles I in all things.
Second, even if we add the clause that the existence of King Charles is a better state of affairs, all other things equal, than the existence of any other human person (a way of making sense of the idea that KC has greater moral value than any other person), you have failed to derive a contradiction.
me said:
Jarhyn said:
In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.
That is false. Purely for example, suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.
Do you have any reply?

me said:
Bomb#20 said:
Uh huh. So Gary wants to go to synagogue on Saturday and work on Sunday; Bob has goals that require everyone to work on Saturday and go to church on Sunday. Through social consensus it's agreed to damage Gary's metagoal of satisfying his own religious obligations, because his unilateral goal is mutually exclusive with the deemed-acceptable metagoal of the social consensus, which is to have as many people as possible be saved through knowing Jesus, so Gary's goal gets rejected.

Jarhyn said:
So, Gary's goal does not unilaterally invoke bob
Jarhyn said:
Bob's goal unilaterally invokes gary. There you go, it's already not up for debate with the social consensus. If I can change the name 'jesus' for 'muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (jesus as opposed to Muhammed, neither of which is justifiable against the observable reality; come back to me when you prove jesus and God and all that exist).

Gary wants to pour gasoline on a cat and set it on fire every Saturday, because he has fun watching a fire ball run.
Bob has goals that require that everyone refrain from setting cats on fire for fun, and further require that failing that, police try to arrest people who set cats on fire for fun.

So, Bob has goals that unilaterally invoke Gary and other people. It's already not up for debate. Bob is behaving unethically. Gary is not. This is what your ethical theory predicts. Since this is false, it follows that your ethical theory makes false predictions, so it has been tested and shown to be false (it had already been shown to be false, on other grounds, but there is no harm in showing it again).
Do you have any reply?
 
In this particular case, I was using color only to explain to you what I meant by a 'moral theory', which is analogous to what I would mean in that context by 'color theory'. So, the reasons you give for thinking the analogy is not adequate clearly fail (and if you do not see that, there is nothing I can do).

That is true. To be honest, I did not thoroughly read what you said just above about colour.

But then I did read it, and I think in the main paragraph, you tried to say that your theory was not really a theory? So when you said no moral theories are true, it's not clear to me now whether you included yours or whether you only meant other moral theories. Could you clarify?

In other contexts, I use it for different purposes. And I reject the reasons you give for reasons I gave in our previous exchanges.

Fine, but if I catch you implying that there are facts about morality because there are, or via a comparison with, facts about colour, I'll be onto you. :)
 
ruby sparks said:
But then I did read it, and I think in the main paragraph, you tried to say that your theory was not really a theory? So when you said no moral theories are true, it's not clear to me now whether you included yours or whether you only meant other moral theories. Could you clarify?
It is in a sense a theory (though it's not mine), but what I mean is that it's not one of the theories I was referring to when I said a 'moral theory' in the previous post, just as I would not have classified the view that human color vision is a proper and effective tool for finding color facts as a 'color theory' in the sense described above.

So, call it a theory if you like, I do not mind, but it's not the sort of theory I was talking about when I said moral theories were all false.
 
 
So, call it a theory if you like, I do not mind, but it's not the sort of theory I was talking about when I said moral theories were all false.

It's a moral theory. So when you earlier said no moral theories are true, you excluded the one you are using. I expect that works nicely for you. ;)
 
Obviously, if I had included "my" theory, then I would have said that all moral theories are false, except for that one - since I clearly hold that that one is true.
However, that would not have been adequate. For example, a color theory that says that objects with such-and-such properties are red (where the such-and-such are properties not described in color terms) is tested using our own human color vision. In that context, it's not a color theory to say that human color vision is a proper tool to figure out colors - that's just to point out how you test theories.

But again, not really the point. You count it as a theory? Sure, then that one is correct, though it's not a theory in the same sense the ones I was talking about are theories, so it would be confusing to use it in that manner.

For that matter, I was not talking about metaethical theories I reject (like an error theory), either.
 
Obviously, if I had included "my" theory, then I would have said that all moral theories are false, except for that one - since I clearly hold that that one is true.

Indeed. Thanks for clearing that up. All moral theories are false, except your preferred one. Got it. Luckily, that isn't even slightly dogmatic.
 

Well, no, as I said, that is not a moral theory in the same sense, and it's not mine in the sense that I did not come up with it. But other than that, of course I hold it's a correct theory - else, I would not be defending it - and so obviously those that deny it are false. But I had already explained clearly enough what I meant when I said all moral theories were false. The point is that we need to use our moral faculty; we do not have a theory that works and can be used instead.
 
.... of course I hold it's a correct theory - else, I would not be defending it -, and so obviously those that deny it are false.

Not warranted. Dear me. You of all people should have spotted that. All that follows is that you hold that others are false. And here was me thinking you were keen on doing logic.
 

Obviously, I mean that I hold that. It's implicit. When I argue, I say


Such-and-such is true
and of course I am implying that those views that deny such-and-such are false. (If I were to say 'I believe that such-and-such is true', the matter would be different; what I would be implying about other theories would be a matter of conversational implicature rather than logical entailment.)

Do you actually think I meant to say that if I believe that a theory is true, it follows (rather than that it obviously follows from that view) that those that deny it are false? You keep making the worst possible interpretations, and you keep misunderstanding.

Oh, and by the way, your logic is faulty. From 'Angra Mainyu holds that theory X is true' it does not follow that 'Angra Mainyu holds that theories that deny X are false'. You would be correct in assessing that I hold that, if you made a probabilistic assessment in this context, and it would even be okay to say something like, 'so, you hold that the other theories are false', because that 'so' would not need to be one of logical implication. But it is not true that it follows that I believe the others are false. It is consistent to say that I have no beliefs about the others (for example).
 
.. I hold it's a correct theory...and so obviously those that deny it are false.

You should express yourself more carefully, I think. Your unwarranted certitude is going to keep on slipping out like that if you don't.

Morality is not a settled matter, Angra, and you have not settled it.
 
I am not sure if the is-ought problem is overrated, but it depends what you mean.
Well, what I had in mind was that the is-ought problem isn't a special insight into morality. We face the same issue in every area where humans divide the analog world into digital categories, and we don't let it stop us. How can we get from facts about how many legs something has and how tall it is and what color it is and so forth to a logical conclusion about whether it's a horse? It's not as though children are ever taught formal criteria for horseness and then run down a checklist and consequently conclude an animal with three legs can't be a horse because "quadruped" was in the definition. As children we just saw some horses and heard people call them "horses" and got the general idea. And now when we see a new animal we judge it a horse when the neural networks for horses that grew in our brains fire up. The foundation of thought isn't logic or science; it's pattern matching. And pattern matching is an eminently sensible way to draw conclusions. It's error-prone, true; but it's not as though we have a better alternative. Heck, the only justification we have for logic and science themselves is pattern matching. How do you know (NOT NOT A implies A), other than that it fits a pattern you've seen a thousand times that's never led you wrong?

So if I can't deductively prove you ought not torture people for fun without introducing some premise with "ought" in it that's more dubious than the desired conclusion, so what? I can't deductively prove Wizard was a horse either, without introducing some premise with "horse" in it that's more dubious than the desired conclusion. But that's not sufficient reason to conclude that I don't know whether he was a horse. I met Wizard a dozen times. He ate hay out of my hand. The same pattern recurs in every field of human inquiry. But it's only in morality where for some reason this routine puzzle of human psychology is commonly blown up into an argument for radical antirealism.

Hume made pretty much the same argument about two other philosophical conundrums: inductive reasoning (how do you get from a "was" to a "will"?) and causality (how do you get from a "happened after" to a "was caused by"?). Some philosophers have certainly been known to take those objections seriously, but Hume's skepticism about induction and cause-and-effect never made its way into the public consciousness the way his moral skepticism did. Why is that? Does the you-can't-prove-it's-a-horse argument qualify as a better argument when you make it about morality than when you make it about other areas?

Anyway, that's why I think it's overrated. I hope it helps.

I might say that it can never be got past, although this does not prevent us from coming up with moral theories nonetheless. We pragmatically need to do that, I think, not least because (a) we are stuck with having to deal with our moral intuitions and (b) we must find ways to co-exist, if only in order to survive, which I feel is probably the main driver for what we humans call (rationalise as being) morality, even though the universe is amoral.
Right. But I think the way people normally come up with moral theories is completely wrong-headed. They try to invent some principle from which all moral questions can be answered; then they infer that our moral intuitions are invalid unless the principle agrees with them. It's sort of like if Kepler had dealt with his dissatisfaction with the prevailing Ptolemaic model of the solar system by setting out to invent an explanation accounting for every phenomenon in the universe. If he'd tried that the result wouldn't have been a grand unified theory of quantum gravity; he'd have invented some gobbledygook about nature maximizing the glory of God or something. That's not the way to make progress -- reaching out too far into the unknown is a recipe for "If the observations disagree with the theory they must be discarded." We'd do better to have a little modesty about our abilities, and work toward our general moral theory incrementally, finding limited explanations that handle small pieces of the overall problem, testing those explanations at every step of the way, checking if what they say still makes sense in some changed situation.

Personally, I would say that morality is not either objective or relativist. I would say that that is a false dichotomy, and too simple to reflect the enormous complexities. Does that mean I would say that there are no objective moral facts? No, I don't think I would go as far as that.
Well, that gets us into definitions of objective and relativist. Some people equate "objective" with "universal", or "certain", or "without exceptions", or "human-independent"; but personally, I'd define "objective morality" as meaning at least one moral claim is truth-apt and its truth doesn't depend on any observer's subjective opinion. As for "relativist", relative to what? To my mind the phrase refers to moral claims' truth depending on an observer's subjective opinion or preference. But there are other things truth could be relative to. "You shouldn't kill him." might well be true or false depending on whether the guy wants to be killed; but it seems to me that would not be a case of what is normally meant by "moral relativism". YMMV; in any event it's merely a semantic point. We don't need to agree on what's the best terminology as long as we're able to explain what we mean and translate our arguments into the various alternative moral languages.

There may be, but my caveats would be that (a) there might only be a very few, in clear cut situations (which I think are the minority) and (b) they are only objective in the sense that they are common to all (let's say) normal, properly-functioning humans (temporarily assuming we can define that) and are not objectively independent of our species the way that, for example, the laws of physics are.
I don't see why anything would need to be independent of our species in order to be objective; after all, it's an objective fact what species we are. It's wrong for a man to kill his meat slowly in order to have more fun in the process; it isn't wrong for a cat to do it. Cats and humans are different kinds of animals; our brains are wired differently; a human who thinks like a cat is broken while a cat who thinks like a cat isn't broken.

For example, take Angra's favourite "it is morally wrong to torture people just for the fun of it." All 'normal, properly-functioning' humans might agree with this, but (a) that does not make it independently true and (b) once we move away from such extreme examples, the ground starts to get situationally boggy, not least when we move on to responses (just deserts).
Situationally boggy doesn't bother me. To suppose situational bogginess conflicts with objectivity gets things precisely backwards. Situational morality is objective morality: objectivity is the whole reason it's possible in the first place to discredit a proposed moral rule by identifying a situation where it gives a nutty answer. "Self defense" is only able to situationally refute "Thou shalt not kill" because there's a right answer to whether it was morally permissible for Bruce Willis to use lethal force to stop Alan Rickman from shooting him. Without a right answer independent of opinion, a situation isn't what it takes to make an exception to a rule. All it takes is "Nuh-uh!".

(Of course, in a complicated world full of non-clear-cut situations, one might well worry that there's no end to exceptions, and you can never get a definitive answer to anything because for any possible moral judgment there always might be a situation you haven't thought of where it's wrong. The thing to keep in mind, though, is that this is only a problem for moral theories, not a problem for moral facts. You can always avoid the possibility of exceptions by specifying the situation completely: i.e., by considering a specific case instead of a generalization. "It's wrong to murder the President" might well be too situationally boggy to qualify as a moral fact, true; but "Booth ought not to have murdered Lincoln" can't be, because it isn't situationally boggy at all.)
 

I won’t quibble, because I think I mostly (a) agree that the is-ought problem may be over-rated in the way you describe and (b) agree with what you say about the way we think, and thank you for the interesting insights and parallels.

Right. But I think the way people normally come up with moral theories is completely wrong-headed. They try to invent some principle from which all moral questions can be answered; then they infer that our moral intuitions are invalid unless the principle agrees with them. It's sort of like if Kepler had dealt with his dissatisfaction with the prevailing Ptolemaic model of the solar system by setting out to invent an explanation accounting for every phenomenon in the universe. If he'd tried that the result wouldn't have been a grand unified theory of quantum gravity; he'd have invented some gobbledygook about nature maximizing the glory of God or something. That's not the way to make progress -- reaching out too far into the unknown is a recipe for "If the observations disagree with the theory they must be discarded." We'd do better to have a little modesty about our abilities, and work toward our general moral theory incrementally, finding limited explanations that handle small pieces of the overall problem, testing those explanations at every step of the way, checking if what they say still makes sense in some changed situation.

Ok, so, to me, this resort to human intuitions is what I think is a good feature (in the non-moral sense) of what I am going to call Angra's moral theory.

And while I agree with you that coming up with moral principles that conflict with typical human intuitions is a bit suspect, I'm as suspicious of claims that there are underlying moral truths (for example, entirely on the basis of human intuitions). I'm not a big fan of either, possibly for different reasons.

And I'm particularly wary of expressing certainty in such matters. I personally have not seen an unproblematic moral theory. I read of many, and some seem better than others, that's all. An ex-member of this forum used to say, 'the study of the human mind has not had its Isaac Newton yet' and I often think about that.

Well, that gets us into definitions of objective and relativist. Some people equate "objective" with "universal", or "certain", or "without exceptions", or "human-independent"; but personally, I'd define "objective morality" as meaning at least one moral claim is truth-apt and its truth doesn't depend on any observer's subjective opinion. As for "relativist", relative to what? To my mind the phrase refers to moral claims' truth depending on an observer's subjective opinion or preference. But there are other things truth could be relative to. "You shouldn't kill him." might well be true or false depending on whether the guy wants to be killed; but it seems to me that would not be a case of what is normally meant by "moral relativism". YMMV; in any event it's merely a semantic point. We don't need to agree on what's the best terminology as long as we're able to explain what we mean and translate our arguments into the various alternative moral languages.

To me defining "objective morality" as merely meaning “at least one moral claim is truth-apt and its truth doesn't depend on any observer's subjective opinion” is what I might call a low bar.

For example, and only referring to the 'at least one' component, if a relativist can claim, to a similar extent, that at least one moral claim is relative, where does that leave the argument over whether morality is objective or relative? On a bell curve of some sort? Personally, I suspect that's the sort of thing we're talking about here: something like a bell curve (with the vertical axis being ascending disagreement and the horizontal axis being different types of moral claim). And even at the extremes representing 'objectivity', I think it's a watered-down (perhaps better to say weak) definition of objectivity that's being used.

I don't see why anything would need to be independent of our species in order to be objective; after all, it's an objective fact what species we are. It's wrong for a man to kill his meat slowly in order to have more fun in the process; it isn't wrong for a cat to do it. Cats and humans are different kinds of animals; our brains are wired differently; a human who thinks like a cat is broken while a cat who thinks like a cat isn't broken.

Literally the first dictionary definition google threw up for me for ‘objective’ was “not dependent on the mind for existence; actual". This is what I generally mean by ‘objective’ (and also ‘independent’). YMMV.


Situationally boggy doesn't bother me. To suppose situational bogginess conflicts with objectivity gets things precisely backwards. Situational morality is objective morality: objectivity is the whole reason it's possible in the first place to discredit a proposed moral rule by identifying a situation where it gives a nutty answer. "Self defense" is only able to situationally refute "Thou shalt not kill" because there's a right answer to whether it was morally permissible for Bruce Willis to use lethal force to stop Alan Rickman from shooting him. Without a right answer independent of opinion, a situation isn't what it takes to make an exception to a rule. All it takes is "Nuh-uh!".

(Of course, in a complicated world full of non-clear-cut situations, one might well worry that there's no end to exceptions, and you can never get a definitive answer to anything because for any possible moral judgment there always might be a situation you haven't thought of where it's wrong. The thing to keep in mind, though, is that this is only a problem for moral theories, not a problem for moral facts. You can always avoid the possibility of exceptions by specifying the situation completely: i.e., by considering a specific case instead of a generalization. "It's wrong to murder the President" might well be too situationally boggy to qualify as a moral fact, true; but "Booth ought not to have murdered Lincoln" can't be, because it isn't situationally boggy at all.)

I think somewhere in the middle of that (the bolded part) you seemed to assume that situational morality is objective morality?

I agree that situational variety does not necessarily mean the absence of objectivity. However, what I would say is that I strongly doubt that the situational variegation we see in the real world is underlain by objective (even in the sense you mentioned above) moral facts, to the point that there would be recourse to these 'if only the situation was specified completely'. Consider me very sceptical about that.

There may be at least one 'objective moral fact' (using what I am calling a weak definition of 'objective', and possibly 'fact' too) but if such a fact is not invoked by or involved in a particular situation (e.g. something not involving killing, let alone killing for fun) then why would I assume that a completely specified situation not involving such a fact is going to lead me back to an objective moral fact about that situation (or indeed about the response to it, such as the deserts, something we have been setting aside for a while now)?

Here's an afterthought. It seems to be the case that one very common type of situational difference is 'ingroup or outgroup' (friend or foe, like me or not like me) and that our intuitive sense of right and wrong varies accordingly. So, even if there were, by some definitions, an 'objective moral fact' about doing X to someone, it would often need to be amended, in many cases, to cater for that sort of situational difference. At the very least, I do not think this adds any weight to claims for objective moral facts, and I would say it weakens them. As, by the way, for me, does the specification about supposed 'normal, proper-functioning, rational' humans, which, apart from not even being species-wide, in some ways sounds a bit like 'as all right-thinking people would surely agree...'. As does what I think is using a weak definition of objectivity. As does defining 'fact' as 'what is very widely agreed'. As does the bar being as low as 'at least one'. I suspect the chairs are merely, to some extent, being arranged on the deck of the ship by semantically-inclined passengers who really, really want to know the ship's TrueTM course, and think rearranging the deck chairs is going to help with finding that out. To clarify what I mean there, I think there is a tendency to define things into existence.

As such, I think my bottom line opinion is that there are, in the end, no objective moral facts (I don't mean 'facts about morality'; that humans have moral intuitions is, of course, a fact. I mean that something is actually either morally right or wrong). It's just that we have evolved to think and feel that there are. I know that's quite a strong claim, but if I had to put my neck on the chopping block while being open to having my head chopped off if I'm wrong, that's what I'd say. Imo, the guiding principles are probably the same amoral ones that rule every living thing in the universe. We're not that special. See the cartoon posted by me previously.

Now, even if that's true, we can still (and always have) come up with moral standards, obviously, albeit ones that have shifted around at least somewhat throughout history. And taking human intuitions into account is probably a good idea, up to a point. But as someone might once have said, 'I like pigs, I'm just not sure I could eat a whole hog'. :)

In other words I wouldn't buy the 'moral facts from intuitions' claim wholesale.

And then there's the naturalistic fallacy to watch out for too, or the is-ought issue, or whatever. Even if it is somewhat over-rated.

It is sometimes said of moral relativism that it leads to absurd conclusions, such as that there are no morals or ways to make moral judgements, but I don't see that as absurd. Actually, it's not even the case, there are of course ways to make moral judgements, it's just that under relativism they're not made on the basis of actual moral facts, that's all.

I think I'm a moral relativist much more than I am a moral realist or objectivist. If this leaves me uncertain, I think that's currently the best, most warranted place to be, all things considered.
 
ruby sparks said:
Ok, so, to me, this resort to human intuitions is what I think is a good feature (in the non-moral sense) about what I am going to call angra’s moral theory.
Sigh...call it a moral theory if you like, but I am not the author. It is not my merit to invent it, so not my moral theory. Maybe you could ask B20 about his moral theory. ;)
 

I’m calling it yours for convenience. If you are making the claims it makes then it’s effectively ‘yours’ (your position) in the discussion, imo.

Who invented it by the way?

Would you like to hear mine? I haven’t really thought it through. I’d be winging it. It involves voles.
 
The role of social consensus here is limited to probabilistic outcomes:
The social consensus is impressed by Pascal's Wager. It judges that the risk of Gary going to Hell outweighs the infinitesimal probability that being taught Christianity will make him more dangerous than he is as a Jew.
which assumes things not in evidence. Prove hell. They both have EQUALLY proven beliefs, proven on equal amounts of evidence.
What of it? All probability assessments assume things not in evidence. See Bayes' Theorem -- even when you have evidence you can't derive a probability from it without starting from a "prior probability". If the role of social consensus here is limited to probabilistic outcomes, keeping people from going to Hell is within the authority of social consensus.
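(A minimal sketch, not from the thread: the point that "you can't derive a probability from evidence without starting from a prior" is just Bayes' Theorem made explicit. The numbers below are illustrative only; identical evidence yields very different posteriors under different priors.)

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' Theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior_h
    denominator = numerator + p_e_given_not_h * (1 - prior_h)
    return numerator / denominator

# The same evidence (a 4:1 likelihood ratio in favour of H) assessed
# under two different prior probabilities for H:
print(posterior(0.5, 0.8, 0.2))   # prior 0.5  -> posterior 0.8
print(posterior(0.01, 0.8, 0.2))  # prior 0.01 -> posterior ~0.039
```

The evidence never speaks for itself; the conclusion you draw from it depends on the prior you walked in with, which is the disputed "things not in evidence".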

Pascal's Wager has already fallen to trivial logical argument<snip>
I agree, of course. But judging its merit won't be up to the you and me consensus. It will be up to the social consensus.

No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner.
...you haven't even exhibited a logical contradiction in ethical systems that really are asymmetrical. Here, let's make it as easy for you as it could be. Consider the ethical system "King Charles I may do whatever he pleases; everyone else has an ethical duty to obey King Charles I in all things." Go ahead: derive a logical contradiction from that.

King Charles is a person, in the same fashion as all of "everyone else".

The contradiction exists quite plainly in that special pleading: the fact that King Charles' rights contradict with everyone else's rights; person X does not have equal moral value to person Y for all X and Y.
That's not "a contradiction within it". That's a contradiction within a larger system that includes both the axiom I exhibited and a second axiom added by you: that all persons have equal moral value. Logic boners do not require that no system based on axioms can tolerate contradicting you.

I am under no obligation to accept your axiom, at any rate.
Nobody said you were; you were challenged to back up your claim that your meta-goal theory is based on logic rather than emotional attachment. To back up that claim, you'll need to show the Charles-axiom is self-contradictory. You merely declining to accept the axiom doesn't do the job. Nobody in this day and age accepts it, but that doesn't make it self-contradictory.

I am under an obligation to accept axioms I cannot deny and the other I cannot deny if I wish to be non-contradictory with an axiom no other can deny: that I claim my authority to act on the basis of my own existence (that I, ultimately, have autonomy).
How do you figure no other can deny that axiom? Hobbes denied that axiom. According to Hobbes, you surrendered your autonomy to his employer, King Charles I, when you entered the Social Compact.

Second is that if I claim this autonomy it is equal in value to all others who claim this autonomy.
You don't appear to believe that due to logic. You appear to believe it due to having a symmetry boner.

So, if our autonomies have equal value (axiom 2) and your goal requires a greater value to your autonomy than mine, you have already invoked a contradiction.
A contradiction with your axiom, not a self-contradiction. The rest of us are under no obligation to accept your axiom.

This is the ethical disproof of justification, the point at which an ought becomes qualified as "not ethically justified".
...assuming your axiom is correct. The trouble with axiomatic approaches to ethics is justifying their axioms.

Instrumental and moral oughts are only differing in whether they are symmetrically non-contradictory,
Show your work. Instrumental and moral oughts appear prima facie to be differing in that "But I don't want to reach the other side of the wall" is generally perceived to be a good reason for not doing the thing one supposedly ought.

"When my goal is to get to the other side of the wall in the example but my goal isn't to get to the other side of the wall in the example..."
You are invoking a contradiction against the initial conditions of the example. The point is that the best strategy is contextual to the goal. You are moving the goalposts, quite literally, in asserting a different goal than the one our hypothetical actor had.
I didn't assert a different goal; I exhibited an additional difference between instrumental and moral oughts, thereby rebutting your claim. Even assuming I share your goal of using the least energy to get you to the other side of the wall, suppose the easiest way is for Alice to pick the lock. So I tell her she ought to pick the lock. If she says "But I don't want to make it easy for him; I like watching Jarhyn work at it.", I'd have to admit that's sufficient reason for her not to pick the lock. In contrast, if I tell her "Pick the lock or not as you please; either way you ought to leave without getting to see Jarhyn struggle. You promised daycare you'll pick up your kid by 6:00.", then when she says "But I don't want to pick up my kid yet; she's a pain in the neck.", that's not sufficient reason for her not to go pick up her kid.

(Be that as it may, even if I were wrong about this, don't try to reverse burden-of-proof. It's your claim. Which part of "Show your work." didn't you understand?)

You frequently assert that the metagoal is a specific goal. It is not.

The metagoal represents a SET of goals, namely ALL goals for which value of autonomy of X is accepted as equal to the value of autonomy of Y.
And? Any specific goal you could describe represents a set of goals, where some details are definite and some are left as open variables. You're special-pleading.

Besides, you don't accept all goals for which value of autonomy of X is accepted as equal to the value of autonomy of Y either -- not if X isn't a person. You picked an arbitrary set to be first-class agents and you declare logic dictates that they're the set whose autonomy has to be valued equally. Well, Hobbes did the same thing, only his set of first-class agents was King Charles I. Retributivists could do the same, only our set of first-class agents would be the set of all innocent people. The PETA-folks could do the same, only their set of first-class agents would include animals some arbitrary distance down the evolutionary scale. How do you figure logic would pick out one of those sets and imply it's less arbitrary than the others?

I am using a single example where a goal is assumed to derive a strategy, so that later, when I derive a strategy that describes the metagoal, I can demonstrate an instrumental ought that is universally morally justified without engaging in special pleading.
Where "is universally morally justified" is a phrase that here apparently means "is implied by MY axiom, not by YOUR axiom".

Now, let me get back to your Pascal's Wager <expletive deleted>: first, Gary not going to church does not in any way generate risk for others. It creates exactly the outcome he has consented to on the basis of his own goals: It does not assume his justification based on his existence is superior to the justification of actions others have based on their existences. He has consented to hell if he is wrong AS IS HIS RIGHT,
When we as a community punish a wrongdoer who deserves it, do you feel our case for getting to do that is improved if we say we're punishing him AS IS OUR RIGHT? Or does justifying alleged rights using proof-by-capitalization only work when you do it?

just as the christians consent to hell if Gary happens to be right.
How do you figure? Judaism is a university; Christianity is an insurance company. The Old Testament God is a comparatively well-behaved character. He may be a genocidal maniac, true, but He wouldn't sentence people to eternal damnation just for believing in the wrong religion. It's the leaner, meaner New Testament God who sunk to that level of monstrosity.

Because Gary does not risk THEIR souls even in going to hell, he has a right to do so on the basis of his personal goals ... There are some things the principles of ethics I have laid down do not allow a vote on, namely whether a person's rights are superior to another's. Only on what risk one is allowed to subject another to, and that this risk is purely measured in terms of the impacts on another person's goals, which can even include "going to hell, if I am wrong".
But the social consensus isn't claiming Bob's rights are superior to Gary's. Bob has no more right to skip church than Gary has. And their view of risk is more symmetrical than yours. Your position implies in effect that each person should be asymmetrically focused only on his own risk, as though benefit to himself is his only concern. The social consensus is more unselfish than that. The risk they care about minimizing is everyone's risk, because everyone's soul is equally valuable in God's eyes.

(Moreover, according to the social consensus's theology, you're wrong about whose souls are at risk. If Gary skips church and goes to Hell, the rest of the community could go to Hell too just for letting it happen. As Ezekiel 3 says, "Son of man, I have made you a watchman for the house of Israel. Whenever you hear a word from my mouth, you shall give them warning from me. If I say to the wicked, ‘You shall surely die,’ and you give him no warning, nor speak to warn the wicked from his wicked way, in order to save his life, that wicked person shall die for his iniquity, but his blood I will require at your hand. But if you warn the wicked, and he does not turn from his wickedness, or from his wicked way, he shall die for his iniquity, but you will have delivered your soul." So even if they evaluated the risk your egoistic way instead of their selfless way, your rule authorizes them to shove their religion down Gary's throat.)

And of course in this situation even God himself is measured against ethics. And we could have a merry conversation in which you would probably agree with me that the very idea of hell is unethical, at least within the context of the neo-lamarckian social-technological strategic context.
Indeed so.
 
And while I agree with you that coming up with moral principles that conflict with typical human intuitions is a bit suspect, I'm as suspicious of claims that there are underlying moral truths (for example, entirely on the basis of human intuitions). I'm not a big fan of either, possibly for different reasons.

And I'm particularly wary of expressing certainty in such matters.
Antirealists often equate realism with expression of certainty. That makes no sense. When there's a fact of the matter independent of my mind, it follows that I might be wrong. But how can I be wrong if there's no fact to be wrong about? It's subjectivism that provides certainty. A lot of people think I'm weird for liking Hawaiian pizza, but pineapple is certainly yummy.

I personally have not seen an unproblematic moral theory. I read of many, and some seem better than others, that's all. An ex-member of this forum used to say, 'the study of the human mind has not had its Isaac Newton yet' and I often think about that.
Well put.

... personally, I'd define "objective morality" as meaning at least one moral claim is truth-apt and its truth doesn't depend on any observer's subjective opinion. As for "relativist", relative to what? To my mind the phrase refers to moral claims' truth depending on an observer's subjective opinion or preference. ...

To me defining "objective morality" as merely meaning “at least one moral claim is truth-apt and its truth doesn't depend on any observer's subjective opinion” is what I might call a low bar.

For example, and only referring to the 'at least one' component, if a relativist can claim, to a similar extent, that at least one moral claim is relative, where does that leave the argument over whether morality is objective or relative? On a bell curve of some sort? Personally, I suspect that's the sort of thing we're talking about here.
You say you're suspicious of claims that there are underlying moral truths. Well, nobody's suspicious of claims that there are things that aren't moral truths. So if it really is a bell-curve, with a few objective moral truths at one end, a few purely observer-relative moral judgments at the other end, and a lot of semi-objective moral semi-facts in between, then that requires a far more radical world-view shift for the relativist than for the realist. Compared to getting from zero to one, isn't getting from one to a hundred just a quibble over numbers? It seems to me saying "at least one" draws the line at the point of basic philosophical disagreement.

And even at the extremes representing 'objectivity' I think it's a watered-down (perhaps better to say weak) definition of objectivity that's being used.

I don't see why anything would need to be independent of our species in order to be objective; after all, it's an objective fact what species we are.

Literally the first dictionary definition google threw up for me for ‘objective’ was “not dependent on the mind for existence; actual". This is what I generally mean by ‘objective’ (and also ‘independent’). YMMV.
Well, I don't want to get hung up on this, because it's just labeling and we can perfectly well discuss meta-ethics without using the word "objective"; but keep in mind that lexicographers are as susceptible to human error as the rest of us. They write definitions based on whatever examples come to mind at the moment, and they can't test all their definitions thoroughly when they have a hundred thousand words to define. (I saw a dictionary once that defined "god" as "a male deity", and defined "deity" as "a god or goddess".)

So let's test google's definition. Is whether a person has schizophrenia subjective or objective? Is whether a person knows how to algebraically solve a quadratic equation subjective or objective? Is whether we're having this discussion in English or in Irish subjective or objective? All sorts of matters depend on the mind for existence and yet are objective, going by the way most people typically use the word. So it seems to me google's first definition of "objective" is just a mistake. "Actual" is a better definition. Schizophrenia, closed-form solutions to quadratic equations, and English are all actual things, anybody's contrary opinion notwithstanding.

Situationally boggy doesn't bother me. To suppose situational bogginess conflicts with objectivity gets things precisely backwards. Situational morality is objective morality: objectivity is the whole reason it's possible in the first place to discredit a proposed moral rule by identifying a situation where it gives a nutty answer. "Self defense" is only able to situationally refute "Thou shalt not kill" because there's a right answer to whether it was morally permissible for Bruce Willis to use lethal force to stop Alan Rickman from shooting him. Without a right answer independent of opinion, a situation isn't what it takes to make an exception to a rule. All it takes is "Nuh-uh!".

I think somewhere in the middle of that (the bolded part) you seemed to assume that situational morality is objective morality?
I didn't assume it; I argued for it. If you think I didn't make a good case for it, fine. So what's wrong with my argument? Do you have a better theory for why people keep challenging somebody else's moral claim by offering a situation where it gives the wrong answer, if there's no such thing as a wrong answer?

I agree that situational variety does not necessarily mean the absence of objectivity. However, what I would say is that I strongly doubt that the situational variegation we see in the real world is underlain by objective (even in the sense you mentioned above) moral facts, to the point that there would be recourse to these 'if only the situation was specified completely'. Consider me very sceptical about that.
No worries. I haven't been presenting a positive case for moral facts here; I've just been pointing out errors in Jarhyn's arguments, and then answering your questions about what I meant. If we're going beyond those narrow bounds, this subthread really belongs in M&P.

There may be at least one 'objective moral fact' (using what I am calling a weak definition of 'objective', and possibly 'fact' too) but if such a fact is not invoked by or involved in a particular situation (eg something not involving killing, let alone killing for fun) then why would I assume that a completely specified situation not involving such a fact is going to lead me back to an objective moral fact about that situation...
Do you mean when there's no fact involved that all sane people agree with, one like "killing people for fun is wrong", then why should you extrapolate from a small set of facts that all sane people agree with and infer that it's part of a larger set of facts even though lots of sane people disagree with those?

If that's what you mean, in the first place, I don't think the set that all sane people agree with is a small set. It only seems that way because nobody's talking about the cases where we all agree; disagreement is what causes moral problems to get talked about. Things don't have to be extreme and involve killing in order for morality to be obvious to practically everyone. You go to the quickie-mart to buy a snack. The clerk says it's 1 pound 50. You hand him a 10 pound note. Should the clerk (a) hand you 8 pounds 50 in change, or (b) say "Score! 4 pounds 25 windfall for me and 4 pounds 25 extra profit for the owners!"? Good luck finding a sane person who thinks "b" is the right thing to do. Living in a normal human society means living in a constant background of other people practically always doing the right thing without a second thought. They say fish don't have a word for water.

And in the second place, extrapolating is reasonable on the basis of pattern matching. Life is full of easy problems that anyone can figure out and harder problems that take some effort and skill to solve. Everybody agrees that you don't look quite the same as your parents but your parents nonetheless reproduced and thereby made you. But all we need to do to make that agreement go away is multiply that exact situation by a million. Now the world is packed with people who figure that, because an ape doesn't look the same as a human, they don't agree that some apes reproduced and their kids reproduced and after a million generations that made you. It's entirely normal to have people agree about the simple questions and disagree about the tricky questions -- and we don't normally infer that whether there's a right answer or not depends on whether people agree or not. Does anyone think whether your great-to-the-millionth grandfather was a human is a matter of opinion? Was that guy simultaneously an ape for evolutionists and a human for creationists? Of course not. When people agree about the easy stuff and disagree about the hard stuff, all it normally means is that a lot of people get hard stuff wrong, because it's hard. So why should morality be any different? If there's at least one "objective moral fact", i.e., if there's an easy case that's not a matter of opinion, then why would we expect the whole topic to suddenly stop being factual and turn into a matter of opinion as soon as the cases get too hard for everyone to figure out the same answer?

Here's an afterthought. It seems to be the case that one very common type of situational difference is 'ingroup or outgroup' (friend or foe, like me or not like me) and that our intuitive sense of right and wrong varies accordingly. So, even if there were, by some definitions, an 'objective moral fact' about doing X to someone, it would often need to be amended, in many cases, to cater for that sort of situational difference.
Sure, same as any other situational difference. If a Japanese pilot blows up our battleship on purpose and we capture him, we stick him in a POW camp and let him go when the war's over. If an American pilot blows up our battleship on purpose and we capture him, we lock him up for life or shoot him. When a moral rule turns out to have exceptions because of the situation it just means the rule was an oversimplification from the get-go.

At the very least, I do not think this adds any weight to claims for objective moral facts, and I would say it weakens them.
Why? Is there some meta-rule that says in order to be objectively true, a fact has to be explainable in ten words or less?

As, by the way, for me, does the specification about supposed 'normal, proper-functioning, rational' humans, which, apart from not even being species-wide, in some ways sounds a bit like 'as all right-thinking people would surely agree...'.
Do you also equate those in the case of the non-moral components of physiology? Does "a normal, proper-functioning heart" mean a heart that does whatever all right-thinking people would surely agree it should? The conventional arguments against morality being objective are paralleled by identical arguments against disease being objective. Do you think there's no fact of the matter as to whether an animal is sick?

As does what I think is using a weak definition of objectivity.
Well, as far as I can tell, morality isn't math, and morality isn't physics. Morality is biology. So if like so many others you have prejudged the matter to be either reducible to simple axioms that take no account of how monkeys think, or else subjective, then you're going to decide it's subjective. But that's a false dilemma. Reality doesn't have to match our philosophical preconceptions. Morality is allowed to be irreducibly complicated. The human moral sense evolved from the monkey moral sense and it carries the marks of its history. That isn't a weak definition of objectivity; that's just clarity as to what is and isn't being claimed to be a fact. Likewise, humans can develop schizophrenia, but maybe somebody will show that shrimp aren't subject to schizophrenia. If that happens, it won't call the objectivity of a schizophrenia diagnosis into question; it will just be yet more proof that biology is more complicated than physics.

(And in any event, subjectivist meta-ethical theories blatantly fail to satisfy logic or observation; the only alternative to moral realism that isn't intellectual crackpottery is straight-up error theory.)

As does defining 'fact' as 'what is very widely agreed'.
Nobody did that.

As does the bar being as low as 'at least one'.
...
As such, I think my bottom line opinion is that there are, in the end, no objective moral facts
Well, that's why the bar is "at least one". The terms for the opposing positions are attempts to describe the opposing positions.

(I don't mean 'facts about morality'; that humans have moral intuitions is, of course, a fact. I mean that something is actually either morally right or wrong). It's just that we have evolved to think and feel that there are.
Likewise, we have evolved to think and feel that there's yellow stuff. The reason we have evolved this way is because thinking and feeling that there's yellow stuff confers a survival advantage. The simplest explanation for why animals who think and feel that there's yellow stuff have a survival advantage over animals who don't, is that there's yellow stuff. Being right can keep you from getting killed.

Imo, the guiding principles are probably the same amoral ones that rule every living thing in the universe. We're not that special.
Correct, we aren't. So what makes you think every living thing in the universe is ruled by amoral guiding principles? Monkeys have morals. Read Frans de Waal.

It is sometimes said of moral relativism that it leads to absurd conclusions, such as that there are no morals or ways to make moral judgements, but I don't see that as absurd. Actually, it's not even the case, there are of course ways to make moral judgements, it's just that under relativism they're not made on the basis of actual moral facts, that's all.
They're made, typically, on the basis of unsound arguments. E.g. "Judgment X is justified relative to principle P. Well, is principle P true? No, morality is always relative to some principle; principles themselves aren't true or false." So the justification for the moral conclusion is that it follows from a premise that isn't true. Reasoning from a premise that isn't true is an unsound argument. Moral relativism (at least the naive version) is faith in the justificatory capability of unsound arguments.

I think I'm a moral relativist much more than I am a moral realist or objectivist. If this leaves me uncertain, I think that's currently the best, most warranted place to be, all things considered.
In your version of moral relativism, what is morality relative to?
 
In your version of moral relativism, what is morality relative to?

It's evening and I'm heading to London by air very early tomorrow morning, so I apologise if I don't respond to everything. Point me back to something in particular if you think it needs addressing.

In fact, what I'll do is just answer that last question, but before doing that I'll briefly touch on some of the previous points.

--------------------------------------------------------------------------------------------------

I'm good with morality being biological, and with other species displaying similar traits on the same basis, and as I said before I agree that disagreement and complexity don't of themselves mean there aren't facts. I'm not sure about the 'fish don't have a word for water' thing. I take the point that we tend to notice disagreement more than agreement (or mutual understanding about who, if anyone, is at fault), but I would not feel sure about which is more prevalent, especially not after having witnessed my parents' marriage for over 40 years and having been married myself for 27. Also, just as disagreement doesn't necessarily indicate the absence of objective moral facts, agreement surely doesn't necessarily indicate the presence of them. Point taken about dictionaries, obviously.

If I've left something out there, it's either because I agree, or because I need to think about it a bit more. I admit I'm struggling with the issues around 'objectivity', and you've made some very good, interesting and challenging points, but at this time I'm still a bit inclined to stick with 'mind-independent' for now. Saying something like "moral judgement about X qualifies as an objective moral fact because all normal, adult members of a species think it so" still feels like too big a hurdle, especially if the thing itself, X, is, in the end, non-moral by what I might call fully objective standards. And we can't say those sorts of things about schizophrenia. We surely can't say either "the existence of schizophrenia qualifies as an objective physical fact because all normal, adult members of a species think it so" or "schizophrenia is, in the end, non-physical, by fully objective standards".

-------------------------------------------------------------------------------------------------------

To get back to your question (above)....what is morality relative to, as I see it?

My initial reaction is to say, possibly a great multitude of things, each developing in different ways, situational and varying by proportion, and all of them interacting dynamically with each other.

I'll start with the one I mentioned, the bias towards 'like me' and against 'not like me', for, let's say, a human doing Y to another human (Y being non-benign). Sure, you can try to amend the 'facts' accordingly by taking this additional specification into account, but how can it be morally right to do Y on the basis of 'not like me' and morally wrong on the basis of 'like me'?

Counsel for the defence: 'Your honour, the victim is black-skinned and the defendant is white-skinned, I therefore call for the case against my client to be dismissed'.

Bear in mind that when human babies first display this sort of bias, it's only statistically most of them that do it. Others don't. Ditto for preferences for fairness generally. It seems it's merely varied innate tendencies or dispositions we're looking at, at least initially (before nurture sticks its oar into the development mix). And don't even get me started on the distributions of things like the big 5 personality traits (or big 7, or big 10, depending on your model, or big 1000 if I were to make one up) and how they interact with each other and vary with age, sex, culture, number of siblings, parenting experiences, other traumas, and so on and so forth.

Obviously, yes, you could draw up a more specific list of 'moral facts' to suit each particular configuration, but at some point it seems to me that gets silly. More accurate, I think, to say it's relative to a great many things. Possibly even better, imo, to say that it's relative to a great many ultimately non-moral things. The more I think about it, the more correct it feels to say that. I think I'm becoming even more of a moral relativist than I was. Other similar words spring to mind: contingent, conditional, dependent, provisional, etc. Isn't that what relative means?

Now I'm also wondering what definition of 'moral' we're using.


ETA: See also my later post #284.
 