
FORGIVENESS

There is a difference between moral rules (which apply to all humans) and local rules (which do not).

It's highly dubious to arbitrarily and selectively use the term 'moral' only for certain rules that happen to fit your claims, while withholding it from those that don't. This is especially so when one minute you are counting up and citing other people's attitudes, expressed in your favourite terms (everyday language), about what is and isn't a moral issue, and the next minute you are ignoring that and conveniently making up your own limited, personal definitions to suit instead.

ruby sparks said:
And things like "what consenting adults do behind closed doors is no one else's moral business but theirs."
Excellent example!
When people say things like that, even if they do not realize it, they are making a vanilla moral assessment, and they are implying that those who promote the belief that it is immoral to, say, have same-sex sex, are both mistaken and behaving immorally!!!

No. They could (and speaking for myself, I would) only be saying the exact same thing about those people, that their views in turn, about the sorts of things I listed, including the one above, are relative to their personal judgements or prejudices or their culture or upbringing or whatever. They or I may be disagreeing, sure, but what neither they nor I are necessarily saying is that there is an independent morally-real fact of the matter, which is the point at hand.

Your claim that almost all humans, except for a minuscule number, are moral realists is (not for the first time) questionable, because many people are moral relativists, and possibly pluralists, about many moral issues.

Now, if you want to say most people are moral realists about certain, selected things then fine, that's uncontroversial and was agreed a long way back in the thread, but please then stop using the word 'morality' as if it covered, you know, morality generally.

And so morality is perhaps not really so different from gustatory taste: just as there are some things almost all people would agree are immoral, so there are some meals almost all humans would agree are disgusting. Similarly, perhaps, with beauty, where severe disfigurement would be widely agreed not to be beautiful, possibly disgusting and certainly, in the end, 'a strong reason not to reproduce with this individual'. This is especially true in the rest of the animal kingdom, where unless you look the part (literally look 'fit') you won't get sex. It is also relevant that human 'disgust' and its neural correlates apparently apply across several domains.

And if it were merely the case (as it seems to be) that people more often had stronger views on morality, one way or the other, than on gustatory taste or beauty, that would of course say nothing at all about moral facts, let alone independent ones. There could be other reasons, to do with beliefs in free will for example (the human psychological need to have someone to blame), or simply because whether something is tasty or beautiful may not, as often, matter so much in terms of consequences (potential or actual). As with gustatory taste, certain (not all, as we have shown) moral issues could merely have more import vis-a-vis maximising the chances of surviving and thriving, and we would then have an evolutionary explanation for the manifestation of certain imperatives (including about what to eat and not eat, who to have sex with and not have sex with, etc). This is not in dispute.

But we went through all this stuff about moral 'facts' a long time ago, and I agreed there were some. Lately we've been talking specifically about independence. So getting back to that point, you have not yet shown independence from human attitudes in any of your scenarios. And the fundamental differences with something like illness and disease remain.

Although, for the umpteenth time: a candidate rule ("my/our continued existence = good"), which is at least arguably the basis for morality, was suggested by someone quite a while back and has been discussed (I even started a thread on it), and you have for some reason not picked up on it, even though it is independent of attitudes and as such should be right up your street.
 
Moral objectivism across the lifespan
https://s3.amazonaws.com/academia.e...9dd1f56e72850265d16c6afddd53e8fa024ed983e04ff


The above study (2,500 participants, with ages ranging from the very young to the very old, hence the title) showed that statements of non-moral fact were more strongly deemed (correctly or mistakenly) to have objective answers (one way or the other) than moral statements, and that moral statements were more strongly deemed to have objective answers than statements on matters of taste.

However, there did not seem to be a strong consensus on whether there were objective answers for most of the statements. For example, none of the moral statements went above 80% and most were below 50%.

Note that this is only a measure of what the participants intuitively or otherwise deemed; it reflects their assessments of the objectivity of various types of statement, and says nothing about whether there actually are or are not objective answers in each case.

There was also variation according to the specific statement. For example, the 'taste' statements, "Beethoven was a better musician than Britney Spears is" and "Barack Obama is a better public speaker than George W. Bush" were more strongly deemed to have an objective answer than the 'ethical' statements, "Anonymously donating a significant portion of one’s income to charity is morally good" and "Assisting in the death of a friend who has a disease for which there is no known cure and who is in terrible pain and wants to die is morally permissible."

There is also empirical evidence (in that study and in others referred to in it) which suggests that attributed degrees of objectivity/relativity vary according to other things such as age, gender and cultural distance from the issue.

This suggests that saying either that people generally tend to be (a) moral realists or objectivists, or (b) moral relativists or pluralists, is either inaccurate or too simplistic, and that it depends on the specific issue at hand, and other variables.
 
The fragmented folk: More evidence of stable individual differences in moral judgments and folk intuitions
https://pdfs.semanticscholar.org/ae...447.575377287.1582970272-219494047.1580946145

From that paper:

"Moral Scenario: John and Fred are members of different cultures, and they are in an argument. John says, “It’s okay to hit people just because you feel like it,” and Fred says, “No, it is not okay to hit people just because you feel like it.” John then says, “Look you are wrong. Everyone I know agrees that it’s okay to do that.” Fred responds, “Oh no, you are the one who is mistaken. Everyone I know agrees that it’s not okay to do that.”

and later...

"Participants [in the study, 115 US students] could respond that either one of the participants [John and Fred] in the debate was right, or they could respond that neither one was right because there is no fact of the matter. Those who responded that one of the two people in the debate was right were coded as objectivists, and those who responded that neither party to the debate was right were coded as non-objectivists.

Replicating Nichols, we found that a substantial number of people (N = 79, 69%) gave a non-objectivist answer to the moral scenario, while a minority (N = 36, 31%) gave the objectivist answer."


and later....

"Our primary concern was if stable individual differences accounted, at least in part, for these responses. They did. Those who scored high in openness to experience were much more likely to respond as non-objectivists...."

and

"Thus, differences in personalities tended to be associated with different moral intuitions."


I am very surprised that such a high percentage of responders gave non-objectivist answers to such a scenario, one which I would have expected most people to be moral objectivists about. The figure for non-objectivists is so high that I personally am inclined to be a bit sceptical about it.

Nevertheless, I have been reading a number of studies and there appears to be one common theme, namely, whatever the percentages either way (and they vary) people seem to be (a) more inclined towards moral realism for certain moral issues and (b) more inclined towards moral relativism for other moral issues.

And in both cases (a & b) individual personality traits, age, gender, culture, etc seem to be among the variables.
 
After objectivity: an empirical study of moral judgment
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.404.9797&rep=rep1&type=pdf

In a smaller prior study from 2004 (with only 40 participants, again US students) using the same John & Fred 'it's ok to hit' scenario, the results were 42% non-objectivist responses and 58% objectivist.

Again, for something as apparently uncomplicatedly 'bad' as 'hitting someone because you feel like it', I'd have expected an even lower non-objectivist percentage. I would not have thought to put 'hitting someone because you feel like it' on my previous list of 'scenarios with a moral aspect about which people do not necessarily agree there is an independent fact of the matter'.

It has been said that college students are a particular sort of subgroup, one in which the members might be above-averagely inclined towards moral non-objectivism because they are inclined towards non-objectivism in general, about a number of things (but not obvious conventional facts, such as the earth not being flat, all the participants in the above study agreed on that). Infants, by contrast, seem to be moral objectivists. If so, something would seem to have happened to the default settings at some point between infancy and college. Moral realist critics of such experiments on college students suggest that what has happened is confusion, that college students are merely confused. I am not sure about that. It seems equally possible that something has been developed and nuanced. Also, the first study (post 102 above) was both large and involved participants of all ages.
 
Revisiting Folk Moral Realism (2016)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5486533/

"According to the argument from moral experience, ordinary people experience morality as realist-seeming, and we have therefore prima facie reason to believe that realism is true. Some proponents of this argument have claimed that the hypothesis that ordinary people experience morality as realist-seeming is supported by psychological research on folk metaethics. While most recent research has been thought to contradict this claim, four prominent earlier studies indeed seem to suggest a tendency towards realism. In this paper I provided a detailed internal critique of these four studies. I argued that, once interpreted properly, all of them turn out in line with recent research. They suggest that most ordinary people experience morality as pluralist- rather than realist-seeming, i.e., that ordinary people have the intuition that realism is true with regard to some moral issues, but variants of anti-realism are true with regard to others."
 
ruby sparks said:
No. They could (and speaking for myself, I would) only be saying the exact same thing about those people, that their views in turn, about the sorts of things I listed, including the one above, are relative to their personal judgements or prejudices or their culture or upbringing or whatever. They or I may be disagreeing, sure, but what neither they nor I are necessarily saying is that there is an independent morally-real fact of the matter, which is the point at hand.
No, that is not what they are saying. Remember, they are saying that "what consenting adults do behind closed doors is no one else's moral business but theirs." What does the expression "is no one else's moral business but theirs" mean? It means it is morally wrong to condemn them for that behavior! What else could an expression like that mean?

How about you provide some real examples of that usage?

ruby sparks said:
Your claim that almost all humans, except for a minuscule number, are moral realists is (not for the first time) questionable, because many people are moral relativists, and possibly pluralists, about many moral issues.
No, I say a minuscule proportion, not number. "Many" in a population of billions could be less than 1/1000 of the population.

ruby sparks said:
Now, if you want to say most people are moral realists about certain, selected things then fine, that's uncontroversial and was agreed a long way back in the thread, but please then stop using the word 'morality' as if it covered, you know, morality generally.
No, because if I did that, I would be lying. I'm not going to stop saying what I think.



ruby sparks said:
And if it were merely the case (as it seems to be) that people more often had stronger views on morality one way or the other than on gustatory taste or beauty, that of course could say nothing at all about moral facts, let alone independent ones.
It is not just more often. Humans normally believe that there is a fact of the matter as to whether some behavior is immoral. This is an empirical question. You like science? Take a look at how people behave. Even those (a very small percentage) who claim there is no fact of the matter are the next moment making moral judgments, and of course they believe there is a fact of the matter (it's just that their ideology sometimes gets in the way).


ruby sparks said:
But we went through all this stuff about moral 'facts' a long time ago, and I agreed there were some. Lately we've been talking specifically about independence. So getting back to that point, you have not yet shown independence from human attitudes in any of your scenarios. And the fundamental differences with something like illness and disease remain.
No, I already showed your mistakes on the matter. I do not need to show independence. It is obvious. The burden would be entirely on you. But still, I did show it. And I also showed that the distinctions you want to make with disease fail.

ruby sparks said:
Although for the umpteenth time, a candidate rule ("my/our continued existence = good") that is at least arguably the basis for morality was suggested by someone quite a while back and has been discussed (I even started a thread on it) and you have for some reason not picked up on it, even though it's independent of attitudes and as such should be right up your street.
Because that is obviously not true, and not even relevant to the matter at hand. If Jack is a serial killer, his continued existence is bad. If he reckons that it is a morally good thing, he is mistaken.
 
ruby sparks said:
Note that this is only a measure of what is intuitively or otherwise deemed by the participants, only reflects their assessments of the objectivity of various types of statement, and does not say anything about whether there are or are not objective answers in each case.
If it were that, it would be bad enough, because it would be about the subjects' explicit theory, not about how they actually behave (i.e., there is a fact of the matter).
However, it is worse. The answers to the non-ethical questions should be an obvious clue: a very significant percentage of participants, very likely most, are not really answering the question the researchers believe they are. Whose fault the misunderstanding is, is a different matter. But let us take a look at the so-called 'factual' questions ('so-called' not because they are not factual, but because the very classification indicates an obvious bias against moral facts on the part of the researchers).

Let us take a look at the statements:


3. Julius Caesar did not drink wine on his 21st birthday.
Seriously, the study says that about 40% of participants said it was possible for both to be correct. Do you think the study is reliable, or did something go wrong and they just did not understand the question? It seems pretty obvious to me that they did not understand, or else something is very, very wrong with their heads (not so likely).

Now consider this one:


1. Frequent exercise usually helps people to lose weight.
Only about 25% say that if someone says this is so, and someone else says it is not so, then at least one of them is mistaken. So, what is happening? Either they also misunderstood massively, or else they reckon that in different contexts a person asserting this might be talking about different things, about people in different situations. If it's another massive misunderstanding, well, there is that; whereas if they believed people may be talking about different circumstances, okay, but just the same can happen in the moral case. For example, compare that with:



9. Anonymously donating a significant portion of one's income to charity is morally good.
Well, is it? That depends on the circumstances. If you have a family, it is probably morally wrong to impoverish them by giving to strangers instead. Perhaps the participants are also assessing whether the people asserting 9 might be talking about different circumstances (see above).

Here's a quote from the study
http://www.acsu.buffalo.edu/~jbeebe2/Beebe Sackris MOAL.pdf (this link actually works; I post it because the one you posted doesn't work for me, and I'm not sure whether it works for others):


study said:
More participants attributed objectivity to some ethical claims than to some factual claims. For example, participants were on the whole more confident that someone had to be wrong in a disagreement about racial discrimination, robbing a bank, or hitting someone than in a disagreement about global warming (.39) or human evolution (.59).
In short, according to this, the percentage of people in the study who believe there is a fact of the matter as to whether it is immoral to treat someone poorly on the basis of their race is much higher than the percentage who believe there is a fact of the matter as to whether humans evolved from "more primitive primate species", or whether global warming is primarily due to human activity. What's going on here? Why these judgments? The problem is not only the percentage of people who apparently agree there is a fact of the matter in the moral case (that's problematic for being too low). The problem is the percentage of people who reckon there isn't one in the other cases. That percentage is far too high; it just makes no sense...unless, of course, participants are merely saying that words like "more primitive" or "primarily" may be used to talk about different things in different contexts, so people who apparently disagree about those matters might just be talking past each other. But if so, then the same might be happening in the moral case, and not because each moral term has more than one ordinary meaning (they do not), but because the other, nonmoral terms in those sentences sometimes do.

In short, this study is not measuring what it is supposed to.


ruby sparks said:
This suggests that saying either that people generally tend to be (a) moral realists or objectivists, or (b) moral relativists or pluralists, is either inaccurate or too simplistic, and that it depends on the specific issue at hand, and other variables.
Would you say the same about whether Julius Caesar drank wine on his 21st birthday, or any of the other statements called 'factual' in the study?
It seems much more likely that the study just is not doing what it is supposed to be doing.
 
https://pdfs.semanticscholar.org/ae...447.575377287.1582970272-219494047.1580946145

study said:
We gave 115 volunteers in lower level philosophy classes at Florida State University a brief Big Five personality measure (Gosling, Rentfrow, & Swann, 2003) along with the following scenarios from Nichols (2004):
Volunteers in lower-level philosophy classes are not at all representative of the general population. They may well have read enough to fall into some flawed ideology and reject moral realism; the vast majority of people never read anything about metaethics. Sure, 69% of their participants said there was no fact of the matter as to whether it was okay to hit people just because you feel like it. Some limitations in the design of the experiment weaken the result, but it does provide some significant evidence that volunteers in lower-level philosophy classes at Florida State University (and very probably in most philosophy classes in most Western universities) profess a belief in some form of anti-realism (they were limited in their options, so the form of anti-realism cannot be inferred, other than that it is not an error theory in the traditional sense). But that does not match how people (even those students) usually behave in their actual lives when making moral judgments, as opposed to when participating in experiments that clearly involve their RIP. It's their religion-like belief.


ruby sparks said:
I am very surprised that such a high percentage of responders gave non-objectivist answers to such a scenario, one which I would have expected most people to be moral objectivists about. The figure for non-objectivists is so high that I personally am inclined to be a bit sceptical about it.
Well, there is the design limitation that "[p]articipants could respond that either one of the participants in the debate was right, or they could respond that neither one was right because there is no fact of the matter", and they were not allowed more sophisticated forms of anti-realism. However, I don't find the results surprising. Universities in the West seem to be increasingly under the grip of a leftist ideology (or rather, a contradictory set of them), and a fuzzy sort of antirealism is part of some of them, only to be contradicted by other parts of the ideologies, but never mind.
 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.404.9797&rep=rep1&type=pdf
ruby sparks said:
In a smaller prior study from 2004 (with only 40 participants, again US students) using the same John & Fred 'it's ok to hit' scenario, the results were 42% non-objectivist responses and 58% objectivist.

Again, for something as apparently uncomplicatedly 'bad' as 'hitting someone because you feel like it' I'd have expected an even lower non-objectivist percentage. I would not have thought of put 'hitting someone because you feel like it' on my previous list of 'scenarios with a moral aspect about which people do not necessarily agree there is an independent fact of the matter'.
These are not just Western university students, but philosophy students. The result would be unsurprising, if that were indeed the result. That said, there is a design flaw, so who knows the real percentage. The study gives the following option:

study said:
There is no fact of the matter about unqualified claims like "It's okay to hit people just because you feel like it". Different cultures believe different things, and it is not absolutely true or false that it's okay to hit people just because you feel like it.
This has two problems (at least):

1. It plays the 'culture' card. Students are very likely under a religion/ideology that says 'no culture is superior to another', so they are very likely to back away whenever two different cultures are mentioned. It's not how they behave in real life.
2. It leaves open that they think there is a fact of the matter as to whether it is immoral for people of culture X to hit people just because the hitter feels like it. That would not be a 'no fact of the matter' scenario, but there is no option for them to express such beliefs.



study said:
The import of the foregoing evidence for our purposes is simply that a nontrivial population of undergraduates endorse a nonobjectivist claim about a moral violation without endorsing a full blown metaphysical nonobjectivism
Well, sure, but that's the prevalent religion/ideology, so that is unsurprising. What one would need to study is the behavior of those students 'in the wild', e.g., when they engage in actual moral debates, in order to get more information.


That said, here is what the researchers also say:

study said:
Moral objectivity, then, is plausibly a default setting on commonsense metaethics. As a result, the rejection of moral objectivity exacts a revision of commonsense metaethics.
Then they go on to say there is a "thriving" tradition in philosophy that argues against moral objectivism (and the author of this study is among them), so despite the bias, he recognizes it's "plausibly" a default setting (I think "plausibly" is far too weak, but I'd like to point out that even this philosopher is saying that).



ruby sparks said:
It has been said that college students are a particular sort of subgroup, one in which the members might be above-averagely inclined towards moral non-objectivism because they are inclined towards non-objectivism in general, about a number of things (but not obvious conventional facts, such as the earth not being flat, all the participants in the above study agreed on that). Infants, by contrast, seem to be moral objectivists. If so, something would seem to have happened to the default settings at some point between infancy and college. Moral realist critics of such experiments on college students suggest that what has happened is confusion, that college students are merely confused. I am not sure about that. It seems equally possible that something has been developed and nuanced. Also, the first study (post 102 above) was both large and involved participants of all ages.
Recently, I was talking to two very intelligent math students close to their PhDs. Not only are they moral anti-realists, but they believe that, other than in mathematics, we have no knowledge of anything. Of course, their behavior indicates they also believe otherwise - they have obviously contradictory beliefs - about that and many other things. I debated one of them over lunch (he wanted to debate :)), pointing out their contradictions, and so on (while another student literally laughed out loud when I asked them questions they couldn't answer without saying what even to them was obvious nonsense). They said we didn't know whether there was a table between us, whether we were eating, or whether there were cars parked outside, things like that (paraphrasing; I don't remember all the details, but they denied knowledge of the most obvious things). So I asked, 'Do you know what you did a minute ago?', which of course they denied: memories could be fake. They say we only have mathematical knowledge. But how, I asked? You follow a difficult proof, and you know it's correct, right? But how do you know you actually followed the proof, rather than having fake memories? The reply, of course, does not address this; it's an evasion, that 'math is certain', or whatever. They didn't like it, and I don't want to talk to them about it again. It's pointless, and it creates tension for nothing. Their beliefs remain unchanged. They are beyond persuasion. And they're very, very intelligent. Such is life. :( Ideologies/religions are like that.



ETA: Actually, another study you link to (or rather, a study of the studies), sees the second problem I identify above, and goes further to claim:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5486533/
another study said:
In sum, then, rather than the proportion of realist versus anti-realist responses, Nichols’ study seems to have mainly measured the proportion of realist versus cultural relativist responses.
 
No, that is not what they are saying. Remember, they are saying that "what consenting adults do behind closed doors is no one else's moral business but theirs." What does the expression "is no one else's moral business but theirs" mean? It means it is morally wrong to condemn them for that behavior! What else could an expression like that mean?

As I said before, sure, it's a judgement, a deeming, an opinion in the end, which could be held with varying degrees of conviction. So what, though? A moral realist is saying there's an independent moral fact of the matter and a moral relativist isn't. After all, saying 'there is no independent moral fact of the matter' is also a judgement, and a common one, and one that can be held with strong conviction.

In any case, strength of conviction is no guide to truth, not least because convictions could be heavily influenced by emotions, or any number of other factors (many of which have been listed), or even merely accepted as part of the opinion-holder's moral framework. So even if people did generally talk of moral issues 'as if' there were a right or wrong answer, that would say nothing about whether there was or wasn't. That basis for claiming moral realism is therefore ropey from the get-go.

Not only that, but it appears from empirical research not even to be the case. There is apparently more moral relativism in 'ordinary, commonsense, folk morality' than morally realist philosophers asserted there was.

What appears to be the case is that people are more inclined towards moral realism about some moral issues and more inclined towards moral relativism about others, and there are a range of factors and variables which correlate to both. Do you not agree with that statement?

The many people who say, about some moral issues, 'there is no independent moral fact of the matter either way' are not just going to go away, or their views become invalid, merely because they are inconvenient to your claims.

If you do actually agree that the human brain is the proper tool to assess these issues and that people's everyday language is the proper standard then using your own criteria you should not be discounting the judgements of such people.

And saying that certain biases or ideologies are the problem misses the point, because everyone has biases. They are partly what make up personalities, and differences in personality appear to be one factor that influences moral judgements, including the judgement as to how relative or objective a certain claim is, or whether there even is an independent fact of the matter about something.

I do not need to show independence. It is obvious.

You failed to show it.



I don't have time right now to reply to all the other points. I may later if I have time. In all honesty, apart from the issue of independence, I am not even sure what we are disagreeing about.
 
ETA: Actually, another study you link to (or rather, a study of the studies), sees the second problem I identify above, and goes further to claim:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5486533/
another study said:
In sum, then, rather than the proportion of realist versus anti-realist responses, Nichols’ study seems to have mainly measured the proportion of realist versus cultural relativist responses.

The same 'study of the studies' you cited, "Revisiting Folk Moral Realism", concludes:

Revisiting Folk Moral Realism said:
I argued that, once interpreted properly, all of them [the four studies critiqued by Polzler] turn out in line with recent research. They suggest that most ordinary people experience morality as pluralist- rather than realist-seeming, i.e., that ordinary people have the intuition that realism is true with regard to some moral issues, but variants of anti-realism are true with regard to others.
 
ruby sparks said:
Although for the umpteenth time, a candidate rule ("my/our continued existence = good") that is at least arguably the basis for morality was suggested by someone quite a while back and has been discussed (I even started a thread on it) and you have for some reason not picked up on it, even though it's independent of attitudes and as such should be right up your street.
Because that is obviously not true, and not even relevant to the matter at hand. If Jack is a serial killer, his continued existence is bad. If he reckons that it is a morally good thing, he is mistaken.
Sure, he may be mistaken, just as a person who thought it was ok to kill for fun would be mistaken. In the latter case, if I cited such a person as an exception to the rule that killing for fun is bad, you would disallow it but say the rule was intact, but here you are not only now citing such a person but also suggesting the rule is not intact because of that. If you're going to disallow or allow exceptions, do it consistently.

But in a way, it's irrelevant. The serial killer in this case is still applying the rule. Unlike in any of your scenarios, the rule is truly, fully attitude-independent, that is the point. That the rule is also at least the basis for morality (which is what I am claiming) is also not undermined by the serial killer's use of it. It could even be said to further demonstrate the extent of the rule's independence.

Would you say the same about whether Julius Caesar drank wine on his 21st birthday, or any of the other statements called 'factual' in the study?
It seems much more likely that the study just is not doing what it is supposed to be doing.

Yes, I would agree there's something odd going on there. The researchers suggest that 'unknowability' is dampening assessments of objectivity, when it shouldn't. It is therefore possible that the experiment as a whole may not be measuring what it claims to be measuring.

However, I have read quite a number of studies, including the few I posted. Despite the possibility of flaws of different sorts in each of them, as there equally may be in studies which obtain different or opposite results, and although the issue is clearly complicated and open to contention, I am overall still happy with the claims (a) that there is more than a minuscule proportion of actual moral non-objectivists out there and (b) that people are more inclined towards moral objectivism for certain things and more inclined towards moral non-objectivism for others.

Quite apart from anything else, I feel it describes my own views, and it would be my guess, based anecdotally on my own experiences, that I am far from unusual.

Earlier you suggested that there is a difference between what you called 'moral rules' and 'local rules'. That confused me, but thinking about it later I thought you probably meant 'universal (species-wide) moral rules' and 'local, perhaps culturally-relative moral rules', not that one sort of rule was to do with morality and the other wasn't.

If so, that might, in principle, be a useful way to distinguish between rules that people are more objectivist about and rules that they are more non-objectivist about, and you and I may have some common ground about that. Although I expect the distinctions between the two types of rule may be somewhat fuzzy and the reasons complicated.
 
The same 'study of the studies' you cited, "Revisiting Folk Moral Realism", concludes:

Revisiting Folk Moral Realism said:
I argued that, once interpreted properly, all of them [the four studies critiqued by Polzler] turn out in line with recent research. They suggest that most ordinary people experience morality as pluralist- rather than realist-seeming, i.e., that ordinary people have the intuition that realism is true with regard to some moral issues, but variants of anti-realism are true with regard to others.

That is correct, but I'm not sure what your point is. The 'study of studies' (actually, a philosophical study) that ruby sparks brought up makes a number of claims, some with which I agree, others (most of the important ones) not.
 
ruby sparks said:
As I said before, sure, it's a judgement, a deeming, an opinion in the end, which could be held with varying degrees of conviction. So what though? A moral realist is saying there's an independent moral fact of the matter and a moral relativist isn't. After all, saying 'there is no independent moral fact of the matter' is also a judgement, and a common one, and one that can be held with strong conviction.
You seem to have lost track of this part of the exchange.

In this post you claimed:

ruby sparks said:
Because once we move away from what are generally considered (deemed) and widely agreed (by humans) to be very clear moral wrongs, there are a lot of deemed-to-be-lesser behaviours that are considered to have a moral aspect, but it is often agreed that there is no independent moral fact of the matter about them either way.
One of the alleged examples you provided (i.e., a case in which allegedly it is "agreed that there is no independent moral fact of the matter about them either way") is the following:

ruby sparks said:
And things like "what consenting adults do behind closed doors is no one else's moral business but theirs."
My point is that you got that wrong. That is not an example in which people agree that there is no independent moral fact of the matter (whatever 'independent' means in your terminology); rather, it is a case in which people are making an ordinary moral judgment. They are saying that "what consenting adults do behind closed doors is no one else's moral business but theirs". What does the expression "is no one else's moral business but theirs" mean? It means it is morally wrong to condemn them for that behavior!


ruby sparks said:
Not only that, but it appears from empirical research not to even be the case. There is apparently more moral relativism in 'ordinary, commonsense, folk morality' than morally realist philosophers asserted there was.
I think the studies are poorly designed for the reasons I've been explaining, but even then they show far more culture-relativism (which would not result in 'no fact of the matter') than speaker-relativism.

ruby sparks said:
What appears to be the case is that people are more inclined towards moral realism about some moral issues and more inclined towards moral relativism about others, and there are a range of factors and variables which correlate to both. Do you not agree with that statement?
No, I think the studies are poorly designed for the reasons I've been explaining. It is true that among anti-realists, slipping into realism happens more often in some matters than others, depending on other factors of their ideology.

ruby sparks said:
If you do actually agree that the human brain is the proper tool to assess these issues and that people's everyday language is the proper standard then using your own criteria you should not be discounting the judgements of such people.
As in any case of moral disagreement, I look at behavior, other things they say alongside it, etc., to figure out what's going on. In the studies, this is very difficult to do, because they ask questions and there is no follow-up, no conversation to look at, etc. However, from the replies to other things as well, it should be apparent that some of the studies are poorly designed, because the people in question - a very large percentage of them - do not understand the questions to mean what the researchers intend.
In other cases, they are just asking philosophy students about metaethics. It's like asking Christians about ethics that involve Biblical claims. You're bound to get skewed responses, but that's because they're using the wrong tool: religion/ideology.
 
ruby sparks said:
Sure, he may be mistaken, just as a person who thought it was ok to kill for fun would be mistaken. In the latter case, if I cited such a person as an exception to the rule that killing for fun is bad, you would disallow it but say the rule was intact, but here you are not only now citing such a person but also suggesting the rule is not intact because of that. If you're going to disallow or allow exceptions, do it consistently.
Recap?
I know that I have been consistent, but I cannot defend a specific part of the exchange from the charge of inconsistency unless you let me know what part you are talking about.

ruby sparks said:
But in a way, it's irrelevant. The serial killer in this case is still applying the rule. Unlike in any of your scenarios, the rule is truly, fully attitude-independent, that is the point. That the rule is also at least the basis for morality (which is what I am claiming) is also not undermined by the serial killer's use of it. It could even be said to further demonstrate the extent of the rule's independence.
What rule? That people reckon that their continued existence is good?
No, that is not a rule on which morality is based. Psychologically, morality is (or is the result of) rules of social behavior that evolved among primates. People do not generally see themselves as having a moral obligation to continue to exist, except perhaps insofar as it relates to others (e.g., they reckon it would be wrong to commit suicide and abandon their children, in many circumstances). People who decide to commit suicide are not applying your rule, either. But even the rest of us do not generally intuitively perceive self-preservation as a moral obligation, again except in relation to potential effects on others.



ruby sparks said:
Earlier you suggested that there is a difference between what you called 'moral rules' and 'local rules'. That confused me, but thinking about it later I thought you probably meant 'universal (species-wide) moral rules' and 'local, perhaps culturally-relative moral rules', not that one sort of rule was to do with morality and the other wasn't.
I didn't mean that. In some places, there is a rule 'take your shoes off if you enter a home as a guest', whereas in others, the rule is exactly the opposite - unless requested by the home owners. Usually (but not always, and the matter is properly assessed on a case-by-case basis), it is morally wrong to break local rules. For example, if I go to a coworker's home as a guest and take my shoes off without prompt, I definitely would be breaking a rule. And if I do it just because I feel like it, my behavior is immoral. But in other societies, that is different. The moral rule would be something like 'do not break local rules, in normal circumstances', and then a list of exceptions.
 
That is not an example in which people agree that there is no independent moral fact of the matter (whatever 'independent' means in your terminology); rather, it is a case in which people are making an ordinary moral judgment. They are saying that "what consenting adults do behind closed doors is no one else's moral business but theirs". What does the expression "is no one else's moral business but theirs" mean? It means it is morally wrong to condemn them for that behavior!

We have covered this already. Yes, it's a disagreement and a moral judgement, but moral relativists can and do make ordinary moral judgements. It's just that, in the end, in some particular cases, the relativist deems them to be nothing more than strongly held opinions that may be held by individuals or cultures for a number of reasons; in other words, that there are different moral frameworks for certain things.

Now, that people have strongly held opinions on certain things, including, say, morality in general, and even where nearly all people have the same opinions about something, may say nothing about whether there are independent facts about it. There are, it seems, many actual moral relativists about some things.

But let's suppose, hypothetically, that humans generally are moral realists about at least some things, that there are at least some moral 'facts'. I don't have a problem with this. I agree with it. There are certain things which humans regard as having a factual, right or wrong answer in moral terms. The problem is, what does this tell us? It tells us what human beliefs are like. It does not necessarily extend to showing they are correct that there are objective, independent moral facts. Moral realism, if it relies on commonsense human intuitions, has built its house on sand, because human intuitions have often been shown to be wrong. All normal human brains are prone and predisposed to false beliefs about the world. Science in particular shows this over and over. It's the Achilles heel for the idea that human intuitions, commonsense and everyday language are the proper or best basis for realism about anything at all. The beliefs, including the ones deemed to be to do with what we call morality, may be pragmatically useful for successfully navigating the world, but that may be all they are.

More to the point, they are human intuitions. They are not independent of humans.


As in any case of moral disagreement, I look at behavior, other things they say alongside it, etc., to figure out what's going on. In the studies, this is very difficult to do, because they ask questions and there is no follow-up, no conversation to look at, etc. However, from the replies to other things as well, it should be apparent that some of the studies are poorly designed, because the people in question - a very large percentage of them - do not understand the questions to mean what the researchers intend.
In other cases, they are just asking philosophy students about metaethics. It's like asking Christians about ethics that involve Biblical claims. You're bound to get skewed responses, but that's because they're using the wrong tool: religion/ideology.

There are some flaws in the studies yes, but overall there are too many studies showing certain general results for it to be warranted to dismiss them out of hand.

Regarding asking students, not all the studies asked students.

I don't understand the point about religion/ideology. It does not seem to apply here, really. We don't know what the ideologies of the participants were. More to the point, everyone has biases. Most people have an ideology of some sort also. You have one. Moral realism. You may, like almost everyone, be at least somewhat dogmatic about it. We all are, about our beliefs and ideologies.

More to the point, why are you even disagreeing, when it seems we agree about the main, general takeaway result of the studies, that people are inclined to be moral objectivists about certain things and moral non-objectivists about others?
 
What rule? That people reckon that their continued existence is good?
No, that is not a rule on which morality is based.

It is odd that you can't see what is obvious.

Suppose someone or something was going to kill or harm you. You would instinctively and innately believe that was wrong and bad. So much so that you would, even before the danger had been consciously recognised, before you even formed a belief or a value judgement about it, take evasive action. That is why it's the basis.

And all living things (with a small number of exceptions) are functionally, behaviourally and effectively operating the same rule all the time, even if they are non-social species, and even if not all are capable of having experiences or value judgements associated with it.

In humans, a particular social species that has evolved the sensory capacity to (a) have experiences in the first place and (b) experience value judgements in particular, the rule becomes sensed/experienced, and later expressed, in terms of what humans call morality, and it is deemed immoral, by you and possibly others, for someone to try to kill or harm you.

It may not be the only basis or the only rule, of course. Here is a list offered by one moral philosopher:

(1) The fact that something would promote one’s survival is a reason in favor of it.
(2) The fact that something would promote the interests of a family member is a reason to do it.
(3) We have greater obligations to help our own children than we do to help complete strangers.
(4) The fact that someone has treated one well is a reason to treat that person well in return.
(5) The fact that someone is altruistic is a reason to admire, praise, and reward him or her.
(6) The fact that someone has done one deliberate harm is a reason to shun that person or seek his or her punishment.


https://pdfs.semanticscholar.org/b740/5d3515695de20fb8f817434a739c9ba447fd.pdf

Treedbear's suggested example is at the top of the list.

They are all explainable by recourse to blind, non-teleological or 'truth-tracking' evolution and do not require the existence of independent, realist moral facts.

ruby sparks said:
Earlier you suggested that there is a difference between what you called 'moral rules' and 'local rules'. That confused me, but thinking about it later I thought you probably meant 'universal (species-wide) moral rules' and 'local, perhaps culturally-relative moral rules', not that one sort of rule was to do with morality and the other wasn't.
I didn't mean that. In some places, there is a rule 'take your shoes off if you enter a home as a guest', whereas in others, the rule is exactly the opposite - unless requested by the home owners. Usually (but not always, and the matter is properly assessed on a case-by-case basis), it is morally wrong to break local rules. For example, if I go to a coworker's home as a guest and take my shoes off without prompt, I definitely would be breaking a rule. And if I do it just because I feel like it, my behavior is immoral. But in other societies, that is different. The moral rule would be something like 'do not break local rules, in normal circumstances', and then a list of exceptions.

I'm not sure why you chose shoe-wearing customs instead of one of the things on my list, such as polygamy for instance, but no matter; it seems we agree that at least some moral rules are, or are deemed, relative and non-objective, which is what I had said and what the evidence generally suggests.

Perhaps now we can return to the sticking point. Independence from human attitudes. None of your scenarios showed this. However, one independent (of human attitudes) rule was suggested, and now a few more have been added. It seems to me that you should be welcoming this and exploring it, not trying to counter it.

I can even suggest another. Pain = bad. Although this could arguably be subsumed into (1) as a negative, "the absence of pain is something that would promote one’s survival and so is a reason in favor of it". Pain is often seen as a good phenomenon to study because it is seen as a 'baseline' brain sensation that is universal to all normal humans (and probably other species in fact). As such, it may in principle usefully be compared to and contrasted with other brain sensations, such as those involved in our sense of morality.

In this discussion therefore, it may be interesting to ask if there is an independent realist fact about pain. Does it exist independently of organisms capable of experiencing it? Intuitively I would say no, that unlike, say, the physical or mathematical rules apparently governing the universe, there is/was no independent 'pain' in the universe, waiting to be 'discovered' by evolution or by the species which have evolved to experience it.

The existence of pain and that it is 'bad/undesirable', and therefore forms at least the basis for moral judgements about it, is explainable by recourse to blind, non-teleological or 'truth-tracking' evolution and does not require the existence of independent, realist, moral facts.
 
ruby sparks said:
But let's suppose, hypothetically, that humans generally are moral realists about at least some things, that there are at least some moral 'facts'. I don't have a problem with this. I agree with it. There are certain things which humans regard as having a factual, right or wrong answer in moral terms. The problem is, what does this tell us? It tells us what human beliefs are like. It does not necessarily extend to showing they are correct that there are objective, independent moral facts. Moral realism, if it relies on commonsense human intuitions, has built its house on sand, because human intuitions have often been shown to be wrong.
It's more than at least some things. It's what people generally believe. One should take a look at behaviors, not at studies that show people being non-realists about whether a person drinks wine or things like that.

That aside, you're repeating the same points as before, so we are going in circles. While our faculties are fallible, they get it right in the vast majority of cases, and it would be irrational to reject them without specific counter evidence. Take illness realism. People are realists about illness as well. Someone might mirror your argument and say:



But let's suppose, hypothetically, that humans generally are illness realists about at least some things, that there are at least some illness 'facts'. I don't have a problem with this. I agree with it. There are certain things which humans regard as having a factual, right or wrong answer in terms of illness/health. The problem is, what does this tell us? It tells us what human beliefs are like. It does not necessarily extend to showing they are correct that there are objective, independent illness facts. Illness realism, if it relies on commonsense human intuitions, has built its house on sand, because human intuitions have often been shown to be wrong.​

Now you can tell me that illness is different from morality, etc., but you still fail to see that the analogy is apt, because the same argument that you are using against morality, if it worked, would work against illness/health as well.

Here's another problem: You say:
ruby sparks said:
Moral realism, if it relies on commonsense human intuitions, has built its house on sand, because human intuitions have often been shown to be wrong.
If that were true, not only would illness realism fail, but everything would. Science and everything else relies on commonsense human intuitions, like the intuition that we can generally (i.e., in nearly all cases) trust our senses, our memories, and our faculties generally, and of course the epistemic intuition that allows us to make probabilistic assessments, and so on. There is no way around it. When people reject some commonsense intuition on the basis of new evidence, they are actually rejecting it on the basis of new evidence and a stronger commonsense intuition (of course, in many cases, they just reject commonsense moral intuitions because they are confused by RIP).



ruby sparks said:
More to the point, they are human intuitions. They are not independent of humans.
But that is a mistake again. Our illness/health intuitions are not independent of humans, either. Unless you say that other animals have them too. Sure, and then so for moral intuitions, just fewer animals. And human color vision is not independent of humans, either. But that does not mean that redness, illness, or moral wrongness are not independent in the relevant sense.

Let me try again. There are different definitions of moral realism. I would go with a simple one - which would be accepted by some but not all philosophers:


1. There is a fact of the matter as to whether a moral assessment (e.g., whether Ted Bundy was a bad person) is true (this should be understood with some tolerance for vagueness, e.g., there is a fact of the matter as to whether an animal is a lion - but there might be some vagueness, as mentioned earlier).

2. There are moral properties, e.g., some humans sometimes behave immorally, some humans sometimes behave in a morally praiseworthy manner.
Now, how would the fact that we ascertain morality by means of a human intuition make a dent in realism so defined? Why would the fact that we use a human intuition - just as in the illness case, or the redness case, etc. - have any relevance at all?
 
ruby sparks said:
It is odd that you can't see what is obvious.

Suppose someone or something was going to kill or harm you. You would instinctively and innately believe that was wrong and bad. So much so that you would, even before the danger had been consciously recognised, before you even formed a belief or a value judgement about it, take evasive action. That is why it's the basis.
No, if a person were going to kill me and they had no justification, I would believe that that is morally wrong. If a dog were going to kill me, I would not believe the dog is doing anything morally wrong. I would just fight back. Now, if I am a boxer, and the other boxer is going to hurt me in accordance with the rules, I would not think his actions are immoral. I would think it's bad for me, for sure. I would take evasive action when I can, but that's not even related to morality. A shark makes no moral judgments whatsoever (that is a monkey thing, not a fishy thing), but takes evasive action when attacked.

Don't you see the difference? Some animals (prominently humans) make moral judgments. That's a specific sort of judgment: the sort that is linked to feelings of guilt, or to punitive sentiments towards immoral behavior, things like that. The vast majority of animals act out of self-preservation, but they make no moral judgments. And a superintelligent AI or an alien from another planet might not make them, either (or they might, depending on the sort of mind they have).


ruby sparks said:
And all living things (with a small number of exceptions) are functionally, behaviourally and effectively operating the same rule all the time, even if they are non-social species, and even if not all are capable of having experiences or value judgements associated with it.
But that's not a moral rule!


ruby sparks said:
In humans, a particular social species that has evolved the the sense capacity to (a) have experiences in the first place and (b) experience value judgements in particular, the rule becomes sensed/experienced, and later expressed, in terms of what humans call morality, and it is deemed immoral, by you and possibly others, for someone to try to kill or harm you.
No, that's not it. Human morality, and monkey morality generally, is a far more complex set of rules and valuations, not only or mainly self-regarding but mostly other-regarding. It evolved in some (a minority of) social species in particular environments.


ruby sparks said:
They are all explainable by recourse to blind, non-teleological or 'truth-tracking' evolution and do not require the existence of independent, realist moral facts.
That's not it. There are things that are being tracked. If that were not the case, moral judgments would just go in any direction. A system of rules would not be a stable strategy if individuals could not track the rules.

ruby sparks said:
I'm not sure why you chose to do shoe-wearing customs instead of one of the things on my list, such as polygamy for instance, but no matter, it seems we agree that at least some moral rules are or are deemed relative and non-objective, which is what I had said and what the evidence generally suggests.
No, I do not agree with that at all. But thank you, because you give an example of why the studies can go wrong: people simply do not understand what is being asked. (I did not choose polygamy because the matter is more debatable there; I wanted a more clear-cut case of a local rule that it is immoral not to follow in ordinary cases.) I do believe there is an objective fact of the matter as to whether such-and-such specific person in such-and-such specific situation has a moral obligation to take her shoes off. What happens is that the moral rules are very complex, so they contain a lot of details and it's difficult to find general ones. We intuitively apply them, but that's different from being able to reverse engineer them and say what they are (just as we can tell the colors of things, but it's very difficult to figure out what reflective properties, etc., we are actually tracking).


ruby sparks said:
Perhaps now we can return to the sticking point. Independence from human attitudes. None of your scenarios showed this. However, one independent (of human attitudes) rule was suggested, and now a few more have been added. It seems to me that you should be welcoming this and exploring it, not trying to counter it.
No, I should not, because it is false, and I know it is false; it does not support my position beyond the idea that moral properties are independent of human attitudes. For that matter, someone may posit Divine Command Theory, and that too makes moral properties independent of human attitudes. I would not endorse it one bit. It's false, and I know it is false.

ruby sparks said:
I can even suggest another. Pain = bad.
No, that is not it. If you are saying that in the moral sense, then actually, that depends on whose pain it is. The pain of the person being punished as they deserve is not a bad thing. It's good.


ruby sparks said:
Although this could arguably be subsumed into (1) as a negative, "the absence of pain is something that would promote one’s survival and so is a reason in favor of it".
That's not a moral reason - and no, the absence of pain may or may not be good for survival. Indeed, when something is not functioning properly, pain lets us know. In general, pain has an important survival function, and it is dangerous not to be able to feel it.


ruby sparks said:
In this discussion therefore, it may be interesting to ask if there is an independent realist fact about pain. Does it exist independently of organisms capable of experiencing it?
Intuitively I would say no, that unlike, say, the physical or mathematical rules apparently governing the universe, there is/was no independent 'pain' in the universe, waiting to be 'discovered' by evolution or by the species which have evolved to experience it.
No, of course not. But then again, there is a fact of the matter as to whether an individual organism is feeling pain. If two people disagree, one is mistaken, and so on. That's what matters, in terms of independence.

Similarly, there is no Tourette's Syndrome independently of organisms capable of experiencing it. And there is/was no independent 'Tourette's Syndrome' in the universe, waiting to be 'discovered' by evolution or by the species which have evolved to experience it. But that is not remotely the relevant sense of independence. And now if you reply as before, you will mock me, tell me that Tourette's is different, and fail to realize that my analogy cuts to the heart of the argument.
 