• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

The Great Contradiction

For example, suppose Alice reckons tomatoes are tasty, but Bob reckons they taste bad. That may well happen without any malfunction in the systems by which Alice and Bob assess gustatory taste.

When you agree that different judgements about both gustatory taste and sexual attraction can exist without what you are calling a malfunction, can this not also be applied to different judgements about morality?

And if so, and you are allowing for differences without a malfunction, where is the universal?

On the other hand, suppose that in an experiment they are given something harmless, but artificially flavored to taste (to humans) like rotten meat, rotten eggs and horse droppings, combined. :) In this case, either there is agreement that it tastes horribly bad, or else either Alice or Bob (or both) has a malfunctioning system (I'm talking about sincere assessments of taste). So, human variation results in different tastes between humans - but only to a point.

Did you know that in Iceland, literally rotten shark meat is considered a delicacy? It is in fact the country's National Dish.

People from outside Iceland hate it. It has been called, "reminiscent of blue cheese but a hundred times stronger" and "the single worst, most disgusting and terrible tasting thing".

We could in fact also consider reactions to the 'less horrible' blue cheese itself. Possibly even Marmite (so famous for being either loved or hated by equal numbers of people that it has become common to describe other situations of profound disagreement between groups of people as a "Marmite issue").

Again, where's the universal?

Before I go on, I would like to ask you whether you think that Scenario 4 is so improbable that we can rule it out and focus on Scenarios 1, 2, and 3. I know you already said P1 and P3 are okay, so that's good. I also know you reject P2. But I would like to know if what you had in mind is captured by the disjunction of Scenarios 2 and 3. If not, is there any other alternative you think is worth considering, so that I include it as well?

I am going to be honest and admit with some embarrassment that I did not understand your scenarios very well.
 
Wiploc said:
But, back to our own discussion, I don't see how harm can be fitting or deserved if it doesn't accomplish anything. And I don't see how harm (retribution) without side effects (like rehabilitation) can be fitting or deserved.
Well, then, let me try an example. Imagine that two people, Bob and Jack, are marooned on a deserted island. There is no hope for them to return to civilization, and they both know it (it happened in the year 500 and they are in the middle of nowhere, in a place no one goes to, where they ended up by accident in a freak storm; or they are from our time but were taken by aliens to another planet and abandoned there; or whatever). Jack is a serial killer.

Scenario 1: Jack takes Bob by surprise. He hits him in the head, and when Bob is trying to get up, Jack stabs him repeatedly, and cuts him in many places. He laughs as Bob dies in a pool of his own blood. Jack lives the rest of his days on the island, alone. But he likes being alone - he hates people - and he enjoys recalling how he murdered his victims, the last one of which was Bob.

Scenario 2: Like Scenario 1 until Bob is dying in a pool of blood. But Jack did not know that Bob also had a knife - he just hadn't had time to grab it before Jack fatally wounded him. So, Bob knows he is dying and has no hope of returning. But Jack is very close, so Bob makes an effort and manages to stab Jack once before he loses consciousness, never to recover. But now Jack is fatally wounded, and a few minutes later, he dies as well.

In both scenarios, bad things happen. More precisely, Jack evilly kills Bob. But which world is less bad? (The rest of the world is not affected in any morally relevant way; there are some particles in different places, but nothing more.) I would say the world of Scenario 2 is less bad than the world of Scenario 1. Jack committed murder for fun in both cases, but in Scenario 1, he got to enjoy it for the rest of his life, whereas in Scenario 2, he did not; instead, he got punished as he deserved.

So, there is no difference in rehabilitation, deterrence, or anything. What makes the world of Scenario 2 better? That Jack suffers and dies as he deserves. That is why just retribution is a net positive.

Now you might say that at least Bob got to feel like he had done justice, whereas in Scenario 1 he did not have that. If you think that that makes Scenario 2 better (i.e., less bad), then no problem. Here's scenario 3:

Scenario 3: Like Scenario 1 until Bob is dying in a pool of blood. But Jack did not know that Bob also had a knife - he just hadn't had time to grab it before Jack fatally wounded him. So, Bob knows he is dying and has no hope of returning. But Jack is very close, so Bob makes an effort and manages to stab Jack once before he loses consciousness, never to recover. But now Jack is wounded. However, while Bob thought he had fatally wounded Jack, in fact the wound is a flesh wound, and not that serious. Jack recovers, and lives out the rest of his life on the island, alone. But he likes being alone - he hates people - and he enjoys recalling how he murdered his victims, the last one of which was Bob.

And again, Scenario 2 is less bad than Scenario 3. The difference? Justice.

Scene 1: Bob kills Jack.
Scene 2: Bob kills Jack so Jack kills Bob.
Scene 3: Bob kills Jack so Jack thinks he kills Bob.

You did a good job of inflaming my emotions against Bob.

Scene two makes the best story. It satisfies my narrative expectations. It's poetic. It satisfies my base urges if not my intellect.

Let me ask what is the point of justice? What makes it good? Was the Hatfields vs McCoys a good story because each side kept thinking it was getting justice on the other?

I think justice is a good idea because it is socially valuable; it increases world happiness. When you reduce it to just the desire for vengeance, you aren't doing the world any favors.

One happy man is better than everybody-dead. I think that's a fair assumption. I don't like Bob--screw him--but I think the happy survival of humanity for a few more years has to be seen as a good thing. Or else what is good?

Righteous anger? Is that what we want from life?




Wiploc said:
I think retribution was probably sometimes on-balance good back before we thought about what it meant. But when I read that the four reasons for punishment are rehabilitation, isolation, deterrence, and vengeance, that made sense to me.
But as I said before, take a look at people demanding justice for their loved ones, murdered by a murderer. Or rape victims demanding justice against the perpetrator. They generally want just retribution. They may well also want deterrence, or isolation. But their main motive is not that. Just look at their actions. It is just retribution. And most humans would seek it even if no other goal can be attained.

I might be irrationally angry too, in their situations, but I'd hardly promote that as a virtue. I might kill a rapist myself, but I wouldn't ask the law to do that, not if there was no purpose to it other than indulging my sense of righteous outrage.




Wiploc said:
When we dissect punishment, vengeance is the road-rage part, the malice, the desire to hurt out of anger and self-righteousness. It is the bad part.
When we look at just retribution, that is a part of morality. It is a good thing. Scenario 2 above is clearly the less bad of the three.

Wiploc said:
You think "justice" is an ordinary concept that everyone should understand. I think it is a controversial, blurred and rimless concept that has been disputed by experts for millennia.
You believe the same is the case of concepts like 'morally wrong', or 'morally obligatory', etc.?

Of course. What a question! You think the world agrees about morality? You hang out on a website where morality is disputed constantly.




What do you think of the scenarios I constructed above? You can easily construct a gazillion like those if you so choose.

I think morality co-evolved with humanity. The function is to allow us to live together, to get along, to cooperate as a group.

Therefore, we often run astray when, in order to illustrate a moral point, we eliminate the group.

People who don't understand rule utilitarianism will say things like, "Wiploc, you're a utilitarian. What if I meet someone in the woods, and I wonder what it would be like to kill him? What if I think it would give me pleasure to kill him? And what if he's sleeping, and I can kill him without his ever knowing about it? So it will cause him no unhappiness, but it will give me happiness by satisfying my wicked curiosity. And we're so far out in the woods that nobody will ever find his body. And he has no relatives or friends, nobody to be unhappy for him. And he's a miserable sod, always unhappy himself. So, if I kill him, I decrease his unhappiness and increase my happiness; I increase the sum total of world happiness. According to utilitarianism, then, I'm supposed to kill him, right?"

And the answer is no. There is a strong tendency for murder to decrease world happiness. That's why we have a rule against murder.

The rule won't work if we change it to something like, "Do not murder unless you have rationalized that murder is good in your case." A rule like that would increase rationalization rather than decreasing murder.

You think that, by taking Bob and Jack out of the world, you have created a situation illustrating that flying into a rage is a virtue. But all it really is--in that isolated situation--is poetic irony. A good story.

If we put Bob and Jack back into society, then Jack's killing Bob becomes an actual good thing. It prevents Bob from having a next victim. Doesn't matter whether you call that rehabilitation or isolation, it's good. Just not because of the brute animal emotions that you hold up as virtues.
 
ruby sparks said:
Angra Mainyu said:
And the difference between free choices and unfree choices is that the latter involve external coercion or internal compulsion.
Well then I'm afraid your definition is, imo, a complete fudge, akin to calling the tip of an iceberg an iceberg, calling the natural universe god, or saying that the sun goes around the earth. It (your definition) may be the nub of all our disagreements about free will, and may make it pointless for us to explore in detail, so apologies for not dealing with all your recent posts.

You have a watertight, logical, semantically correct definition of free will and so what you are calling free will can exist.
No, that is not at all what is happening. I am using expressions 'of one's free will', 'of one's own accord', and so on, in their common senses. I use them intuitively. I do not have a stipulative definition of them. When I say that the difference between free choices and unfree choices is that the latter involve external coercion or internal compulsion, I am not stipulating the meaning. Rather, I am making an assessment, on the basis of linguistic evidence about how English speakers 'in the wild' use those words.

ruby sparks said:
It's just that it does not seem to actually describe what is going on, and as such is either inaccurate, mislabelling, merely colloquial, folk-psychological, and/or effectively meaningless.
It's the other way around. Rather, what you do is use the term 'free will' inconsistently (see previous posts by Bomb#20 and by me), and at best - if you reject the contradictory parts - you use it in a way that does not match the meaning in expressions such as 'She did it of her own free will', which means the same as 'She did it of her own accord'. The meaning of such expressions is what matters morally. Whether people behaved immorally, how immorally, etc., does not depend on whether determinism is true (and yes, that means 'full' determinism), but on whether or not they acted of their own free will, to what extent, etc., in the usual meaning of those expressions.

I already said that in several posts, and provided evidence about the meaning of the words by means of examples about how people use those words. Bomb#20 already made similar points (see, for example, this post). You, on the other hand, keep claiming it is a "fudge", or a "complete fudge", etc., but you have not provided any evidence whatsoever in support of your claims about the meaning of the words. Instead, you make points about neuroscience, but those miss the point entirely, because our disagreement is not about neuroscience, but about what the words mean, and more precisely, what the expression 'of one's own free will' and similar ones means in common speech. As Bomb#20 explained, you are fudging the definition, not us.

ruby sparks said:
Thank you for discussing free will with me. It was enjoyable and civilised, and we've given it a good go without coming to blows, but I think we may have to agree to disagree.
You're welcome, and thanks for the discussion too. I hope it has been or will be useful for at least one reader.

ruby sparks said:
We can still try to discuss morality instead.:)
We can discuss other parts of morality if you like, sure. :)

There is one difficulty, though: without free will, we would have a moral error theory, so the only metaethical question left would be which sort of error theory holds. This is so because when freedom is restricted, so is moral blame, all other things equal. If freedom is completely eliminated, no one is ever guilty of anything. Convicting someone who did not act of their own free will (not even a little bit; if one acts a bit freely despite significant coercion, there is room for guilt, depending on what one did) would be a case of convicting an innocent person, regardless of whether actual court decisions would be different.

Still, there is room for discussion about moral language, and so about the sort of moral error theory that obtains.
 
What happens is that, as Alice and Bob are not identical, their brains will not yield the same verdicts about everything. For example, maybe Bob finds Mary sexually attractive, but Alice does not. That of course does not need to imply any malfunctioning. On the other hand, they yield the same verdict in the bear-counting case, and in many, many others - save for malfunctioning. P2 is the hypothesis that this phenomenon - which holds for many, many subsystems in humans, in computers, and in many other systems - also holds for the moral sense.

I hope to respond to the rest of your post later. For now, the above caught my attention.

Why do you not suggest that there is a universal for 'sexually attractive'?

I think you may have hit a nail on the head there. The number of bears is an objective, empirically measurable fact about the world outside your head. Sexual attraction and morality are merely differing mental judgements.

Unless you are assuming a universality for one but not the other, which would seem to be (a) assuming conclusions and (b) being inconsistent when doing so.

That is the wrong part to catch your attention. That part of my point was that you did not need identical people or machines to get the same verdict.

Still, I will address your points. There is a question, and an objective fact of the matter, as to whether Bob finds Mary sexually attractive. And there is an objective fact of the matter as to whether Alice finds Mary sexually attractive. But there is no good reason to think that if Bob finds Mary sexually attractive but Alice does not, then either Bob or Alice has some malfunctioning system. In fact, our observations about the world around us, human behavior and that of other species, etc., clearly indicate they do not have to be in any way ill.

If Bob says 'I find Mary sexually attractive' and Alice says 'I do not find Mary sexually attractive', both statements might be true. Alice is talking about Alice, and Bob about Bob.

Now, if Bob says 'Mary is sexually attractive' and Alice says 'I disagree. She is not.', then it depends on the context, but the use of the expression 'I disagree' suggests that they are talking about whether Mary is in fact sexually attractive to some set of beings S, which may be clear from the conversation. It might be that the set is implicitly normally functioning human males (or straight males, or whatever), and that they are talking about whether she is more attractive to people in that set than the average female is, or something like that. What they mean depends on context.

Now, suppose that Alice says 'McConnell acted immorally when he supported the confirmation of Kavanaugh', and Bob says 'McConnell did not act immorally but in a praiseworthy manner when he supported the confirmation of Kavanaugh'. Then either Alice is in error, or Bob is in error (if you were right that McConnell did not act of his own free will, then Bob is correct). This is how people in the wild speak (a long time ago, I used to be in error, but we humans can learn and improve :)). If you claim otherwise, the burden is on you.

Now, it might be that even though either Alice is mistaken, or Bob is mistaken, their respective moral senses are not malfunctioning, and the difference in their assessments is due to a difference in input. For example, maybe Alice believes that when McConnell voted, he knew that Kavanaugh had done such-and-such thing, but Bob believes that when McConnell voted, he knew that Kavanaugh had not done such-and-such thing. That would explain the disagreement, without a malfunctioning moral sense. Of course, it might be that there is a malfunctioning epistemic sense that leads one of them to have the beliefs they have. Or maybe they just got different info. There are plenty of possibilities, but here's the key point: suppose that Alice and Bob have exactly the same beliefs about what beliefs McConnell had. Suppose they have the same beliefs about what McConnell intended to do. Suppose they have the same beliefs about every other property their respective moral senses use to get the output about whether McConnell acted immorally. Then, the disagreement results from some malfunctioning moral sense. If that were not the case, then we would have something like Scenario 2 or Scenario 3, which in turn would lead to either some kind of relativism or a moral error theory (I mean, even accepting that we act of our own free will; the error theory would be for a different reason in this case).

If you disagree, as I said, the burden is on your side, since I'm going with ordinary human experience. But I still offer to make an argument, provided that you answer the question about your position that I asked in the last paragraph in this post. I need an answer to that question before I can go on, because otherwise any attempt to reply would be unmanageably long (and likely mostly ignored anyway).
 
ruby sparks said:
When you agree that different judgements about both gustatory taste and sexual attraction can exist without what you are calling a malfunction, can this not also be applied to different judgements about morality?

And if so, and you are allowing for differences without a malfunction, where is the universal?
I am only saying it is a human universal. Aliens would be a different matter (and no, I do not expect it to be universal across the universe, no pun intended).

But let's consider first the following example. Consider the following exchange:

Alice: Tomatoes are tasty.
Bob: Tomatoes taste really bad.
Alice: Well, there is no objective fact of the matter. I like them.

That is a realistic exchange. It would be ordinary, and would very probably not continue with further debate. The differences in human gustatory taste are part of our ordinary experience, and people ordinarily respond to that by saying there is no fact of the matter, or no objective fact of the matter, or similar expressions. That is the default human position. If someone claimed that humans are generally in error and there is a fact of the matter, that would be an extraordinary claim, and correctly deemed very improbable unless there is good evidence/argumentation to back it up.

But now consider the following exchange (which, for example, might be the start of one of the ubiquitous fights in the Political Discussions forum, or a similar one):

Alice: McConnell's support for the confirmation of Kavanaugh was immoral behavior.
Bob: McConnell's support for the confirmation of Kavanaugh was not immoral behavior.

Readers would ordinarily and intuitively reckon that either Alice is making a false statement (deliberately or not), or else Bob is. In fact, one could expect that Alice and Bob (apart from insulting each other) would try to give arguments in support of their respective positions, at least if they have the time and interest to get into a debate. It is implicit in that sort of conversation that there is a fact of the matter as to whether or not McConnell's behavior was immoral.


Here's a third example: color.

Alice: Bob crossed the street when the light was red.
Bob: The light was green.

Here too, readers would ordinarily and intuitively reckon that either Alice is making a false statement (deliberately or not), or else Bob is.

So, there is a difference between how humans ordinarily react to cases of different gustatory taste assessments (within a certain range; see my rotten meat, rotten eggs, and horse droppings example) on the one hand, and moral assessments and color assessments, on the other. That is the default position, so to speak.

The matter of malfunctioning is related to that. In those cases in which there is a fact of the matter, then someone is making an untrue statement. If it is not deliberate, then it is either due to malfunction, or due to different inputs. For example, if Alice really sees the light as red, and Bob really sees it as green, then the visual system of at least one of them is not functioning properly.

Do you disagree with some of the above?
ruby sparks said:
Did you know that in Iceland, literally rotten shark meat is considered a delicacy? It is in fact the country's National Dish.


People from outside Iceland hate it. It has been called, "reminiscent of blue cheese but a hundred times stronger" and "the single worst, most disgusting and terrible tasting thing".

We could in fact also consider reactions to the 'less horrible' blue cheese itself. Possibly even Marmite (so famous for being either loved or hated by equal numbers of people that it has become common to describe other situations of profound disagreement between groups of people as a "Marmite issue").

Again, where's the universal?
First, while I did not know that, I did say rotten meat, rotten eggs and horse droppings, combined. That works. But if you think it does not, I can always go further: fresh cat feces. Surely, if a human likes the way they taste, something is wrong with their gustatory taste. Not enough? How about, say, recently used toilet paper? :) If a human likes the way it tastes, you do realize that they are not healthy, and that something is malfunctioning?

Second, I did not say there was a universal such that there were no different human gustatory tastes within the range of healthy systems. In fact, I said just the opposite, so you are not at all contradicting any of my points.

Third, consider this from an evolutionary standpoint: imagine a lion that likes grass better than meat. Surely, something is wrong with the lion. Or take a look at a documentary about, well, pretty much any animal. There is some stuff they eat, and some stuff they do not eat, even among the stuff that is within reach. So, they make distinctions based on the way things smell (or taste; actually, smell is more relevant, and I'd say also in humans). Humans are not created by Yahweh in his image and separated from the rest of his creation. :) Humans come from a long evolutionary process. There is no way that senses that allowed our ancestors to stay alive by choosing what to eat and what not to eat (imperfectly, of course, as there are some poisonous tasty things, but still it generally works) just went away.

Fourth, I actually do not need a human universal on taste to support any point about morality, as I will use gustatory taste precisely as a contrast to morality or color. Still, I make the points above for the sake of truth.

ruby sparks said:
I am going to be honest and admit with some embarrassment that I did not understand your scenarios very well.
Okay, can you tell me which part you do not understand, so that I can try to clarify it?
 
Wiploc said:
Scene 1: Bob kills Jack.
Scene 2: Bob kills Jack so Jack kills Bob.
Scene 3: Bob kills Jack so Jack thinks he kills Bob.

You did a good job of inflaming my emotions against Bob.
No, that is not it. Jack is the perpetrator. Bob is not guilty.


Wiploc said:
Scene two makes the best story. It satisfies my narrative expectations. It's poetic. It satisfies my base urges if not my intellect.
It's the morally best scenario. The most just of them all. What do you mean by your intellect? When you make moral assessments, you use your moral sense. I constructed those scenarios so that, in using your moral sense as you usually do, you reckon that Scenario 2 is the least bad of the three. The only difference that can explain that is that in Scenario 2, Jack is punished as he deserves, in retribution for his heinous act of murder for fun. So, you do realize, at least at an intuitive level, that just retribution is a good thing. :)


Wiploc said:
Let me ask what is the point of justice? What makes it good?
You are missing the point. Justice does not have a further point. It is an end, not a means to an end - well, secondarily, it can be a means to deterrence and whatnot, but it does not need to be.
What makes it good? I think that justice is a good-maker, not something that needs a good-maker. In other words, some behaviors are good because they are just, and they need no further good-maker.

It's like asking what is the point of not behaving immorally, or what makes a world in which people never choose to behave wrongfully a better world than a world in which they do, all other things equal? It just is better. If there is a further truth-maker, I do not know it. But I can make a moral assessment using my moral sense (usually, we do not need to know the truth-makers in order to make true assessments).

Wiploc said:
Was the Hatfields vs McCoys a good story because each side kept thinking it was getting justice on the other?
I'm not familiar with it. But from what I see, there was plenty of injustice and plenty of evil acts on the part of people on both sides. It does not seem related to the matter at hand.


Wiploc said:
I think justice is a good idea because it is socially valuable; it increases world happiness. When you reduce it to just the desire for vengeance, you aren't doing the world any favors.
No, that is not it, and I have already shown you that with an example. In Scenario 2, just retribution reduces world happiness. Indeed, compare Scenarios 2 and 3:

Scenario 2: Jack takes Bob by surprise. He hits him in the head, and when Bob is trying to get up, Jack stabs him repeatedly, and cuts him in many places. He laughs as Bob dies in a pool of his own blood. But Jack did not know that Bob also had a knife - he just hadn't had time to grab it before Jack fatally wounded him. So, Bob knows he is dying and has no hope of returning. But Jack is very close, so Bob makes an effort and manages to stab Jack once before he loses consciousness, never to recover. But now Jack is fatally wounded, and a few minutes later, he dies as well.

Scenario 3: Jack takes Bob by surprise. He hits him in the head, and when Bob is trying to get up, Jack stabs him repeatedly, and cuts him in many places. He laughs as Bob dies in a pool of his own blood. But Jack did not know that Bob also had a knife - he just hadn't had time to grab it before Jack fatally wounded him. So, Bob knows he is dying and has no hope of returning. But Jack is very close, so Bob makes an effort and manages to stab Jack once before he loses consciousness, never to recover. But now Jack is wounded. However, while Bob thought he had fatally wounded Jack, in fact the wound is a flesh wound, and not that serious. Jack recovers, and lives out the rest of his life on the island, alone. But he likes being alone - he hates people - and he enjoys recalling how he murdered his victims, the last one of which was Bob.

Note that Bob suffers just as much in both scenarios. On the other hand, Jack is happy in Scenario 3, enjoying the memories of how he carved up and killed his victims. He gets off recalling how they died, choking on their own blood, pleading for mercy. It is a happier world than the world of Scenario 2, in which Jack bleeds to death, justly killed by his last victim. Note that the world that contains the greater amount of happiness of the two is the worse world of the two. The world with less happiness is better. And the amount of suffering is the same.


Wiploc said:
One happy man is better than everybody-dead. I think that's a fair assumption. I don't like Bob--screw him--but I think the happy survival of humanity for a few more years has to be seen as a good thing. Or else what is good?
Well, humanity can survive elsewhere if you like (place Bob and Jack on a distant planet where aliens put them, or on the island I mentioned, or whatever), but even if they were the two last humans in the universe, Scenario 2 is better. In Scenario 3, what survives is not an abstraction 'humanity', it is a serial killer who enjoys recalling how he carved up his victims, how they slowly died crying and pleading. The survivor deserves to be killed. It is a bad thing that he survived. If you use your own sense of morality to make an assessment (rather than an ideology or philosophical theory), you see that (you already did).


Wiploc said:
Righteous anger? Is that what we want from life?
No, it is not righteous anger, or any anger for that matter. The amount of anger in Scenario 2 is exactly the same as the amount of anger in Scenario 3.


Wiploc said:
I might be irrationally angry too, in their situations, but I'd hardly promote that as a virtue. I might kill a rapist myself, but I wouldn't ask the law to do that, not if there was no purpose to it other than indulging my sense of righteous outrage.
First, you misunderstand the exchange. You made a claim about the reasons for punishment. I'm explaining that, as a matter of fact, the main reason is just retribution.

Second, why would you think that calling for justice is not a virtue? They might not even call for justice outside the law. In many cases, they demand that the perpetrators be put on trial and sent to prison, in retribution for their actions.

Third, you would not ask the law to kill a rapist. Why not? Because you think he does not deserve it? Fair enough, but what about a serial killer like Jack above, if he were reachable by law enforcement?

Wiploc said:
Angra Mainyu said:
Wiploc said:
You think "justice" is an ordinary concept that everyone should understand. I think it is a controversial, blurred and rimless concept that has been disputed by experts for millennia.
You believe the same is the case of concepts like 'morally wrong', or 'morally obligatory', etc.?

Of course. What a question! You think the world agrees about morality? You hang out on a website where morality is disputed constantly.
No, what is disputed on those websites is who acted immorally, how immoral it was, etc. But the people debating understand what the terms 'morally wrong', 'morally obligatory', etc., mean. If they did not, they would not be talking in the first place. The same goes for the concepts of just retribution and justice. These are ordinary concepts.

Wiploc said:
I think morality co-evolved with humanity. The function is to allow us to live together, to get along, to cooperate as a group.
Sure, but that is not what I asked.

Wiploc said:
Therefore, we often run astray when, in order to illustrate a moral point, we eliminate the group.
That does not follow. The reason I eliminated the rest of the group from the picture was to isolate the variables. In the scenarios, the differences do not involve deterrence, or rehabilitation, etc. Isolating the variables is not something that makes us run astray. Rather, it is something that allows us to study one matter - in this case, just retribution - without risking contamination with other things that might affect our judgments. It is a standard procedure for studying a phenomenon.
If we want to study a human moral assessment and we want to know whether what prompts the assessment is always a difference in deterrence, or rehabilitation, etc., or whether it can be prompted by a difference in just retribution, then using scenarios with the same amount of deterrence, rehabilitation, etc., but different amounts of just retribution tells us that just retribution at least is enough to trigger our assessment. As a bonus, clearly we can see that increased happiness is trumped by decreased justice, in terms of which scenario is better.

To see that the other things (deterrence and the like) on their own do not trigger the assessment, we would need further scenarios. But now we know that just retribution does (well, some of us know, but you should too, after reading the scenario and understanding it; you are failing to accept that, though you have not offered a good reason).


Wiploc said:
People who don't understand rule utilitarianism will say things like, "Wiploc, you're a utilitarian. What if I meet someone in the woods, and I wonder what it would be like to kill him? What if I think it would give me pleasure to kill him? And what if he's sleeping, and I can kill him without his ever knowing about it? So it will cause him no unhappiness, but it will give me happiness by satisfying my wicked curiosity. And we're so far out in the woods that nobody will ever find his body. And he has no relatives or friends, nobody to be unhappy for him. And he's a miserable sod, always unhappy himself. So, if I kill him, I decrease his unhappiness and increase my happiness; I increase the sum total of world happiness. According to utilitarianism, then, I'm supposed to kill him, right?"

And the answer is no. There is a strong tendency for murder to decrease world happiness. That's why we have a rule against murder.
Of course, it will usually decrease world happiness. But that is not why we have such a rule. If by 'why' you mean causation, the moral rule against it exists because it improved reproductive success by allowing our ancestors to cooperate better, or something like that. If by 'why' you mean why people actually punish murder, for the vast majority of the time humans have been around there was no written law and no courts, and the main reason for killing murderers was just retribution for what they did. If you talk about present-day laws, well, people who pass the laws have different and multiple motivations, but surely just retribution is generally at least one of them, alongside others like deterrence (of course, a brutal dictator might have the rule just for deterrence, but that is a very evil case).


Wiploc said:
You think that, by taking Bob and Jack out of the world, you have created a situation illustrating that flying out in a rage is a virtue. But all it really is--in that isolated situation--is poetic irony. A good story.
No, not at all. There was no difference between Scenario 2 and Scenario 3 in terms of rage. The relevant difference is that in Scenario 2, there is more justice than in Scenario 3. In Scenario 3, there is more happiness and no more suffering. But that's not relevant morally. Justice is more important morally.

Wiploc said:
If we put Bob and Jack back into society, then Jack's killing Bob becomes an actual good thing. It prevents Bob from having a next victim. Doesn't matter whether you call that rehabilitation or isolation, it's good. Just not because of the brute animal emotions that you hold up as virtues.
Jack is a brutal serial killer. But Bob's killing Jack in retribution for Jack's actions is a good thing, not in society but in the scenario. You see, if you put them back into society, you contaminate the scenario. The variables are no longer isolated, so you can insist that the good thing is rehabilitation, isolation, etc. But when the variables are isolated, one can see very clearly that even in the absence of rehabilitation, isolation, or whatever, just retribution is a good thing in and of itself. That is what the scenario accomplished; even if you fail to realize that because it is in conflict with your theory, your intuitive moral sense did recognize it (you just mixed up the names).
 
Clarification:
me said:
The amount of anger in Scenario 2 is exactly the same as the amount of anger in Scenario 3.
I meant the amount of anger of each kind, not only the total amount. In particular, the amount of righteous anger is exactly the same.

me said:
As a bonus, clearly we can see that increased happiness is trumped by decreased justice, in terms of which scenario is better.
To be clear, I meant in that particular scenario; it's not that it would always be so, but it depends on the situation (in fact, while increased happiness is a positive in usual situations, in this particular one it is a negative because of who the extra happy person is).
 
... Sara's daughter didn't kill Joe's daughter; therefore Joe killing Sara's daughter is a first-strike. It's not a counterattack. It's not symmetry. I.e., it's not retribution.

2. Surely you knew that -- when have you ever heard anyone advocate retribution against innocent people? So why did you construct that example?

I was trying to unmix the motives. I made it an accident so that deterrence and rehabilitation wouldn't factor in.
Well, that's not a good way to do it. Making it an accident means retribution doesn't factor in, at least if the aggrieved party isn't a nutjob. In contrast, it's perfectly feasible to deter people from having accidents: if they anticipate punishment they can avoid it by being more careful, taking fewer chances, thinking about whether they really need to do the hazardous activity at all today, and so forth.

Presumably, because you want your readers to think retribution is retribution, there are no relevant distinctions to be made within that category,
I don't understand the category at all. I don't know about these "relevant distinctions" of which you speak. This is the first I've heard of them.
You haven't heard of the distinction between deserved retribution on the one hand versus disproportional retribution and misdirected retribution on the other? You haven't heard of the difference between punching the guy who punched you to pay him back tit-for-tat, and shooting him? You haven't heard of the difference between flipping off the driver who cut you off, and yanking her dog out of her car and throwing him into traffic? That's a serious ivory tower to be in.

and killing Sara's daughter for Sara accidentally killing Joe's daughter is morally on a level with killing Sara for Sara deliberately killing Joe's daughter. But if those scenarios really were no different, then what's your motivation for the switch?

I tried to construct a pure-retribution scenario. What's your motive for the accusative tone of your post?
I didn't mean to accuse you of anything other than being a perfectly normal human being: i.e., a person with cognitive dissonance who manages his internal contradictions by compartmentalization.

If I came off as hostile, sorry. I was probably feeling a little hostile. You wrote "We have an advocate of retribution in this thread. ... I'm not on Joe's side. I think he's irrational. I do not favor retribution." That's insinuating that retributionists are on Joe's side. You ought not to have done that. Retributionists are not on Joe's side either. If you didn't know this, that just means you haven't been listening to us with your listening ears on.

How did you know killing the innocent would pack more emotional punch than killing the guilty, unless you share the emotion? This implies you must understand at least on a subconscious level that bogus so-called "retribution" against the innocent really isn't the same thing as actual retribution against the guilty. So there appears to be a self-contradiction baked into your argument.

I'm not sure we're going to get along.
Possibly not. I take it you're offended. I'm not sure why. This is the "The Great Contradiction" thread. It's a thread for T.G.G. Moogly to say Christians have a self-contradictory world-view, and for remez to say T.G.G. Moogly has a self-contradictory world-view, and for me to say you have a self-contradictory world-view, and so forth. And you haven't exactly kept to yourself your opinion that our world-view makes no sense.

Are you offended because I'm playing amateur psychoanalyst at you? Sorry, but what do you expect? You made an argument that comes off as crazy.

3. You're trying to arouse an emotional reaction against retribution, and that's fine -- all moral arguments are emotional arguments -- but you're doing it by making Joe some sort of primitive Bible-writing bronze-age goat-herding bigot who thinks children are property.
You keep bringing up religion. Did I bring up religion?
Yes, indirectly. What you proposed as a pure and canonical example of retribution is a monstrosity unlike anything any of your opponents have ever advocated. What you are showing us would be an alien concept to us, and probably to you as well, except that our culture is already intimately familiar with the concept of it being good and proper to attack somebody by killing his children. It's a concept we've been exposed to from one source: the Bible. God tells the Israelites to revenge themselves on the Amalekites by killing them all including their children and livestock. God tests Job's loyalty by committing an injustice against Job by murdering his children and employees; and then afterwards God makes everything alright again by supplying Job with new children and employees. Adam disobeys God and God takes his anger out not just on Adam but on all Adam's descendants. So what the heck are we supposed to be reminded of when your villain revenges himself on the daughter of the perpetrator, if not religion? We certainly aren't reminded of our own concept of just retribution.

Retribution makes no sense to me. It seems pure villainy.
Utilitarianism -- especially rule utilitarianism -- makes no sense to me. It seems, if not pure villainy, then innocent of villainy only by reason of insanity. To me the notion of punishing a crime by killing an innocent looks like it fits a philosophy of maximizing total happiness far better than it fits a philosophy that takes into account who deserves happiness and who doesn't. If the purpose of punishment is deterrence then it makes not a particle of difference whether the punished person is guilty -- it only matters whether he's popularly thought guilty, or whether he's loved by the guilty person.

But maybe you can explain.
You want me to explain retribution? I'll have to explain morality. I doubt if either of us really has time for that; but here's a synopsis. Morality is not specifically human behavior. It's monkey behavior. When a philosopher comes up with a moral theory like Utilitarianism or the Categorical Imperatives or Divine Command Theory or what have you, there are two ways he can do it. He can do it by trying to conform his theory to the moral judgments being issued by the inherited gadgets in our brains that our monkey ancestors evolved to carry out the moral judgment function they needed, or else he can do it by defining morality to be whatever his theory says it is and then trying to reprogram our monkey brains to conform to his theory.

To my mind, the latter approach is completely wrong-headed. If we assume our monkey brain's morality organ isn't up to the challenge of competently judging the concrete moral situations it evolved to analyze, then what in god's name would make anyone imagine it's up to the challenge of competently judging an abstract theory of ethics? Conversely, if you aren't judging Utilitarianism or what have you by using your intuitive moral sense, then what the bloody hell else have you got to judge it with, to fall back on? Aesthetics?

When you claim retribution is villainy, you're de facto claiming the evolved monkey moral sense is villainous. That is a claim that requires extraordinary evidence, because it is an extraordinary claim. Supposing our evolved moral sense really were in point of fact villainous, how do you figure the human race could in principle possibly discover that fact?

Earlier you wrote:
Is Joe's motive vengeance? No, he's not acting in anger. He's acting only in the belief that retribution is somehow good.
You probably think you get to specify that, because it's your scenario and Joe is your fictional character. And that's true, if what you're writing is science fiction. But when you do that, you thereby make Joe some kind of space alien with motivations you label "retribution" but which are unrelated to the actual motivations of a real human being seeking retribution.

If Joe is intended to be a human, then of course Joe is acting in anger. Of course Joe's motivation is vengeance. Joe seeks retribution because Joe is a human, which is to say, Joe is a monkey.

Maybe you can do a better job of separating out other motives.
I think I can. Tara was a general's secretary. She's secretly a Nazi sympathizer; she betrayed a military secret to the Germans; consequently Doe's son was killed in action. Doe wants Tara dead. His motive isn't to prevent her from doing it again -- the Army found out what she did and fired her. She'll never have access to a military secret again; and even if she already knows another secret, the Germans are about to surrender. Doe's motive isn't to deter third parties -- if he kills her, he'll never be able to reveal to the public why he killed her, for the same reason the government merely fired Tara and didn't shoot her as a spy: the secret itself would become public. The Russians would find out. They're our nominal allies but they're about to be our new enemies. No, Doe's motive for killing her is straight-up retribution for her betraying her country and for his dead son.

So let's change it to cars rather than daughters.
Good plan. If you also change it to Sara having caused the car wreck on purpose, I'll get off your case. Carry on.

I'm not interested in arousing emotions against it. I'm trying to tease out justifications for it. How is it to be distinguished from arbitrary cruelty?
Arbitrary cruelty doesn't care whether the target is guilty, or whether the harm is proportional to the wrongdoing. If somebody is carefully doing it only to the seriously guilty and not to the innocent or to the relatively trivially guilty, then it evidently isn't arbitrary.

Or, in the alternative, I'm trying to show that there are no justifications for it.
You don't appear to be an error theorist, so I take it you think there are justifications for some actions, yes? What do you think justifies anything? Presumably, conformity to some rule that maximizes happiness, yes? Well, why do you think an action having that property qualifies as justification? Presumably, via some causal chain of thought that ultimately traces back to an intuitive endorsement from your monkey brain's morality engine, yes? Well, if endorsement by your monkey brain is enough to justify greatest-good-of-the-greatest-number, why isn't endorsement by most of the other monkey brains enough to justify retribution?
 
Scenario 1. Universal human moral sense.

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.

Scenario 2. Culture-relative moral sense.

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.1: If two humans have properly functioning moral senses and the two humans are members of the same group, they (i.e., the moral senses) will yield the same outputs given the same inputs. The groups are instinctively formed depending on social interaction.
P2.1': If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs within some proper subset of the moral domain (which subset is a matter for future research in human moral psychology).
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.


Scenario 3. Individual taste moral sense.

P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.2: If two humans have properly functioning moral senses, they (i.e., the moral senses) might not yield the same outputs given the same inputs, even if the two humans belong to the same culture, with the exception of some proper subset of the moral domain (which subset is a matter for future research in human moral psychology).
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.

Scenario 4. All other alternatives.
(As before, P2 should be understood as allowing some room for vagueness - but not much; still, we should leave that for later, to keep it manageable.)

Before I go on, I would like to ask you whether you think that Scenario 4 is so improbable that we can rule it out and focus on 1., 2., and 3. I know you already said P1 and P3 are okay, so that's good. I also know you reject P2. But I would like to know if what you had in mind is captured by the disjunction of Scenarios 2 and 3. If not, is there any other alternative you think is worth considering, so I include it as well?

I am still struggling to understand your scenarios 1-4 despite having reread them, but I will attempt an answer nonetheless.

First, as a preface to answering, how would it be if I amended your scenario 1 title to 'species moral sense'? I'm not saying 'universal human' is wrong, but species seems to describe what you are referring to (albeit specifically the human one). This change might avoid (at least my) confusion over the word 'universal' and also allow other species to be included in wider consideration (many species also consist of groups and individuals, and we are after all part of the animal kingdom).

The hierarchy would then start (at the lower end) with 'individual', move up to 'group' (which might be roughly synonymous with culture, at least for a social species), then 'species' and then 'all creatures on earth' (perhaps I should say lifeforms). There might even be a higher category of 'extra-terrestrial creatures/lifeforms' but this may not often need to come into play except in potentially useful hypotheticals about aliens.

I don't much mind if you stick with 'universal human', now that I think I understand what you mean better and if we mean the same thing.

So much for my suggested category labels, my preface to answering.

Now I am stuck trying to proceed to my actual answer, because I still don't understand the distinctions between your scenarios. Which is a bit annoying for me.

All I might wonder about, and it might be only a gut feeling since I have not thought it through completely (so many questions, so little time), is whether the scenarios are in fact exclusive of one another, or whether any given moral judgement results from a mixture of them all, as a result of a set of 'nature + nurture, individual + cultural/social' processes and influences.

If that doesn't make sense to you, it may be because I am still unsure about (i.e., not fully understanding) your set of scenarios. As such it may not be much of an answer.

But I think it best to move on, because (and arguably more importantly than getting stuck on the above), after reading all your subsequent posts, which I may or may not have time to respond to fully, I am, I think, starting to warm at least somewhat to what you seem to be saying, if I understand it right.

If what you are essentially saying is that humans have at least some innate moral preferences that are common enough to be called 'endemic to and within the species' (I prefer endemic to universal because the latter implies 'in all instances', which is not what manifests, for any individual moral judgement I can think of) then I think it would be possible to broadly agree, despite my initially disagreeing.

I think the above is borne out, or at least suggested and supported by, studies on the (moral) behaviour of infants. I think we are talking about evolved traits here, at least as regards a starting position (what we are born with, the dispositions that are innate, which may be modified thereafter). As such I am now thinking that my saying stuff such as 'there are no moral facts out there in the world outside our heads' is redundant, and quite possibly missing the point, because I'm now thinking that you were never talking about that or suggesting it, or indeed anything 'truly or ultimately objective', and that you are only talking about 'internal rules' (inside human brains).

The above is the net result of me having tried to absorb and pull together a number of things you have been saying, and I am thinking that limited agreement is a more useful basis for proceeding than going over the details of disagreement or confusion. But if there was any specific statement of yours that you would like me to revisit, because you think it is important enough, just point me to it.
 
... what you do is to use the term 'free will' inconsistently (see previous posts by Bomb#20 and by me), and at best - if you reject the contradictory parts - you use them in a way that does not match the meaning in expressions such as 'She did it of her own free will', which means the same as 'She did it of her own accord'. The meaning of such expressions is what matters morally. Whether people behaved immorally, how immorally, etc., does not depend on whether determinism is true (and yes, that means 'full'), but whether or not they act of their own free will, to what extent, etc., in the usual meaning of those expressions.


I don't intend to delve right back in to our discussion on free will, but.....



I do not know what inconsistency you keep referring to, or whether it is still relevant. Can you restate it briefly and succinctly for me?



What people usually mean seems to me an unreliable measure.

It could be nothing more than them merely being colloquial or folk-psychological.

People could be mistaken when they try to understand or describe what is actually taking place, as when saying (before anyone knew better) that the sun rises in the morning. This is a problem with introspection.

Or, what they mean might not take all relevant and important factors into account. I have given many examples and analogies (which I am not sure you have fully dealt with). Another might be saying "a car is travelling at 60mph down a steep hill because the driver is pressing the accelerator a certain amount", when in fact gravity is also contributing, and indeed friction, the gearing of the car, and so on. This is the tip of the iceberg analogy and is also a problem of the limitations of introspection.

Or, what they mean may be effectively meaningless (akin to god = the natural universe).

Finally, what in fact do people usually mean? People's definitions for free will may actually differ under detailed scrutiny and often without much scrutiny at all (they may differ substantially and quite obviously). Do they know what they mean, or have they just not considered it enough? Are they just being conveniently pragmatic? Are they using presumptions that are outdated and that recent science is now undermining, leaving them under-informed, as the law courts arguably are? (see below in blue)*




I fear I have repeated myself there in the green part.

In a nutshell, there seem to be several ways in which 'what people usually mean' is problematical.







* "We tend to regard a person’s acts as the product of his or her choice, not as events governed by physical laws. This view (roughly, the hypothesis of free will and the rejection of determinism) is of course hotly contested in philosophical literature. But whether accurate or not, the assumption of free will reflects the way most people in our culture respond to human action, and it reflects, most importantly, the premise on which notions of blame in the criminal law ultimately rest".

(Sanford H. Kadish, Stephen J. Schulhofer, Carol S. Steiker & Rachel E. Barkow, Criminal Law and Its Processes 591 (9th ed. 2012).)
 
There is free will in the same way there is experimental philosophy. It's a rationalization, of no value, insubstantial, without basis. Or as Bridgman, not Skinner, would have said: "show me the operations."

The above relates to the OP as follows: The brain is a human construct based only on an organ. Normally, those who choose to identify behavior as controlled or managed by the brain ignore the obvious interrelations by which the brain serves, or otherwise interacts with, the body's other systems. In substance it becomes a hypothesis served by the magic of naming rather than by what the material operations involved tell us. One has to argue that only if the brain decides does any notion of will exist. Self-identity is a consequence of a being being alive and surviving among other beings which might harm it. It is not a reason for some special word-magic capability to exist.
 
ruby sparks,

I will get to your other post later, as it takes more time. For now, briefly:

... what you do is to use the term 'free will' inconsistently (see previous posts by Bomb#20 and by me), and at best - if you reject the contradictory parts - you use them in a way that does not match the meaning in expressions such as 'She did it of her own free will', which means the same as 'She did it of her own accord'. The meaning of such expressions is what matters morally. Whether people behaved immorally, how immorally, etc., does not depend on whether determinism is true (and yes, that means 'full'), but whether or not they act of their own free will, to what extent, etc., in the usual meaning of those expressions.


I don't intend to delve right back in to our discussion on free will, but.....



I do not know what inconsistency you keep referring to, or whether it is still relevant. Can you restate it briefly and succinctly for me?

See, for example, this post and this post. I was talking about the general pattern of the thread. After you retracted your claim about the definition here, and you also retracted the claim that there is a contradiction here, then it might not be relevant anymore (if you really changed the way you use the term in order to avoid contradiction), which is why after saying that you use the term 'free will' inconsistently I continued with "and at best - if you reject the contradictory parts - you use them in a way that does not match the meaning in expressions such as 'She did it of her own free will', which means the same as 'She did it of her own accord'. The meaning of such expressions is what matters morally. "


ruby sparks said:
What people usually mean seems to me an unreliable measure.

It could be nothing more than them merely being colloquial or folk-psychological.


What people usually mean is what gives meaning to the words. Meaning is given by usage. What matters morally is whether we act of our own free will in the sense of the expression 'of one's own free will' used colloquially. It is in fact a folk-psychological term. If you define "free will" to mean something else, then it may well be that we do not have free will in that sense, but as long as we act of our own free will (in the actual sense of the expression in English), the moral error theory is blocked (at least, that error theory; you could argue for an error theory on other grounds).

ruby sparks said:
Or, what they mean may be effectively meaningless (akin to god = the natural universe).
You keep picking those particularly problematic terms. Regardless, if people usually meant the same by 'god' and 'natural universe', then the meaning of those expressions would be the same, rather than being meaningless.

ruby sparks said:
Finally, what in fact do people usually mean? People's definitions for free will may actually differ under detailed scrutiny and often without much scrutiny at all (they may differ substantially and quite obviously). Do they know what they mean, or have they just not considered it enough? Are they just being conveniently pragmatic? Are they using presumptions that are outdated and that recent science is now undermining, leaving them under-informed, as the law courts arguably are? (see below in blue)*
Humans usually grasp language intuitively. They do not think about the meaning of the words they use, nor do they need to in order to use them adequately. That is general human behavior. As to what people mean and whether they mean the same, it seems so. At least, generally, when people use a word, they communicate successfully, rather than talk past each other. Cases of talking past each other are the exceptions, and you would need specific evidence supporting such claims. As to what they mean when they say someone acted of their own free will, I suggest you take a look at your exchange with Bomb#20 and with me. I'm not going to give an exact definition intended to match all counterfactual weird cases (that is exceedingly difficult with nearly all terms), but I grasp the meaning by watching people talk about it, and so can you (and surely so have you already). As a good approximation, I'd say people are talking about making choices that are not externally coerced or internally compelled, also in the usual sense of those words, which include guns to the head and irresistible impulse a person is trying not to have (e.g., kleptomaniacs perhaps), but not causation, full or not.
 
What people usually mean is what gives meaning to the words. Meaning is given by usage.

I think I agree. So, for example, if saying, "lunatic behaviours are caused by entities called demons" was the usual and accepted usage of the word 'demon', then what was meant by that statement was that lunatic behaviours are caused by them.

What matters morally is whether we act of our own free will in the sense of the expression 'of one's own free will' used colloquially.
The meaning of such expressions is what matters morally.

I really don't get why anyone would say that the popular, common sense, colloquial and/or intuitive meaning of the expression of any idea is what matters, factually or morally.

As to what people mean and whether they mean the same, it seems so. At least, generally, when people use a word, they communicate successfully, rather than talk past each other. Cases of talking past each other are the exceptions, and you would need specific evidence supporting such claims. As to what they mean when they say someone acted of their own free will, I suggest you take a look at your exchange with Bomb#20 and with me. I'm not going to give an exact definition intended to match all counterfactual weird cases (that is exceedingly difficult with nearly all terms), but I grasp the meaning by watching people talk about it, and so can you (and surely so have you already). As a good approximation, I'd say people are talking about making choices that are not externally coerced or internally compelled, also in the usual sense of those words, which include guns to the head and irresistible impulse a person is trying not to have (e.g., kleptomaniacs perhaps), but not causation, full or not.

I think you're probably right that that is at least a good approximation of what most people mean. And quite possibly the legal system.
 
ruby sparks,

So, this will go in installments. Here's one:

ruby sparks said:
I am still struggling to understand your scenarios 1-4 despite having reread them, but I will attempt an answer nonetheless.

First, as a preface to answering, how would it be if I amended your scenario 1 title to 'species moral sense'? I'm not saying 'universal human' is wrong, but species seems to describe what you are referring to (albeit specifically the human one). This change might avoid (at least my) confusion over the word 'universal' and also allow other species to not be excluded from wider consideration (many species also consist of groups and individuals, and we are after all part of the animal kingdom).

The hierarchy would then start (at the lower end) with 'individual', move up to 'group' (which might be roughly synonymous with culture, at least for a social species), then 'species' and then 'all creatures on earth' (perhaps I should say lifeforms). There might even be a higher category of 'extra-terrestrial creatures/lifeforms' but this may not often need to come into play except in potentially useful hypotheticals about aliens.

I don't much mind if you stick with 'universal human', now that I think I understand what you mean better and if we mean the same thing.

Calling it 'species moral sense' might work, but I wanted to stress the difference between a human moral sense of the universal kind vs. of the culture-relative kind, which could also be a species trait in a sense. That is why I think "universal human moral sense" is better. But if you do not find it clear, how about something like "Species-wide human moral sense"? Does that sound alright?

That aside, let me clarify a couple of points:

1. Two different species might have very similar, or potentially the same moral sense (you could expect that if the universe happens to have infinitely many planets). The scenario says nothing about that.

2. Not all species have a species moral sense. Most do not. The moral sense is involved in a certain sort of assessment - like the ones associated in English with expressions such as 'morally wrong', 'morally obligatory', 'morally praiseworthy', 'morally permissible', 'morally impermissible'. They do not need to be verbalized, of course. A mute person without sign language can also be morally outraged, and that involves having made an assessment that something is morally wrong (normally, the verdicts of the moral sense are motivating for an individual, though the motivation is defeasible). So, chimps and bonobos and capuchin monkeys have something like that. So does (since this isn't due to convergent evolution, I reckon) any species with a more recent common ancestor with humans than humans and capuchins have. They don't need to have the same moral sense as humans, of course, or the same as each other. For example, you might expect differences between orangutans and bonobos, as the former are more solitary. But still, they have a species moral sense.
On the other hand, nearly every other species (or every other species) does not have it. This holds even for other social species, like, say, lions. If you take a look at lion behavior, they do not blame each other. They are not sensitive to considerations of fairness. And so on. Similarly, there is no zebra morality, horse morality, and so on.
As for smart aliens, that would depend on the aliens.


ruby sparks said:
All I might wonder about, what might be only a gut feeling, without having thought it through completely (so many questions, so little time) is whether the scenarios are in fact exclusive to one another, or whether any given moral judgement results from a mixture of them all, as a result of a set of 'nature + nurture', individual + cultural/social' processes and influences.
That's a good question. I had implicit further conditions in mind that I did not write, so as a result, without the implicit conditions, Scenario 1 might be seen to entail Scenario 2 or be at least compatible with it (depending on how one understands the sentence about how the groups are formed). I don't think that that is what you got, though. So, let me fix that and try to write it more clearly (Scenario 4 is clearly exclusive with respect to all of the others, since by definition it contains 'All other alternatives'). There might be some redundancy.



Scenario 1. Species-wide human moral sense.

P1.1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P1.2: For every two humans H1, H2 with properly functioning moral senses, the moral senses M(H1) and M(H2) will yield the same outputs given the same inputs.
P1.3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically (a matter they assess by their own moral sense), and to punish those who do.

Scenario 2. Culture-relative moral sense.

P2.1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.2: For every two humans H1, H2 with properly functioning moral senses and who are members of the same group, the moral senses M(H1) and M(H2) will yield the same outputs given the same inputs. The groups are instinctively formed depending on social interaction.
P2.3: For every two humans H1, H2 with properly functioning moral senses, the moral senses M(H1) and M(H2) will yield the same outputs given the same inputs if the inputs are in a proper subset of the moral domain (which proper subset it is is a matter for future research in human moral psychology).
P2.4: P1.2 is false.
P2.5: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically (a matter they assess by their own moral sense), and to punish those who do.


Scenario 3. Individual taste moral sense.

P3.1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P3.2: P1.2 is false.
P3.3: P2.2 is false.
P3.4: P2.3 is true.
P3.5: There are humans H1 and H2 with properly functioning moral senses such that the moral senses M(H1) and M(H2) yield different outputs given the same inputs, at least in some cases.
P3.6: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically (a matter they assess by their own moral sense), and to punish those who do.

Scenario 4. All other alternatives.
As before, the premises should be understood as allowing some room for vagueness (but not much; still, we should leave that for later).


Now they are certainly mutually exclusive, because P2.4 asserts that P1.2 is false (excluding Scenario 1), whereas P3.2 and P3.3 clearly exclude Scenarios 1 and 2 (that was also the case before, but hopefully now it is more clear).
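Since the exclusivity turns entirely on the truth values of P1.2 and P2.2, it can be sketched as a tiny truth-table check. This is a minimal sketch in Python; the propositional encoding and the names (p12, p22, scenario1, etc.) are my own labels for P1.2 and P2.2, and it assumes the shared background premises (P1.1/P2.1/P3.1) and the motivation premises hold in all three scenarios:

```python
from itertools import product

# p12: "any two properly functioning human moral senses yield the same
#       outputs given the same inputs" (P1.2)
# p22: "that agreement holds at least within instinctively formed groups" (P2.2)

def scenario1(p12, p22): return p12                  # species-wide: P1.2 holds
def scenario2(p12, p22): return p22 and not p12      # culture-relative: P2.2, plus P2.4 (P1.2 false)
def scenario3(p12, p22): return not p12 and not p22  # individual taste: P3.2 and P3.3

# No assignment of truth values satisfies two scenarios at once
for p12, p22 in product([True, False], repeat=2):
    verdicts = [scenario1(p12, p22), scenario2(p12, p22), scenario3(p12, p22)]
    assert sum(verdicts) <= 1, (p12, p22)

print("Scenarios 1-3 are pairwise exclusive")
```

Under these assumptions the three scenarios also jointly cover every combination of P1.2 and P2.2, with Scenario 4 absorbing the cases where the background premises themselves fail.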
 
ruby sparks said:
If what you are essentially saying is that humans have at least some innate moral preferences that are common enough to be called 'endemic to and within the species' (I prefer endemic to universal because the latter implies 'in all instances', which is not what manifests, for any individual moral judgement I can think of) then I think it would be possible to broadly agree, despite my initially disagreeing.
Hmmm...maybe "universal" was the better choice after all, to make clear when we disagree and when we do not. But sticking to species-wide human moral sense, let us consider a species-wide human color vision.

First, note that color vision is human. If advanced aliens (say, species#243845) had visited the Earth, say, 400 million years ago, they would not have figured out what color objects were (well, it's extremely improbable that they would have human color vision). They might have had some analogue #243845-color, but that is not the same as color. They might have studied the visual systems of many Earth species. But none of them had species-wide human color vision. So, the aliens would not have discovered colors. While the color of light, or of an object, is a property of that light or object, the measuring system which we use (and which informs the usage and then the meaning of our color terms) is specifically human.

Second, consider again the color scenario here. Now, the assessment that the light was red (or green) is a human universal in the sense that any human with properly functioning color vision who looks at the traffic light under the same conditions (which I will add here were just ordinary daylight conditions) under which Alice and Bob looked at it will make the same assessment (let's say it was red). There might be people whose color vision is not functioning properly. There might be people whose color vision functions properly but who look at the red light through a green-colored glass (different input). However, given the same input, the verdict is the same assuming proper function.
Now, there might be some very slight differences in properly functioning human color vision, which will correspond with some slight vagueness in color terms. However, in the vast majority of real-life cases at least, the above holds: same inputs, no malfunctioning, then same verdict.


Now, Scenario 1 says something along those lines about morality. But keep in mind the 'same inputs' condition. Consider again the McConnell/Kavanaugh scenario:

Alice: McConnell's support for the confirmation of Kavanaugh was immoral behavior.
Bob: McConnell's support for the confirmation of Kavanaugh was not immoral behavior.

Suppose that both Alice and Bob are being sincere. Then, at least one of them is mistaken (do you agree?). Scenario 1 says that the mistake results either because one of their moral senses is not functioning properly, or else both are functioning properly but they have different inputs. Note that the inputs can be very different even if the behavior they are talking about is in fact the same, because they might have different non-moral beliefs about it. This part is crucial, so I will quote it:

me said:
For example, maybe Alice believes that when McConnell voted, he knew that Kavanaugh had done such-and-such thing, but Bob believes that when McConnell voted, he knew that Kavanaugh had not done such-and-such thing. That would explain the disagreement, without a malfunctioning moral sense. Of course, it might be that there is a malfunctioning epistemic sense that leads one of them to having the beliefs they have. Or maybe they just got different info. There are plenty of possibilities, but here's the key point: suppose that Alice and Bob have exactly the same beliefs about what beliefs McConnell had. Suppose they had the same beliefs about what McConnell intended to do. Suppose they have the same beliefs about every other property their respective moral senses use to get the output about whether McConnell acted immorally. Then, the disagreement results from some malfunctioning moral sense.
That is what you get from Scenario 1.

Now, you say that you "prefer endemic to universal because the latter implies 'in all instances', which is not what manifests, for any individual moral judgement I can think of". I think once you consider the matter of different inputs, the situation is very different.

Another point to consider is this: in fact, moral disagreement is far less common than moral agreement. Otherwise, we would not be able to navigate the social world around us, and society would collapse. From an evolutionary standpoint, it also would not work to have a set of rules of behavior if individuals widely disagreed about them. Now, moral disagreement appears very common because it is salient, because of how we instinctively care about morality. Moral agreement is not salient. It's like plane crashes. An airliner crashes, and that's on the news. An airliner does not crash (and has no other mishap), and that definitely is not on the news. It's an ordinary event. But airliners crash far less often than they do not crash. And moral disagreement is far less frequent than agreement, which is ubiquitous.

But let's take a look at the cases of disagreement. In the vast majority of cases, they seem to stem from disagreements about the non-moral facts of the matter that are used as inputs for the moral senses of the people who disagree (see the example of the disagreement about what McConnell knew, believed, etc., above).
ruby sparks said:
As such I am now thinking that my saying stuff such as 'there are no moral facts out there in the world outside our heads' is redundant, and quite possibly missing the point, because I'm now thinking that you were never talking about that or suggesting it, or indeed anything 'truly or ultimately objective', that you are only talking about 'internal rules' (inside human brains).
Well, it does not follow directly from Scenario 1, but a closely related point is that there is indeed an objective fact of the matter when it comes to moral assessments. The problem here might be the word 'objective', which is another case in which many philosophers fudged the definition. :(

Sure, the rules are in the heads of humans. But for that matter, the instrument by which we measure color in standard cases (namely, human color vision) is also in the heads of humans. It remains the case that there is a fact of the matter as to whether the traffic light was red. If Bob believes it is green, then Bob is mistaken.

Granted, redness is a property of the traffic light. But for that matter, immorality (or permissibility, whichever it is) is a property of McConnell's behavior (and more precisely, of McConnell's mind, as he was either acting permissibly, or immorally). Why would there not be a fact of the matter as to whether McConnell acted immorally? Yes, the rules are in our heads. But as in the color case, that does not preclude there being a fact of the matter. And the standard human position is that there is. As I said in the other post,

me said:
Readers would ordinarily and intuitively reckon that either Alice is making a false statement (deliberately or not), or else Bob is. In fact, one could expect that Alice and Bob (apart from insulting each other) would try to give arguments in support of their positions, at least if they have the time and interest to get into a debate. It is implicit in that sort of conversation that there is a fact of the matter as to whether or not McConnell's behavior was immoral.
I am not sure whether you disagree here, so I would like to ask: what do you think?

ruby sparks said:
The above is the net result of me having tried to absorb and pull together a number of things you have been saying, and I am thinking that limited agreement is a more useful basis for proceeding than going over the details of disagreement or confusion. But if there was any specific statement of yours that you would like me to revisit, because you think it is important enough, just point me to it.
I think the above links should do. Let's see if we can clarify the matter a bit more.
 
ruby sparks said:
I think I agree. So, for example, if saying, "lunatic behaviours are caused by entities called demons" was the usual and accepted usage of the word 'demon', then what was meant by that statement was that lunatic behaviours are caused by them.
I'm not sure I understand that.

There is a big difference between using the word 'demon' to mean 'whatever causes lunatic behaviors', and using the word 'demon' to mean something like 'Biblical fallen angel', and claiming that lunatic behaviors are caused by them.
In the first case, demons do exist; in the second, they do not, and the claim that they cause lunatic behavior is false.

ruby sparks said:
I really don't get why anyone would say that the popular, common sense, colloquial and/or intuitive meaning of the expression of any idea is what matters, factually or morally.
Let me try with an example. One thing that is morally important is whether people act deliberately. For example, it is not the same if a driver runs over a pedestrian he had no connection with (i.e., no enemy, nothing) because the driver got distracted by a big ad beside the road, as it is if the driver deliberately ran the pedestrian over. Morally, the two are different cases, right?
But what matters is, of course, whether the driver did it deliberately in the usual, colloquial sense of the word 'deliberately' in English. If, say, I choose to use the word 'deliberately' to mean 'after 2.30 UTC', then in that sense of 'deliberately' it is not morally relevant whether the driver hit the pedestrian deliberately (all other things equal).

Now, suppose this common defense:

John: Yes, I stole the car. But it is not my fault. I did not do it of my own free will. They were going to murder my kid if I did not steal it.
Jill: That is not true. You stole it of your own free will.
People would generally recognize that the facts involving this defense are morally relevant. In other words, it does matter whether John stole the car of his own free will. The threat to his kid would very significantly reduce his freedom, even if not completely eliminate it. But when people recognize that this matters, they understand of course the words 'of my own free will', etc., in the usual sense because, well, that is their usual sense and the people who recognize the relevance of the question are English speakers.

Of course, if I were to redefine 'of one's own free will' to mean, say, 'before 13.45 UTC', then it would be morally irrelevant (all other things equal) whether John stole it of his own free will. What matters from a moral perspective is the colloquial, usual sense of the expression.
 
immorality (or permissibility, whichever it is) is a property of McConnell's behavior
Ok.

Would you say deliciousness (or unpleasantness, whichever it is) is a property of anchovies? In other words, is there a fact of the matter as to whether anchovies are delicious (or not)?

If you disagree, on what basis do you make a distinction between moral evaluations and gustatory evaluations?
 
Calling it 'species moral sense' might work, but I wanted to stress the difference between a human moral sense of the universal kind vs. of the culture-relative kind, which could also be a species trait in a sense. That is why I think "universal human moral sense" is better. But if you do not find it clear, how about something like "Species-wide human moral sense"? Does that sound alright?

Yes. All that matters is that we are talking about the same thing.

That aside, let me clarify a couple of points:

1. Two different species might have very similar, or potentially the same moral sense (you could expect that if the universe happens to have infinitely many planets). The scenario says nothing about that.

2. Not all species have a species moral sense. Most do not. The moral sense is involved in a certain sort of assessment - like the ones associated in English with expressions such as 'morally wrong', 'morally obligatory', 'morally praiseworthy', 'morally permissible', 'morally impermissible'. They do not need to be verbalized, of course. A mute person without sign language can also be morally outraged, and that involves having made an assessment that something is morally wrong (normally, the verdicts of the moral sense are motivating for an individual, though the motivation is defeasible). So, chimps and bonobos and capuchin monkeys have something like that. So does (since this isn't due to convergent evolution, I reckon) any species with a more recent common ancestor with humans than humans and capuchins have. They don't need to have the same moral sense as humans, of course, or the same as each other. For example, you might expect differences between orangutans and bonobos, as the former are more solitary. But still, they have a species moral sense.
On the other hand, nearly every other species (or every other species) does not have it. This holds even for other social species, like, say, lions. If you take a look at lion behavior, they do not blame each other. They are not sensitive to considerations of fairness. And so on. Similarly, there is no zebra morality, horse morality, and so on.
As for smart aliens, that would depend on the aliens.

I think I broadly agree. Animal morality may be a huge topic all of itself but it would be a diversion here, so let's stick with humans.


That's a good question. I had implicit further conditions in mind that I did not write, so as a result, without the implicit conditions, Scenario 1 might be seen to entail Scenario 2 or be at least compatible with it (depending on how one understands the sentence about how the groups are formed). I don't think that that is what you got, though. So, let me fix that and try to write it more clearly (Scenario 4 is clearly exclusive with respect to all of the others, since by definition it contains 'All other alternatives'). There might be some redundancy.



Scenario 1. Species-wide human moral sense.

P1.1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P1.2: For every two humans H1, H2 with properly functioning moral senses, the moral senses M(H1) and M(H2) will yield the same outputs given the same inputs.
P1.3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically (a matter they assess by their own moral sense), and to punish those who do.

Scenario 2. Culture-relative moral sense.

P2.1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.2: For every two humans H1, H2 with properly functioning moral senses and who are members of the same group, the moral senses M(H1) and M(H2) will yield the same outputs given the same inputs. The groups are instinctively formed depending on social interaction.
P2.3: For every two humans H1, H2 with properly functioning moral senses, the moral senses M(H1) and M(H2) will yield the same outputs given the same inputs if the inputs are in a proper subset of the moral domain (which proper subset it is is a matter for future research in human moral psychology).
P2.4: P1.2 is false.
P2.5: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically (a matter they assess by their own moral sense), and to punish those who do.


Scenario 3. Individual taste moral sense.

P3.1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P3.2: P1.2 is false.
P3.3: P2.2 is false.
P3.4: P2.3 is true.
P3.5: There are humans H1 and H2 with properly functioning moral senses such that the moral senses M(H1) and M(H2) yield different outputs given the same inputs, at least in some cases.
P3.6: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically (a matter they assess by their own moral sense), and to punish those who do.

Scenario 4. All other alternatives.
As before, the premises should be understood as allowing some room for vagueness (but not much; still, we should leave that for later).


Now they are certainly mutually exclusive, because P2.4 asserts that P1.2 is false (excluding Scenario 1), whereas P3.2 and P3.3 clearly exclude Scenarios 1 and 2 (that was also the case before, but hopefully now it is more clear).

Thanks. Sadly, I am still not wholly sure I understand (I think you are more fluent in logic than I am), but I will still try to give an answer. I prefer Scenario 3. That said, Scenarios 1 and 2 may be approximately correct, imo. I'm not sure that will make sense to you. I think I say it because, while I am now inclined to agree (as I said before) that there is such a thing as species-wide morality (for certain things within its domain), it is merely common and perhaps endemic, but not 'in all cases', even for those things within its domain. So, as I am seeing it, two similar but not identical human brains can both be 'properly functioning' and still give different outputs given the same inputs.
 
Hmmm...maybe "universal" was the better choice after all, to make clear when we disagree and when we do not. But sticking to species-wide human moral sense, let us consider a species-wide human color vision.

First, note that color vision is human. If advanced aliens (say, species#243845) had visited the Earth, say, 400 million years ago, they would not have figured out what color objects were (well, it's extremely improbable that they would have human color vision). They might have had some analogue #243845-color, but that is not the same as color. They might have studied the visual systems of many Earth species. But none of them had species-wide human color vision. So, the aliens would not have discovered colors. While the color of light, or of an object, is a property of that light or object, the measuring system which we use (and which informs the usage and then the meaning of our color terms) is specifically human.

Second, consider again the color scenario here. Now, the assessment that the light was red (or green) is a human universal in the sense that any human with properly functioning color vision who looks at the traffic light under the same conditions (which I will add here were just ordinary daylight conditions) under which Alice and Bob looked at it will make the same assessment (let's say it was red). There might be people whose color vision is not functioning properly. There might be people whose color vision functions properly but who look at the red light through a green-colored glass (different input). However, given the same input, the verdict is the same assuming proper function.
Now, there might be some very slight differences in properly functioning human color vision, which will correspond with some slight vagueness in color terms. However, in the vast majority of real-life cases at least, the above holds: same inputs, no malfunctioning, then same verdict.

I think that is broadly ok, imo, as regards colour. We can temporarily step around getting into what 'slight' might involve.


Now, Scenario 1 says something along those lines about morality. But keep in mind the 'same inputs' condition. Consider again the McConnell/Kavanaugh scenario:

Alice: McConnell's support for the confirmation of Kavanaugh was immoral behavior.
Bob: McConnell's support for the confirmation of Kavanaugh was not immoral behavior.

Suppose that both Alice and Bob are being sincere. Then, at least one of them is mistaken (do you agree?). Scenario 1 says that the mistake results either because one of their moral senses is not functioning properly, or else both are functioning properly but they have different inputs. Note that the inputs can be very different even if the behavior they are talking about is in fact the same, because they might have different non-moral beliefs about it. This part is crucial, so I will quote it:


That is what you get from Scenario 1.

Now, you say that you "prefer endemic to universal because the latter implies 'in all instances', which is not what manifests, for any individual moral judgement I can think of". I think once you consider the matter of different inputs, the situation is very different.

Another point to consider is this: in fact, moral disagreement is far less common than moral agreement. Otherwise, we would not be able to navigate the social world around us, and society would collapse. From an evolutionary standpoint, it also would not work to have a set of rules of behavior if individuals widely disagreed about them. Now, moral disagreement appears very common because it is salient, because of how we instinctively care about morality. Moral agreement is not salient. It's like plane crashes. An airliner crashes, and that's on the news. An airliner does not crash (and has no other mishap), and that definitely is not on the news. It's an ordinary event. But airliners crash far less often than they do not crash. And moral disagreement is far less frequent than agreement, which is ubiquitous.

But let's take a look at the cases of disagreement. In the vast majority of cases, they seem to stem from disagreements about the non-moral facts of the matter that are used as inputs for the moral senses of the people who disagree (see the example of the disagreement about what McConnell knew, believed, etc., above).
ruby sparks said:
As such I am now thinking that my saying stuff such as 'there are no moral facts out there in the world outside our heads' is redundant, and quite possibly missing the point, because I'm now thinking that you were never talking about that or suggesting it, or indeed anything 'truly or ultimately objective', that you are only talking about 'internal rules' (inside human brains).
Well, it does not follow directly from Scenario 1, but a closely related point is that there is indeed an objective fact of the matter when it comes to moral assessments. The problem here might be the word 'objective', which is another case in which many philosophers fudged the definition. :(

Sure, the rules are in the heads of humans. But for that matter, the instrument by which we measure color in standard cases (namely, human color vision) is also in the heads of humans. It remains the case that there is a fact of the matter as to whether the traffic light was red. If Bob believes it is green, then Bob is mistaken.

Granted, redness is a property of the traffic light. But for that matter, immorality (or permissibility, whichever it is) is a property of McConnell's behavior (and more precisely, of McConnell's mind, as he was either acting permissibly, or immorally). Why would there not be a fact of the matter as to whether McConnell acted immorally? Yes, the rules are in our heads. But as in the color case, that does not preclude there being a fact of the matter. And the standard human position is that there is. As I said in the other post,

me said:
Readers would ordinarily and intuitively reckon that either Alice is making a false statement (deliberately or not), or else Bob is. In fact, one could expect that Alice and Bob (apart from insulting each other) would try to give arguments in support of their positions, at least if they have the time and interest to get into a debate. It is implicit in that sort of conversation that there is a fact of the matter as to whether or not McConnell's behavior was immoral.
I am not sure whether you disagree here, so I would like to ask: what do you think?

ruby sparks said:
The above is the net result of me having tried to absorb and pull together a number of things you have been saying, and I am thinking that limited agreement is a more useful basis for proceeding than going over the details of disagreement or confusion. But if there was any specific statement of yours that you would like me to revisit, because you think it is important enough, just point me to it.
I think the above links should do. Let's see if we can clarify the matter a bit more.

I think you might have to say exactly what M did (when he supported K) that is supposed to have been either immoral or not; in other words, can you be as specific with that scenario as with the colour one? I think I need to see the equivalent of a red or a green. :)

If it involves an unintended falsity, that might be one thing (and possibly not immoral).

If it involved M telling a lie, then I don't initially feel it is necessarily clear (yet) whether there is a morally right or a morally wrong fact of the matter.

Suppose M lied, but for 'overall beneficial' reasons (eg M sincerely believed that K becoming a supreme court judge would be overall a good thing on balance, despite the thing M was lying about, and let's say M's assessment/beliefs were accurate).

Perhaps you did not have 'lying' in mind. Or if you did, we might now want to explore what I just said by going into more detail, possibly including what Alice or Bob knew of what was in M's mind.
 
ruby sparks said:
I think I agree. So, for example, if saying, "lunatic behaviours are caused by entities called demons" was the usual and accepted usage of the word 'demon', then what was meant by that statement was that lunatic behaviours are caused by them.
I'm not sure I understand that.

There is a big difference between using the word 'demon' to mean 'whatever causes lunatic behaviors', and using the word 'demon' to mean something like 'Biblical fallen angel', and claiming that lunatic behaviors are caused by them.
In the first case, demons do exist; in the second, they do not, and the claim that they cause lunatic behavior is false.

I said 'entities' because I meant to convey the latter, which we agree is false (as far as we know) and is at least now deemed false in modern courtrooms in most countries, as far as I know (there may be places it is still an allowable defence).

My only point was that the statement seems to conform to your saying "What people usually mean is what gives meaning to the words. Meaning is given by usage", and to illustrate that this may say nothing about truth or falsity. Now, you may not have intended that particular statement to say anything about truth or falsity, but I wanted to clarify, because I am trying to make the point that something is not necessarily the case just because we have a definition for it, especially one based only on intuition, since an intuition can be incorrect.

ruby sparks said:
I really don't get why anyone would say that the popular, common sense, colloquial and/or intuitive meaning of the expression of any idea is what matters, factually or morally.
Let me try with an example. One thing that is morally important is whether people act deliberately. For example, it is not the same if a driver runs over a pedestrian he had no connection with (i.e., no enemy, nothing) because the driver got distracted by a big ad beside the road, as it is if the driver deliberately ran over the pedestrian. Morally, the two are different cases, right?
But what matters is, of course, whether the driver did it deliberately in the usual, colloquial sense of the word 'deliberately' in English. If, say, I choose to use the word 'deliberately' to mean 'after 2.30 UTC', then in that sense of 'deliberately' it is not morally relevant whether the driver hit the pedestrian deliberately (all other things equal).

Now, suppose this common defense:

John: Yes, I stole the car. But it is not my fault. I did not do it of my own free will. They were going to murder my kid if I did not steal it.
Jill: That is not true. You stole it of your own free will.
People would generally recognize that the facts involving this defense are morally relevant. In other words, it does matter whether John stole the car of his own free will. The threat to his kid would very significantly reduce his freedom, even if not completely eliminate it. But when people recognize that this matters, they understand of course the words 'of my own free will', etc., in the usual sense because, well, that is their usual sense and the people who recognize the relevance of the question are English speakers.

Of course, if I were to redefine 'of one's free will' to mean, say, 'before 13.45 UTC', then it would be morally irrelevant (all other things equal) whether John stole it of his own free will. What matters from a moral perspective is the colloquial, usual sense of the expression.

Thank you, but I didn't ask you to illustrate the intuitive/colloquial; I asked you something more, something trickier: why it should be what ultimately matters, factually or morally. I read all the above and merely thought, 'yes, that comports with the intuitive/colloquial understanding of free will, which could be incorrect'. As such, you are only appealing to our mutual intuitions, and if I agreed with you, it would only show that we have the same intuitions.

You see, it could be (and I think it is the case) that despite our intuitions, being distracted by the prominent ad at the side of the road is in reality on a par with any and all of the other influences on the driver. Let's hypothetically say that one influence (for this driver) was a gene which made him prone to violence, and that some set of circumstances beyond his control had led to that gene 'being strongly activated'. That is of course only one small factor, and not sufficient by itself. There may have been 10,000 others to go along with it, or more: basically all the things which went to make up his brain state and added up to everything being fully determined, for him, at every instant.

It still seems to me that there is something about 'fully determined' that you are not, er, fully taking into account. Fully determined in fact means fully determined, at every instant. No wiggle room, Angra.

So a prior intention that formed, for whatever multiple reasons (of which a hypothetical gene activation is only one possible example), in the mind of the driver who we say intended to knock down the pedestrian was itself fully determined, and not something the driver could have freely willed to happen. The same goes for whether the driver acted on the intention or, in the end, did not act on it (eg he swerved away at the very last possible moment), because his not acting would also have been fully determined. He literally, it seems, could not freely choose to do other than what he did, may not even have been able to do other than he did, and nor would you, had you been fully in his shoes.

People generally, including you and I, do not intuitively feel that every single thing they do, indeed every single thought they have at any instant, is in fact fully determined. But under full determinism, you and I and they would seem to be wrong. So much for human intuitions. That is the point.

That we might actually be fully determined biological machines may in fact be literally counter-intuitive, and therefore so may our not having free will. Which is why I asked you why intuitions (which I hope we can agree can be incorrect) should be the things that ultimately matter, factually or morally.
 