Angra Mainyu
Veteran Member
One of the objections often raised in these forums is that one cannot derive a moral 'ought' from an 'is'. Technically that is false, because once the meaning of the words is considered, from 'It would be immoral for agent A to do X' it follows that 'Agent A ought not to do X'. But the objection remains about deriving moral conclusions from non-moral premises, i.e., deriving conclusions containing moral terms from those that do not, where 'moral terms' is defined ostensively: 'morally permissible', 'unethical', 'immoral', 'morally wrong', 'morally praiseworthy', etc., are moral terms, whereas 'cat', 'mouse', 'planet', 'car', 'table', etc., are not.
Is this true?
Well, sort of. We can derive anything (including moral conclusions) from a contradiction, even if the contradiction is stated without using any moral term. But leaving that case aside (and perhaps other anomalous cases, like deriving a tautology involving moral terms, etc.), it seems to me that in the sense of a deduction, one should not be able to do that. But what about probabilistic assessments? Now it might be said that we still need conditional probabilities described using moral terms, or something like that. But if that is so, then it would still not be vulnerable to the 'is-ought' objection as it is usually put.
So, to make my case, consider first not moral assessments, but color assessments. For example, how do I assess that a ball is red? One way of doing so would be to look at the ball: it looks red to me, under ordinary light conditions, and I know from experience that my color vision is pretty ordinary for a human. Then I am justified in assessing that the ball is red, barring counterevidence. I would say that I am assigning a very, very high probability to the hypothesis that the ball in question is red. We do this intuitively, and without using numbers.
Now suppose I do not see the ball. However, I observe that many humans who look at the ball tell me it's red. Assuming I can tell that they are being sincere (how I do that is not the issue), I also have justification to reckon that the ball is red, again with very, very high probability. Now suppose no humans look at the ball, but there is a robot with cameras for eyes, whose color vision has been calibrated against the color vision of ordinary humans. The robot has been tested in thousands of experiments, and under ordinary conditions it makes the color assessments humans ordinarily make. If I get conclusive information that the robot says the ball is red (again, under ordinary light conditions), then I can use that to reckon that the ball is red.
More to the point: if I know (I have sufficient information) that a human with ordinary color vision would reckon that the ball is red, that is very good evidence that the ball is red. And if I then reckon that the ball is red, is there a fallacy involved?
Maybe, intuitively, I am making the probabilistic assessment that P(the ball is red | ordinary human color vision detects it as red under normal light conditions) is extremely high. Or something like: P(Q | ordinary human faculties say it's Q under normal circumstances) is also high, plus the assessment that color vision is an ordinary human faculty.
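To make that more concrete, here is a minimal sketch of the kind of intuitive update I have in mind, written as an explicit Bayes' theorem calculation. The prior and the reliability figures are numbers I made up for illustration; the point is only the shape of the inference, not the particular values.

```python
# A minimal sketch of the intuitive update, with illustrative (assumed) numbers.

def posterior_red(prior_red, p_report_given_red, p_report_given_not_red):
    """P(ball is red | an ordinary observer reports 'red'), by Bayes' theorem."""
    p_report = (p_report_given_red * prior_red
                + p_report_given_not_red * (1.0 - prior_red))
    return p_report_given_red * prior_red / p_report

# Assumed values: a modest prior that this particular ball is red, and a highly
# reliable ordinary color vision under normal light conditions.
prior_red = 0.2                # P(ball is red) before any report
p_report_given_red = 0.99      # P(observer says 'red' | ball is red)
p_report_given_not_red = 0.01  # P(observer says 'red' | ball is not red)

print(posterior_red(prior_red, p_report_given_red, p_report_given_not_red))
# ~0.96: a single reliable report already makes 'the ball is red' very probable.
```

Of course, nobody computes this explicitly when looking at a ball; the claim is only that the intuitive assessment behaves like this kind of update.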
At any rate, maybe there is a logical error somewhere, but if there is, it is pervasive. It's pretty much everywhere except perhaps for immediate assessments like the example in which we directly look at the ball. But maybe the problem - if there is one - happens when I try to use language, and also when I factor in the information that my color vision is ordinary. Then again, maybe I'm making intuitive probabilistic assessments with the conditional probabilities already intuitively fixed, and there is no fallacy.
At any rate, if at some point in my probabilistic assessments, I made a logical error and as a result the assessment in question is not justified, then as they say here 'Estamos en el horno', literally 'We are in the oven', or in other words, we're screwed, because if even that sort of normal assessment fails and is not justified, very few things (if any) are.
Perhaps there is a logical error, but it is justified to make it? Given that - again, I know it because it's intuitively obvious, and I have no good reason to doubt my intuitions on the matter - I am justified in assessing that the ball is red, i.e., very probably red (in all of the scenarios above), it seems to me that if there is a logical error, then there are logical errors we are regularly justified in making, and this is one of them...
Let us move to the moral case: Suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically. How do I know? As in the color case, I use my own faculties, in this case my moral sense, instead of my color vision. So far, it seems similar.
But can I also use the faculties of others? I do not see why not. The information that the ordinary human moral sense reckons a behavior immoral seems to provide good evidence that it is immoral. Any potential fallacy here was also there in the color case.
My point is that from the perspective of logic, the moral case and the color case appear similar. But it's not only the moral and the color case. It's everywhere, as we rely on human faculties (we have no others) all the time. And in science too, of course. Suppose I read statements by many scientists (e.g., in textbooks) saying that water is composed of H2O. I reckon this is the case, on the basis of that evidence. But then I am of course relying on my own faculties (I can't not do that), and also using information about the reliability of science, etc.
Someone might say that science is more reliable than human faculties (to which I would reply that that depends on the faculty, but that's a matter for another debate). At any rate, a key point is that making an assessment that something is blue or morally wrong, using as evidence that ordinary human faculties say it is, is relevantly similar to making an assessment that water is composed of H2O using as evidence that scientists say it is. And the relevant part is that any one of them involves the fallacy that moral assessments and/or statements are usually charged with if and only if all of them do.
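To make the structural point vivid, the same sketch from before can be written once and applied to all three cases; only the labels change (again, the priors and reliability figures below are assumptions made up for illustration, not claims about the actual numbers):

```python
# The same inference pattern, parameterized: only the labels differ across the
# color case, the moral case, and the scientific-testimony case.
# All numbers are illustrative assumptions, not measurements.

def posterior(prior, p_report_if_true, p_report_if_false):
    """P(hypothesis | source reports it), by Bayes' theorem."""
    p_report = p_report_if_true * prior + p_report_if_false * (1.0 - prior)
    return p_report_if_true * prior / p_report

cases = {
    "the ball is red (ordinary color vision says so)":          (0.2, 0.99, 0.01),
    "Joe's behavior is immoral (ordinary moral sense says so)":  (0.5, 0.95, 0.05),
    "water is composed of H2O (many scientists say so)":         (0.5, 0.999, 0.001),
}

for hypothesis, (prior, p_true, p_false) in cases.items():
    print(f"{hypothesis}: {posterior(prior, p_true, p_false):.3f}")
```

Whatever one thinks the right reliability figures are, the logical form of the update is the same in all three cases; so if there is a fallacy in one of them, it is in all of them.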
In particular, just as it does not logically follow from the fact that the ordinary human moral sense says a behavior is morally wrong that it actually is, it also does not logically follow from the fact that scientists say that water is composed of H2O that it actually is so (and the same for color).