• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

The is/ought issue.

ruby sparks said:
We agree that getting a moral conclusion from nonmoral premises is a formal fallacy and not valid.
That is unclear, because it depends on what you classify as 'nonmoral premises'. The following, for example, is valid:

P1: Ordinary human faculties reckon that it is immoral for a human being to rape another just for fun.
P2: If ordinary human faculties reckon that A, then very probably A.
C: Very probably, it is immoral for a human being to rape another just for fun.

Do you reckon that that also contains moral premises?
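To make the form explicit: reading "very probably A" as an operator applied to A, the argument above is just an instance of modus ponens. A minimal sketch in Lean (the operator and proposition names are mine):

```lean
-- Minimal sketch: reading "very probably A" as an operator VP applied to A,
-- the argument is an instance of modus ponens and hence valid.
variable (Reckon VP : Prop → Prop)

-- A stands for "it is immoral for a human being to rape another just for fun"
example (A : Prop)
    (P1 : Reckon A)
    (P2 : ∀ B : Prop, Reckon B → VP B) :
    VP A :=
  P2 A P1
```

Nothing in the derivation depends on A being a moral proposition; the same one-step proof goes through for any A.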

ruby sparks said:
Therefore we agree there is an inherent is/ought problem, surely an important one to someone like you who values logic highly, at least sometimes.
That does not follow. And here are two ways in which it does not follow:

First, even if I agree that getting a moral conclusion from nonmoral premises is a formal fallacy and not valid (which depends on what you call "nonmoral premises"; see the example above), it does not follow that I agree that this is what people usually do. For all I know, we may be making exactly the sort of argument above, or rather its nonconscious version. In fact, I do think we are making an intuitive assessment, as we do in nearly all cases.

Second, that I value logic highly does not mean that I think there is a problem if it happens to be the case that our daily probabilistic assessments commit a fallacy - and if that happens with moral assessments, it happens with all of those too.

In fact, under the hypothesis that it is a fallacy, my assessment is precisely that this fallacy is not a problem at all: our usual scientific assessments are fine and incur it all the time, and even beyond science, in our daily lives, our usual assessments incur this fallacy if it happens at all (and I am not convinced that it does).


In short, I disagree that there is an is/ought problem. My position is that either we are not deriving moral conclusions from nonmoral premises (or, if the premises in the argument above count as nonmoral, maybe we are, but then the argument is not invalid), or we are, but in that case the fallacy is not a problem.


ruby sparks said:
Whether something else, something different, is or isn't valid, is a fallacy, or is problematical in some other way, is another matter.
No, it's not, because I am saying that the something else is not problematic and it is a fallacy if and only if the moral case is. And the something else is not problematic, but rather, what would be irrational would be to try not to make ordinary assessments like that, about the world around us, about what happened or will happen in either scientific or daily conditions, and the like.



ruby sparks said:
You are de facto not comparing like with like (first, you are comparing nonmoral issues to moral ones, and second, your analogous phenomena arguably and apparently have mind-independent properties, whereas morality arguably and apparently doesn't) so the analogies and comparisons you are using ultimately fail by your own preferred standards.
Whether they have mind-independent properties is irrelevant. The relevant question here is whether there is a problem due to a fallacy. My argument is that either there is no fallacy, or if there is, it is not a problem, and in fact, it would be irrational not to make it all the time.


ruby sparks said:
Also, it is interesting that you can't yet see that the prejudice argument above is valid. I would say that points up one of the limitations of a non-logical, often intuitive system (eg the human brain) trying to do logic.
Nah, that was a brain failure this morning; when I saw it again I saw it was valid, and it now looks obviously valid. Maybe I wasn't paying enough attention. Maybe it's because I'm ill and also not sleeping properly (due to being ill), or maybe it was just a brain fart. But the fact that I can see clearly that it is valid also indicates that whatever caused a temporary failure in my logical system at the moment, it's not a general limitation (I usually deal with far more complex arguments, and yes, sometimes I make mistakes. But I correct them later, when checking.). For that matter, sometimes people are walking and just trip or stumble despite there being no obstacle. It happens, and speaks of the fallibility of the human walking ability, not of a general limitation of the capability of the system (which allows humans even to run for long distances).
 
Whether they have mind-independent properties is irrelevant. The relevant question here is whether there is a problem due to a fallacy. My argument is that either there is no fallacy, or if there is, it is not a problem, and in fact, it would be irrational not to make it all the time.

As you know, I do not agree it is irrelevant.

Also, I think it's odd that someone who claims to think logic is very important in all of this is willing to say it's not a problem if the is/ought argument is a formal fallacy. Although maybe it's not that odd, because it's exactly what someone with a prior belief about something would do: they would, at times when it was necessary or convenient, temporarily ditch a standard of inquiry they had previously claimed was crucial to that inquiry if it didn't fit with the prior belief. There's a word for that way of going about things.

Furthermore, the 'arguments from analogy' that try to justify exactly why it's supposedly not a problem seem to me potentially flawed in and of themselves, possibly for any and all such arguments. In this particular case, in your selected analogies you are not comparing like with like (you seem to have convinced yourself that in some way you are, but you are obviously not) therefore you can't necessarily draw conclusions about one from the other.

Because even if the thought or reasoning processes involved in both cases are at least superficially similar, there are epistemic considerations in one case that do not pertain to the other, and the brain will automatically be factoring these into its thinking processes. There are good reasons to think that the complex reasoning involved in one (eg colour) is partly based on there apparently being (as far as the brain is concerned) mind-independent facts about it, but not in the other case (eg morality). So the one does not lend the other the credibility that you think it does.

Let me put it another way. 'Humans tend to believe there are objectively right and wrong answers about colours, therefore it is reasonable to say that there are objectively right or wrong answers about colours' does not necessarily translate to 'Humans tend to believe there are objectively right and wrong answers about morality, therefore it is reasonable to say that there are objectively right or wrong answers about morality', and I think you are pinning a lot on that particular point of comparison. Too much, imo. Yes, one day you had an interesting and useful lightbulb moment about this, but I think you're overreaching with the conclusions.
 
Yeah, okay, it's valid if they're all people. My bad, brain failure this morning. :rolleyes: But that still makes no relevant difference in this context. You're only showing that there are valid arguments from nonmoral premises to nonmoral conclusions. But there are also valid arguments from noncolor to noncolor, from nonillness to nonillness, etc., and yet the parallels still do not give us proper classifications of arguments.

I have no idea what your point is there. My point is that there are no valid nonmoral-to-moral arguments.

But just on the ('my') prejudice argument, I am now thinking it is invalid.
 
ruby sparks said:
Also, I think it's odd that someone who claims to think logic is very important in all of this is willing to say it's not a problem if the is/ought argument is a formal fallacy.
Let us assume that the way in which we make moral assessments using our moral sense is a fallacy, and that making them on the basis of information about the moral senses of others is also a fallacy. Then what we get is that the way we gain information about the outside world involves a fallacy all of the time (or at least in all but what Torin classifies as 'sensations'; while I argue the classification makes no relevant difference in this context, I can even grant that it does and the problem remains just as much).

Then, under that assumption, my options - logically - are as follows: Either

a. This particular kind of fallacy is not a problem, so even if logic is important in other contexts, it is not in this one.

b. We simply have no information about the outside world, except perhaps (with another assumption) our sensations (which of course also require brain processing, but never mind), and not even what we can assess from them. The result is epistemological solipsism, which surely is false.


Using again my faculties - the only ones I have - I reckon that, under the assumptions that the way in which we make moral assessments using our moral sense is a fallacy, and that making them on the basis of information about the moral senses of others is also a fallacy, the proper assessment is a. So, it turns out that logic is not very important in all of these cases. The reason I do not make that assessment without qualification is that I am not at all convinced that we are committing a fallacy all the time. But if we are, then clearly it is a., not b.

ruby sparks said:
Although it's not that odd, because it's exactly what someone with a prior belief about something would do, they would, at times when it was necessary or convenient, temporarily ditch a standard of inquiry they previously claimed was crucial to that inquiry if it didn't in fact fit with the prior belief. There's a word for that way of going about things.
You are very mistaken about me and the way I am doing things, and you should realize that upon reading my posts. But you persist in your attacks on me, without correcting your errors. But I will keep posting, because maybe I will persuade some of the other readers/posters.

ruby sparks said:
Furthermore, the sophistry and semantics that try to justify exactly why it's supposedly not a problem seem to me odd in themselves. In your selected analogies you are not comparing like with like (you seem to have convinced yourself that in some way you are, but you are obviously not), therefore you can't necessarily draw conclusions about one from the other.
The fact that you do not realize that the analogies are indeed correct - because the matters are analogous in the sense that is relevant in this context - has several potential explanations, ranging from simple anger due to hostility and contempt towards me to other problems that would not be so easy to fix. At any rate, it's not something I can fix. This one is not on my end.


ruby sparks said:
Let me put it another way. 'Humans feel there are objectively right and wrong answers about colours, therefore it is reasonable to say that there are objectively right or wrong answers about colours' does not necessarily translate to 'Humans feel there are objectively right and wrong answers about morality, therefore it is reasonable to say that there are objectively right or wrong answers about morality' and I think you are pinning a lot on that particular point of comparison. Too much, imo.
Okay, let me address this point. First, I did not say 'feel'. Humans ordinarily believe, think, reckon, assess, etc., that there are right and wrong answers about color, and about morality. But leaving that aside, what I am saying is that there is no difference in regard to whether there is a fallacy in the following assessments:


P0: Humans with ordinary faculties, under standard light conditions, reckon that this ball is red.
C: Very probably, this ball is red.

P1: Humans with ordinary faculties reckon, upon observation of many cancer patients, that cancer is an illness.
C: Very probably, cancer is an illness.

P2: Humans with ordinary faculties reckon, after considering the matter, that it is immoral for a human being to rape another just for fun.
C: Very probably, it is immoral for a human being to rape another just for fun.
One might of course reckon that not all of these assessments are equally rational and/or equally reasonable, but that is due to other pieces of information, not listed in the premises, that change the proper probabilistic assessment. On the other hand, there is no difference among these assessments with regard to whether the conclusion follows from the premises. It does not. Of course, one can bridge the gap with a probabilistic premise, but that is equally doable in all three of them.
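To make that last point concrete, one and the same bridging premise closes the gap uniformly in all three cases; a minimal sketch in Lean (the proposition and operator names are mine):

```lean
-- The same bridging premise closes the gap in all three assessments.
variable (Reckon VP : Prop → Prop)
variable (BallRed CancerIllness RapeImmoral : Prop)

-- Bridge: if ordinary human faculties reckon that B, then very probably B.
variable (Bridge : ∀ B : Prop, Reckon B → VP B)

example (p0 : Reckon BallRed)       : VP BallRed       := Bridge _ p0
example (p1 : Reckon CancerIllness) : VP CancerIllness := Bridge _ p1
example (p2 : Reckon RapeImmoral)   : VP RapeImmoral   := Bridge _ p2
```

The color, illness, and moral cases are proved by the same one-step application, which is the sense in which they do not differ as to whether the conclusion follows.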


Remember, the purpose of this thread is not to show that there is objective morality, but rather, to show that the is/ought objection against morality fails. This is compatible with there being other objections that succeed. But not this one.
 
I think it has already been agreed that it is often arguably reasonable to go from an is to an ought.

For example:

Humans need forests. Therefore, humans ought to preserve forests.

The expanded version might be:

If (or given that) humans want to survive, they need forests. Therefore, they ought to preserve them.

It could even (validly I think) be put as follows:

P1. Humans want to survive.
P2. Humans need forests in order to survive.
C1. Humans ought to preserve forests.

I guess one question is, is this a normative moral issue, and I would say, if humans think it is, then it effectively is (and I would think it).
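Made fully explicit, the instrumental premise that licenses the step can be sketched as follows (the predicate names are mine, and this is only a sketch of the form, not a claim about its soundness):

```lean
-- Predicate names are mine; this only displays the form of the argument.
variable (WantSurvive NeedForests OughtPreserve : Prop)

-- With the instrumental principle stated as an explicit premise,
-- the conclusion follows; without it, P1 and P2 alone do not entail C1.
example
    (P1 : WantSurvive)
    (P2 : NeedForests)
    (bridge : WantSurvive → NeedForests → OughtPreserve) :
    OughtPreserve :=
  bridge P1 P2
```

So the argument is valid once the bridging means-end premise is included, which is presumably what the expanded 'if (or given that) humans want to survive' version was getting at.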
 
I think it has already been agreed that it is often arguably reasonable to go from an is to an ought.

For example:

Humans need forests. Therefore, humans ought to preserve forests.

The expanded version might be:

If (or given that) humans want to survive, they need forests. Therefore, they ought to preserve them.

It could even (validly I think) be put as follows:

P1. Humans want to survive.
P2. Humans need forests in order to survive.
C1. Humans ought to preserve forests.

I guess one question is, is this a normative moral issue, and I would say, if humans think it is, then it effectively is (and I would think it).

That's debatable, but my argument above shows that the is/ought objection against morality fails regardless of the answer to this problem. And I would say no, that one is not a moral 'ought', but that is a side issue in this context.
 
And I would say no, that one is not a moral 'ought', but that is a side issue in this context.

How is an example of getting from an is to an ought a side issue in a thread entitled 'The is/ought issue'? :)

It is a side issue because the central argument provided in the OP (and further explained in later posts) is not affected by the answer to the question of whether this 'ought' is a moral one. :)
 
There's a central argument in the OP?

By the way, why, precisely, do you think the prejudice argument I posted earlier is valid?
 
P1. Humans want to survive.
P2. Humans need forests in order to survive.
C1. Humans ought to preserve forests.

I guess one question is, is this a normative moral issue, and I would say, if humans think it is, then it effectively is (and I would think it).

That's debatable.....

I think it is, and the chap who wrote the philosophical paper I borrowed it from thinks it is. The Stanford Encyclopedia even has a page on Environmental Ethics. But you're not sure it's a moral issue. Where does that leave your claim that moral matters are not a matter of opinion, since this would include whether something was a moral matter or not?

I guess you're going to say that you do feel there's a right or wrong answer to that question, and that if there are some who say it is and some who say it isn't, one opinion is mistaken. Ok, so if so, how would you get to that?
 
ruby sparks said:
There's a central argument in the OP?
Yes, obviously.

ruby sparks said:
By the way, why, precisely, do you think the prejudice argument I posted earlier is valid?
That one is also obvious; no need to insist. Yes, I made a blunder. I'm not infallible. As I mentioned, maybe I wasn't paying attention that morning. Maybe it's because I was ill (still not fully okay, but much better) and also not sleeping properly (due to being ill), or maybe it was just a brain fart. When I saw it again, I saw immediately that it was valid. Why do I think so? Well, I look at it and it looks valid :D, but if you want me to explain it, sure: once you add the implicit premise that they are all humans, P2 entails that Jim is a prejudiced human. It follows then from P1 that every human is prejudiced against Jim. But then every human is prejudiced, so by P1 again, every human is prejudiced against every human. Now the conclusion follows from that and the hypothesis that Angela and Mary are both human.
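That chain of steps can be written out formally. Here is a hedged reconstruction in Lean (the original argument is not quoted in this excerpt, so the predicate names and the auxiliary premise that being prejudiced against someone makes one prejudiced are my guesses at its intended form):

```lean
-- Reconstruction from the explanation above; predicate names and the
-- auxiliary premise Hp are guesses at the argument's intended form.
variable (Person : Type)
variable (Human Prejudiced : Person → Prop)
variable (PrejAgainst : Person → Person → Prop)
variable (jim angela mary : Person)

example
    (P1 : ∀ x y, Human x → Human y → Prejudiced y → PrejAgainst x y)
    (P2 : Prejudiced jim)
    (Hp : ∀ x y, PrejAgainst x y → Prejudiced x)
    (hj : Human jim) (ha : Human angela) (hm : Human mary) :
    PrejAgainst angela mary := by
  -- by P1 and P2, every human is prejudiced against Jim; in particular Mary is
  have hMaryJim : PrejAgainst mary jim := P1 mary jim hm hj P2
  -- so Mary is herself prejudiced
  have hMary : Prejudiced mary := Hp mary jim hMaryJim
  -- and by P1 again, Angela is prejudiced against Mary
  exact P1 angela mary ha hm hMary
```

The proof mirrors the prose derivation step by step: Mary is prejudiced against Jim, hence prejudiced, hence (by P1 again) Angela is prejudiced against her.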
 
ruby sparks said:
I think it is, and the chap who wrote the philosophical paper I borrowed it from thinks it is. The Stanford Encyclopedia even has a page on Environmental Ethics. But you're not sure it's a moral issue. Where does that leave your claim that moral matters are not a matter of opinion, since this would include whether something was a moral matter or not?
I did not say that environmental issues were not moral matters. I was talking about the sort of 'ought' in your argument. And it was apparently not a moral one, because it was conditional on 'humans want to survive'. Moral 'oughts' are usually understood as not conditional. I think arguably they are like ordinary means-to-end 'oughts' but with the implicit condition 'in order not to behave immorally'. However, the person making the claim might be making a moral claim even if it is written in a way that clearly indicates a non-moral matter. It happens, and claims like "Humans ought to preserve forests" usually are moral claims, so one would have to weigh what is usually the case against the fact that it is put in an argument that indicates it is not. I would go with the argument, so as not to assume an error in the other person's logic, but what do I know? In the end, more information about the context in which the argument is made is required to settle the matter.

Now, that was a side-issue, to clarify my previous words given that they have been misinterpreted. The heart of my reply follows now:

Your question "Where does that leave your claim that moral matters are not a matter of opinion, since this would include whether something was a moral matter or not?" indicates a mistaken assessment of what it is for something not to be a matter of opinion. If I do not know whether something is the case, that in no way implies that it's a matter of opinion, or in other words, that there is no fact of the matter.
 
And since you raised that argument: (but also, side issue):
ruby sparks said:
P1. Humans want to survive.
P2. Humans need forests in order to survive.
C1. Humans ought to preserve forests.
P1 is false. Most humans want to survive, but not all.
P2 is false. Most humans do not need forests in order to survive. Historically, there have been humans living in environments without forests, and that holds true today. If all forests disappeared and were gradually replaced with some other environment (as is likely to happen), humanity would survive, and so would the vast majority of humans. Moreover, the extinction of forests does not need to be a bad thing for humans, on average. It depends on how it happens (and whether it's a bad thing in general, I don't know. How much suffering exists due to forests and other wild areas?). And in the future, chances are humans will even live in places like Mars. No forests required.
C1. is false as a moral claim. There might be some humans in a position where they have the obligation of helping preserve a forest (e.g., park rangers, police in some cases), but it is not the case that humans, in general, have a moral obligation to go around preserving forests.
C1. is also false as a nonmoral claim. Sure, some humans ought to do that as a means to their ends, but surely most humans do not, as they do not have a goal that requires them to try to preserve forests.
 
I did not say that environmental issues were not moral matters. I was talking about the sort of 'ought' in your argument. And it was apparently not a moral one, because it was conditional on 'humans want to survive'. Moral 'oughts' are usually understood as not conditional. I think arguably they are like ordinary means-to-end 'oughts' but with the implicit condition 'in order not to behave immorally'. However, the person making the claim might be making a moral claim even if it is written in a way that clearly indicates a non-moral matter. It happens, and claims like "Humans ought to preserve forests" usually are moral claims, so one would have to weigh what is usually the case against the fact that it is put in an argument that indicates it is not. I would go with the argument, so as not to assume an error in the other person's logic, but what do I know? In the end, more information about the context in which the argument is made is required to settle the matter.

I don't understand very much of that at all, so I'll just pick out one thing. Can you explain what you mean by, and substantiate, saying that moral oughts are usually understood as non-conditional? I know there are some who would say or have said that (eg Kant) but I didn't realise it was now the norm. At the very least, it would seem clear they are conditional on whether the issue is or isn't seen as a moral one. Personally, I would say modern, especially secular moral oughts are often fundamentally conditional, as in 'one ought to do X if....'. This seems particularly true of any morality that has identified goals or rationales, and there are a number of those. Or perhaps you are only referring to adherents of The One True Moral Theory™ (ie your preferred type). Perhaps we could say that for you and them, morality is not about opinions, but about unconditional, objective, universal facts?

Your question "Where does that leave your claim that moral matters are not a matter of opinion, since this would include whether something was a moral matter or not?" indicates a mistaken assessment of what it is for something not to be a matter of opinion. If I do not know whether something is the case, that in no way implies that it's a matter of opinion, or in other words, that there is no fact of the matter.

We have previously agreed that disagreement does not necessarily mean an absence of an objective moral fact. That's in the bag, Angra, quite a while ago. But, how exactly do you get to saying that there really is an objective moral fact of the matter, in this case (of whether something is or isn't a moral issue)? How, in this case and many others, can you reliably or with any certainty tell the difference between disagreement because of some sort of relativity, and disagreement because someone is making a mistake about an objective fact?
 
P1 is false. Most humans want to survive, but not all.

Again I'll just single out one item from what is, imo, a very problematic post.

It's not all humans who think something (even killing) is morally wrong. And yet, as on a few similar occasions when you switch standards, that didn't bother you when you wanted to claim universality about morals, because you inserted and accepted caveats such as 'normal' (human), 'non-defective' (human) and 'widespread'. Now those sorts of caveats have conveniently disappeared.
 
ruby sparks said:
I don't understand very much of that at all, so I'll just pick out one thing. Can you explain what you mean by, and substantiate, saying that moral oughts are usually understood as non-conditional?
Yes. When people make moral assessments like "Person A ought to X", they do not withdraw the judgment upon learning that A does not want to X. On the other hand, in ordinary means-to-ends judgments, people usually do withdraw the judgment "Person A ought to X" upon learning that A does not want to X. The most common interpretation of this is that 'ought' has a different meaning in the moral vs. means-to-ends case. An alternative would be that the moral 'ought' is a means-to-ends 'ought' but with an implicit 'in order not to behave immorally' or 'if you intend to avoid immoral behavior' clause, so it's telling someone what they ought, in the means-to-ends sense, to do in order not to behave immorally (which, if true, remains the case even if the person wants to behave immorally, because it's conditional on the person having that intent).

ruby sparks said:
We have previously agreed that disagreement does not necessarily mean an absence of an objective moral fact. That's in the bag, Angra, quite a while ago. But, how exactly do you get to saying that there really is an objective moral fact of the matter, in this case (of whether something is or isn't a moral issue)?
But that is another issue. You were raising an objection to there being an objective fact of the matter. Still, I would say this:

In this particular case, it is unclear whether the person is making a moral claim or not (but that is a matter of evidence of what the person means to say), so:


1. Assuming hypothetically that the person is making a moral claim, then I reckon that there is a fact of the matter, for the same reasons I do in the other moral cases.

2. Assuming hypothetically the person is not making a moral claim, then I still reckon that there is a fact of the matter, for the same reasons I do in the other means-to-ends 'ought' cases.

Recall, however, that this thread is to deal with the is/ought objection, not to argue that there is a fact of the matter in moral cases, in general. I could make another thread, but maybe B20 will make his case, and you will have better chances of understanding him than me, given previous experience.


ruby sparks said:
It's not all humans who think something (even killing) is morally wrong. And yet, as on a few similar occasions when you switch standards, that didn't bother you when you wanted to claim universality about morals, because you inserted and accepted caveats such as 'normal' (human), 'non-defective' (human) and 'widespread'. Now those sorts of caveats have conveniently disappeared.
I never switched standards in these threads. Of course, if you had said 'Humans ordinarily want to survive', or even 'Normally functioning humans want to survive', etc., I would not have objected to that particular premise. But of course, the others would have been subject to objections.
But let us interpret the argument in one of those manners:

P1. Humans ordinarily want to survive.
P2. Humans ordinarily need forests in order to survive.
C1. Humans ordinarily ought to preserve forests.
Then P1 is true. But P2 is false. Most humans do not need forests in order to survive, even under ordinary conditions. Historically, there have been humans living in environments without forests, and that holds true today. If all forests disappeared and were gradually replaced with some other environment (as is likely to happen), humanity would survive, and so would the vast majority of humans. Moreover, the extinction of forests does not need to be a bad thing for humans, on average. It depends on how it happens (and whether it's a bad thing in general, I don't know. How much suffering exists due to forests and other wild areas?).

But the argument is also a bad one in a different sense, if we assume that the 'ought' is a moral one, and especially if 'preserve' involves an active duty rather than a duty not to destroy (but even then). In general, the problem is that from the fact that one wants X, it does not follow (nor is it likely) that one has a moral obligation to take action so that one can in fact obtain X (put simply, there is generally no moral obligation to obtain what one wants to obtain; it is not immoral to fail to obtain what one wants, again in general).
 