ruby sparks said:
Also, I think it's odd that someone who claims to think logic is very important in all of this is willing to say it's not a problem if the is/ought argument is a formal fallacy.
Let us assume that making moral assessments using our moral sense involves a fallacy, and that making them on the basis of information about the moral senses of others is also a fallacy. Then what we get is that the way we gain information about the outside world involves a fallacy in all cases (or at least all but what Torin classifies as 'sensations'; while I argue the classification makes no relevant difference in this context, I can even grant that it does and the problem remains just as much).
Then, under that assumption, my options - logically - are as follows: Either
a. This particular kind of fallacy is not a problem, so even if logic is important in other contexts, it is not in this one.
b. We simply have no information about the outside world, except perhaps (with another assumption) our sensations (which of course also require brain processing, but never mind), nor even what we can assess from them. The result is epistemological solipsism, which surely is false.
Using my faculties again (the only ones I have), I reckon that, under those assumptions, the proper assessment is a. So it turns out that logic is not very important in all of those cases. The reason I do not make that assessment without qualification is that I am not at all convinced that we are committing a fallacy all the time. But if we are, then clearly it is a., not b.
ruby sparks said:
Although it's not that odd, because it's exactly what someone with a prior belief about something would do: they would, at times when it was necessary or convenient, temporarily ditch a standard of inquiry they previously claimed was crucial to that inquiry if it didn't in fact fit with the prior belief. There's a word for that way of going about things.
You are very mistaken about me and the way I am doing things, and you should realize that upon reading my posts. Yet you persist in your attacks on me without correcting your errors. Still, I will keep posting, because maybe I will persuade some of the other readers/posters.
ruby sparks said:
Furthermore, the sophistry and semantics that try to justify exactly why it's supposedly not a problem seem to me odd in themselves. In your selected analogies you are not comparing like with like (you seem to have convinced yourself that in some way you are, but you are obviously not), therefore you can't necessarily draw conclusions about one from the other.
The fact that you do not realize that the analogies are indeed correct (because the matters are analogous in the sense that is relevant in this context) has several potential explanations, ranging from anger due to hostility and contempt towards me to other problems that would not be so easy to fix. At any rate, it is not something I can fix. This one is not on my end.
ruby sparks said:
Let me put it another way. 'Humans feel there are objectively right and wrong answers about colours, therefore it is reasonable to say that there are objectively right or wrong answers about colours' does not necessarily translate to 'Humans feel there are objectively right and wrong answers about morality, therefore it is reasonable to say that there are objectively right or wrong answers about morality' and I think you are pinning a lot on that particular point of comparison. Too much, imo.
Okay, let me address this point. First, I did not say 'feel'. Humans ordinarily believe, think, reckon, assess, etc., that there are right and wrong answers about color, and about morality. But leaving that aside, what I am saying is that there is no difference among the following assessments in regard to whether there is a fallacy:
P0: Humans with ordinary faculties, under standard light conditions, reckon that this ball is red.
C0: Very probably, this ball is red.

P1: Humans with ordinary faculties reckon, upon observation of many cancer patients, that cancer is an illness.
C1: Very probably, cancer is an illness.

P2: Humans with ordinary faculties reckon, after considering the matter, that it is immoral for a human being to rape another just for fun.
C2: Very probably, it is immoral for a human being to rape another just for fun.
One might of course reckon that not all of these assessments are equally rational and/or equally reasonable, but that is due to other pieces of information, not listed in the premises, that change the proper probabilistic assessment. On the other hand, there is no difference among these assessments in regard to whether the conclusion follows from the premises. It does not. Of course, one can bridge the gap with a probabilistic premise (for example, a premise of the form 'if humans with ordinary faculties reckon that X, then very probably X'), but that is equally doable in all three of them.
Remember, the purpose of this thread is not to show that there is objective morality, but rather to show that the is/ought objection against morality fails. This is compatible with there being other objections that succeed. But not this one.