ruby sparks said:
It seems to me you are assuming the word 'identical' and that it needs to be included. Otherwise, your P2 does not work, as I see it.
I do not understand why it looks that way to you, given that I had just explained (in the very post you were replying to) that I was not making that assumption at all. If you say P2 does not work without the assumption, then fine, we have a disagreement: I say it is true, you say it is false. No problem, we can discuss a disagreement. But since P2 explains what I mean by the words I am using, if you intend to discuss what I am actually saying, you may not replace it with something else. No, P2 does not involve that assumption at all. It is simply not what I am saying.
ruby sparks said:
To try to demonstrate, consider the following as regards P2: if you provide the same inputs to two machines, but the two machines are only very similar but not identical, the outputs will not be the same.
That is false. For example, you have a computer, and I have a computer. They are similar, but surely not identical. I am typing on my keyboard, making text documents with a program named xed. Suppose you install xed (first installing Linux, if you are running another OS). Suppose then that you type "The cat is on the mat" and save the document. Suppose I also type "The cat is on the mat" and save the document. Suppose then that we each open our document, copy and paste the contents, and post them here. As long as both your computer (including software and hardware) and my computer are functioning properly, the outcome is the same.
But now suppose our computers are not running the same program. I keep using xed, and you use whatever it is you use, which very probably is not xed. You can still produce a text document with the same content. That is the outcome that matters. You do not need an exact particle configuration. What you need is for both systems to say 'morally wrong' (or 'X is more immoral than Y', and so on) when given the same description of a behavior.
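If it helps, here is the point as a minimal sketch in Python (the two functions are hypothetical stand-ins for xed and for whatever editor you use; they are not real programs):

from pathlib import Path

def save_with_editor_a(text, path):
    # Editor A writes the whole string in one call.
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)

def save_with_editor_b(text, path):
    # Editor B writes character by character: different internals,
    # same resulting document.
    with open(path, "w", encoding="utf-8") as f:
        for ch in text:
            f.write(ch)

save_with_editor_a("The cat is on the mat", "a.txt")
save_with_editor_b("The cat is on the mat", "b.txt")

# Barring a malfunction (disk error, etc.), the two files are identical.
assert Path("a.txt").read_text(encoding="utf-8") == Path("b.txt").read_text(encoding="utf-8")

Two systems with different internals, given the same input, produce the same output; that is the only sense of 'sameness' my argument needs.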
As my example in the previous post shows, this is common among humans. I do not know why the bear example did not persuade you, but let me try again:
Suppose that Alice and Bob enter a cave, take a look, and see that there is no more than one exit. Then they leave the cave, and they see a bear go into it. As they continue to watch the cave's entrance, they see another bear go in. A while later, they see one bear come out. They do not see any other bears. Now, Alice and Bob are not identical humans. But at this point, either Alice and Bob both believe there is exactly one bear in the cave (maybe a dead one, but one), or else something in Alice's brain or in Bob's brain is malfunctioning. It would not be the moral sense, but rather whatever mechanism counts objects, or whatever makes intuitive probabilistic assessments, or some other system. But something would have to be malfunctioning.
In other words, the systems of two different humans who witness the bears as described will yield the same verdict: there is exactly one bear in the cave. Any other verdict would involve a malfunction in at least one of their brains.
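The structure of the case can be put in code, too (a hypothetical sketch; the 'in'/'out' events stand in for what Alice and Bob observe):

events = ["in", "in", "out"]  # two bears go in, one comes out

def count_by_tally(events):
    # Mechanism 1: keep a running tally as the events come in.
    n = 0
    for e in events:
        n += 1 if e == "in" else -1
    return n

def count_by_totals(events):
    # Mechanism 2: count entries and exits separately, then subtract.
    return events.count("in") - events.count("out")

# Different internals, same inputs, same verdict: exactly one bear remains.
assert count_by_tally(events) == count_by_totals(events) == 1

If one of the two mechanisms returned anything other than 1 on these inputs, we would say it is malfunctioning; we would not say it merely 'differs'.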
ruby sparks said:
'Properly functioning' does not seem to cover it. Two very similar but slightly different machines (which I am suggesting is what human brains are) could both be properly functioning. In short, 'properly functioning', I'm suggesting, has wiggle room. Identical does not.
No, it does cover it. Two very similar machines give the same output under many conditions, barring malfunction. What happens is that, as Alice and Bob are not identical, their brains will not yield the same verdicts about everything. For example, maybe Bob finds Mary sexually attractive, but Alice does not. That, of course, need not imply any malfunction. On the other hand, they yield the same verdict in the bear-counting case, and in many, many others, save for malfunction. P2 is the hypothesis that this phenomenon, which holds for many, many subsystems in humans, in computers, and in many other systems, also holds for the moral sense.
There is still some extra wiggle room at the edges, so to speak, because I am the one stipulating the conditions and you asked me what I meant: what I mean is P2 with some of that wiggle room, which is meant to account for the vagueness of terms resulting from very slight differences. But those exceptions would be minimal (there are people who would not make that room, and would assert P2 without any exceptions for vagueness; but you asked what I meant).
ruby sparks said:
Unless you are assuming the conclusion at the outset, that there is a universal morality. In that case, yes, only one of the machines would be functioning 'properly' (in line with the supposed universal morality). But we don't know if it's the case, and that's a problem. It could just be that the two machines differ. End of. Personally, I suspect that this is the case, partly because I think morality is a mental construct that does not exist in the natural world. And if I were to temporarily use your approach back at you, I might ask you to demonstrate to me that that is false.
Well, I reckon that there is a universal human morality. But let me point out that nearly everyone believes that, and it is no more an assumption than their 'assumption' that other people also have minds. It is part of normal human experience and normal human belief. You would need some argument to challenge it and knock it out, not to establish it: people are justified in believing that others have minds without having an argument for it, and the same goes for universal human morality.
I would say it is not my burden to demonstrate that that is false; it would be yours to demonstrate your claims. For example, you claim that morality does not exist in the natural world. Well, I do not know how you distinguish between the natural world and the rest of the world, but it is your claim, and you should establish it. At any rate, experiments with monkeys and apes of other species show that they too make judgments about fairness, rule-breaking, and desert, that they punish, and the like. So this is not something humans invented out of thin air. But again, it would not be up to me to show this, as it is the ordinary, default human position.
Still, I will give it a try if you want me to. Suppose that there is no universal moral sense, so different people just make different moral assessments because they have different systems. Then what we have is either a system that works in a culture-relative manner, or a system that works at an individual level. What do I mean by this? I will present the scenarios below, with additional premises that give the alternatives at least some universality. Let us compare three main options, plus a catch-all (a short schematic sketch of the differences follows the list).
Scenario 1. Universal human moral sense.
P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2: If two humans have properly functioning moral senses, they (i.e., the moral senses) will yield the same outputs given the same inputs.
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.
Scenario 2. Culture-relative moral sense.
P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.1: If two humans have properly functioning moral senses and the two humans are members of the same group, they (i.e., the moral senses) will yield the same outputs given the same inputs. The groups are instinctively formed depending on social interaction.
P2.1': If two humans have properly functioning moral senses, they will yield the same outputs given the same inputs within some proper subset of the moral domain (which subset is a matter for future research in human moral psychology).
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.
Scenario 3. Individual taste moral sense.
P1: There is a system by which humans make moral assessments, on the basis of some inputs, which consist of some information.
P2.2: If two humans have properly functioning moral senses, they (i.e., the moral senses) might not yield the same outputs given the same inputs, even if the two humans belong to the same culture, with the exception of some proper subset of the moral domain (which subset is a matter for future research in human moral psychology).
P3: Humans have motivations linked to the verdict of their moral sense. In particular, they are inclined not to behave unethically, and to punish those who do.
Scenario 4. All other alternatives, lumped together to make the comparison manageable.
(As before, P2 and its variants should be understood as allowing some room for vagueness - but not much; still, we can leave that for later.)
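To make the logical structure of the P2 variants explicit, here is a small schematic sketch in Python (sense_a and sense_b are hypothetical stand-ins for two humans' properly functioning moral senses):

def same_verdicts(sense_a, sense_b, inputs):
    # True if the two (properly functioning) senses agree on every input.
    return all(sense_a(x) == sense_b(x) for x in inputs)

# Scenario 1 (P2):   same_verdicts holds for any two humans, on all inputs.
# Scenario 2 (P2.1): it holds whenever the two humans belong to the same group;
#          (P2.1'):  across groups, it holds only on some proper subset of inputs.
# Scenario 3 (P2.2): it is not guaranteed even within one culture, except on
#                    some proper subset of inputs.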
The reason for the exception in P2.1' and P2.2 is that it holds pretty much everywhere, even when individual verdicts often differ on other matters. For example, suppose Alice reckons tomatoes are tasty, but Bob reckons they taste bad. That may well happen without any malfunction in the system by which Alice or Bob assesses gustatory taste. On the other hand, suppose that in an experiment they are given something harmless, but artificially flavored to taste (to humans) like rotten meat, rotten eggs, and horse droppings combined.
In this case, either both agree that it tastes horribly bad, or else Alice or Bob (or both) has a malfunctioning system (I am talking about sincere assessments of taste). So human variation results in different tastes between humans - but only up to a point. Of course, one could remove that hypothesis too; but if I am going to argue against it later, I am not going to make it harder for myself by including conditions that rule out the most obvious forms of debunking.
Before I go on, I would like to ask whether you think Scenario 4 is so improbable that we can rule it out and focus on Scenarios 1, 2, and 3. I know you already said P1 and P3 are okay, so that's good. I also know you reject P2. But I would like to know whether what you had in mind is captured by the disjunction of Scenarios 2 and 3. If not, is there any other alternative you think is worth considering, so that I can include it as well?