• Welcome to the new Internet Infidels Discussion Board, formerly Talk Freethought.

Jokes about prison rape on men? Not a fan.

I don't see the Norwegians actually keeping people in prison longer than they deserve.
Again, begging the question that anyone "deserves" prison

You know there's a huge discussion that could be had about moral deserts and corrupt motive of both kinds.
The people they've subjected to indefinite preventive detention are generally murderers and rapists. Whether Norwegians are monsters is not determined by how they choose to label their penal practices.
Nobody has claimed here that monstrosity is a function of labeling. It is a function of whether they have a corrupt motive.
I actually read them; the reason you're accusing me of not actually reading them is because you have no moral compunctions about libeling your outgroup.
Again, clearly you did not. Or you are throwing up a massive series of straw man arguments. Just because you dislike being put on the spot for being expected to not hurt people for your own enjoyment doesn't mean I am libeling you. I suspect it just makes you feel bad. While I don't think feelings are self-justifying, often they can contain hints. This is one of the hints you should take to heart.
And no, those discussions didn't answer my inane questions -- you did not supply any reason to think your selection of ethical premises isn't based on "aroused emotional drive".
I did. You just don't pay attention, maybe because you don't like to think that someone could possibly get past Hume.

I have laid this out a few times, but I guess I'll do it again because you just really seem to like ignoring it. Pay close attention here: I have derived two classes of oughts. You can guess what those classes are pretty easily if you aren't too busy flogging your log to Hume.

First, I pointed out the class of all oughts that can be derived from is. Then I pointed out that there is a subclass, the metagoal: the class of oughts that are not unilaterally asymmetrical (and thus not contradictory against a basic moral justification when compared with someone else's).

You regularly ignore that little fact, instead straw-manning with bullshit claims like this:
If I AM on one side of a wall, AND it IS my goal to use the least energy to reach the other side, ... I ought do that thing (it is the solution to the problem).
That's an equivocation fallacy. "Ought" has two meanings -- instrumental and moral -- corresponding to what Kant called hypothetical imperatives and categorical imperatives. Adding goals gets you from an "is" to an instrumental "ought", not to a moral "ought".

Have you been influenced by Randroids? Those guys imagine whatever they dislike violates the non-contradiction principle.

...
Jesus said it; you believe it; that settles it?
...
Well then you should never sue anyone.
...
Well then you should never imprison anyone.
...

"unnecessarily" ... [=] ... "undeservedly".
Talk about fucking equivocations. First, we have a discussion that must be had about "the golden rule". You are equivocating the biblical formulation (the "positive formulation") against the negative formulation, which is "don't do unto others that which you would not have done unto you", and which has, in the invocation of the metagoal, been further distilled to "you have no justification for doing something without symmetrical consent to others that you expect others to not do to you without symmetrical consent". If you want to talk about "useful rules of thumb", maybe we can invoke your unsupported virtues that you invent from your own feelings.

At any rate, "unnecessary" means something very different depending on whether we are talking about extrinsic utility or intrinsic desert. One says "I'm going to do this because I do not deserve to be violated"; the other says "I'm going to do this thing to them because they deserve to be hurt." Good job drawing that equivocation.
I mean, speaking in terms of a specific goal for the derivation of general "oughts" is a losing battle. There is no specific goal. There is the possibility, though, of discussing a meta-goal to derive general oughts.

To me, that goal is "to have all that is necessary to do X" where X does not deprive anyone else of the same.
That's a special-pleading fallacy -- you sound like that philosopher who spent the first half of his book proving all moral claims are errors and the second half making moral claims. "To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.
That's quite a claim, in the presence of a variable. Also dead fucking wrong. Instrumental and moral oughts differ only in whether they are symmetrically non-contradictory, whether they can be invoked without creating a social contradiction.

...Social contract theory is logically incapable of delivering that which it exists for the purpose of delivering.
That's one hell of a (bullshit) claim. In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.

It is trivially easy to see that ethics are a function of society, and that ethics must be derived from where our goals conflict (or, from where they do not).

Now, as to whether the broad, complicated sophistry that is "formal social contract theory", which you might use as a straw man for my actual arguments, is in fact 'correct': I would say that it has a lot of holes. HOORAY, WE ALREADY AGREED SOCIAL CONTRACT THEORY IS BULLSHIT. In fact, if you read my posts like you claim, you would have already noticed that I pointed out the extent of its function: risk-level acceptance and resource allocation for zero- or limited-sum pools.
I can easily identify that if I wish to have my meta-goal stay as intact as possible, I must respect the meta-goals of others as much as possible. Punishment for the sake of vengeance rather than only as a last resort in behavior modification fits right into "unnecessary", almost trivially so.
Some people's meta-goal is to have justice done.
No, it is not. Because that metagoal is contradictory; their goal, I guarantee you, includes not having 'justice' done to them, being free of 'justice'. Nobody wants to be punished, else it is not "punishment".
So Gary wants to go to synagogue on Saturday and work on Sunday
So, Gary's goal does not unilaterally invoke Bob
; Bob has goals that require everyone to work on Saturday and go to church on Sunday.
Bob's goal unilaterally invokes Gary. There you go, it's already not up for debate with the social consensus. If I can swap the name 'Jesus' for 'Muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (Jesus as opposed to Muhammed, neither of which is justifiable against observable reality; come back to me when you prove Jesus and God and all that exist).

The role of social consensus here is limited to probabilistic outcomes: we accept some probability of being harmed because our actions generate a probability of harming others; the social contract only determines the probabilities and the extents of harm allowed against the metagoal in the context of the risks we generate for others. For instance, I accept the risks of being harmed while others are driving by driving and imposing those risks on others, with consent through action. The formal social contract in this context merely formalizes the observation and makes explicit the vote.

Of course, I do invoke a second role of social contract: it can also serve to formalize etiquettes for the disposition of limited social resources.

In this society you have invoked, you have already taken things too far in invoking an expectation of Jesus worship, as that speaks neither to what probabilities of harmful actions are allowed outside of special pleading, nor to the disposition of limited resources.
Clearly what's going on here is you picked symmetry as your ethical premise, due to an aroused emotional drive. You transparently have a symmetry boner.
No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner. So should you. So should everyone. Anything else is, well, illogical.
Stomping the bread into the ground so you could both die is by definition harming the goals of others, as a goal in and of itself.
It's my bread, not his. His metagoal cannot be harmed because he has no justification to take the bread. The destruction of the bread is, in fact, a product of his own unethical behavior as a result of what was an absolutely fair way to decide what happened to the bread. All other things equal, if might makes right, we both die, because I will be forced to fight to the death lest I die anyway. All other things equal, if we play any other game for the bread, it comes down to probabilistics anyway. So no matter what, it comes down to probabilistics.

So his goals END as soon as he loses whatever game we decide on. It is by definition not harming them because his goals have already by his own consent been ended.

The only option for either of us was always "get a 50/50 chance at bread"; the cost of accepting a chance of getting that bread without mortal harm is accepting the consequences of cheating (namely mutually assured destruction); I just figure it's better to be starving to death without also being heavily injured in a fight that's likely to destroy the bread anyway.

Maybe you missed the fact that RS in this scenario of stomping the bread is already presumed to have lost the coin toss. If he wins, he gets bread without violence, and I starve.

At that point, who is being unethical, again? Oh yeah, the person who would create a situation where they may get bread without injury and someone else starves, but they refuse to offer a situation where the other may attain the same. If we both play by the rules, we universally have a better chance at survival.
 
Jarhyn said:
In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.
That is false. Purely for example, suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.


Side note: The part about the cat is not my idea. When I was in high school, some kids were bragging about how they did that to a cat and how much fun it was; I do not know whether it was true or they just made it up. But they wanted others to believe them.

Jarhyn said:
No, it is not. Because that metagoal is contradictory; their goal is, I guarantee you, including to not have 'justice' done to them, to be free of 'justice'. Nobody wants to be punished, else it is not "punishment".
a. It is not contradictory to want to have justice done to everyone, and even on oneself if one were to deserve it.

b. It is not even contradictory to be biased and want to have justice done to other people.

c. It is not contradictory to want to have justice done by the government on those who engage in heinous crimes, but to leave minor unethical behaviors out of it, and to the punishment regularly inflicted by humans on one another by means of condemning each other's behavior, mocking each other, etc.

d. There is no reason to even suspect that the number of people who would want to be imprisoned when they do not deserve it is greater than the number of people who would want to be imprisoned if they were to do something for which they would deserve it.


Jarhyn said:
No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner. So should you. So should everyone. Anything else is, well, illogical.
But your logic is flawed. B20's is not. Neither is mine. You are making logical errors in believing that we are making logical errors.


Bomb#20 said:
Uh huh. So Gary wants to go to synagogue on Saturday and work on Sunday; Bob has goals that require everyone to work on Saturday and go to church on Sunday. Through social consensus it's agreed to damage Gary's metagoal of satisfying his own religious obligations, because his unilateral goal is mutually exclusive with the deemed-acceptable metagoal of the social consensus, which is to have as many people as possible be saved through knowing Jesus, so Gary's goal gets rejected.
Jarhyn said:
So, Gary's goal does not unilaterally invoke Bob

Jarhyn said:
Bob's goal unilaterally invokes Gary. There you go, it's already not up for debate with the social consensus. If I can swap the name 'Jesus' for 'Muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (Jesus as opposed to Muhammed, neither of which is justifiable against observable reality; come back to me when you prove Jesus and God and all that exist).

Gary wants to pour gasoline on a cat and set it on fire every Saturday, because he has fun watching a fire ball run.
Bob has goals that require that everyone refrain from setting cats on fire for fun, and further require that failing that, police try to arrest people who set cats on fire for fun.

So, Bob has goals that unilaterally invoke Gary and other people. It's already not up for debate. Bob is behaving unethically. Gary is not. This is what your ethical theory predicts. Since this is false, it follows that your ethical theory makes false predictions, so it has been tested and shown to be false (it had already been shown to be false, on other grounds, but there is no harm in showing it again).
 
... suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.

If there were other people around, most of them would likely agree Joe behaves unethically. Also, if other humans either arrived, or in fact had, unbeknownst to Joe, survived the effects of the biological weapon, and it was found that Joe, who had never before harmed animals, had been very severely traumatised by either believing himself to be or being the only human left alive, they might give him compassionate therapy rather than any form of punishment.

So in the first instance, it does not seem possible to say that Joe was actually, independently, really, factually, objectively being unethical, and in the second instance retribution is not deemed the correct response. Which I suggest puts a dent in both moral realism and retributivism.
 
My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, and that our ethical theories are attempts to approximate it; they are, in fact, mere approximations.

I just want to get back to this. Yes, there are, imo, principles in nature that caused the emergence of ethics in humans, but I would say they are not themselves ethical principles.
 
... suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.

If there were other people around, most of them would likely agree Joe behaves unethically. Also, if other humans either arrived, or in fact had, unbeknownst to Joe, survived the effects of the biological weapon, and it was found that Joe, who had never before harmed animals, had been very severely traumatised by either believing himself to be or being the only human left alive, they might give him compassionate therapy rather than any form of punishment.

So in the first instance, it does not seem possible to say that Joe was actually, independently, really, factually, objectively being unethical, and in the second instance retribution is not deemed the correct response. Which I suggest puts a dent in both moral realism and retributivism.

In the first instance, it is stipulated that Joe does it for fun. As it is stipulated that Joe has human intelligence and no further stipulation is made, that seems to suffice to make it unethical. You are just making claims that go against the ordinary human moral sense, which is a proper tool to find moral truth (if you claim otherwise, the burden is on your side, as is on anyone claiming that our faculties are, in a specific case, misleading us; and yes, sometimes they fail, but we can only assess that using also some of our faculties, which we trust; failure is very unlikely barring specific evidence).

However, as I only need a counterexample to show that Jarhyn's claim is false, I can just modify the scenario (not needed, but why not?):

1. Suppose that all people die due to a rogue biological weapon, except for Joe, who was a serial killer. While he is happy to see all the suffering and death caused by the biological weapon, a few weeks after everyone else is dead, he is frustrated by the lack of humans to murder. So, as a substitute, Joe decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.


2. Suppose that all people die due to a rogue biological weapon, except for Joe, who has several times set cats on fire for fun. To have further fun, Joe decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.
 
In the first instance, it is stipulated that Joe does it for fun.

Ok, so he enjoys it. Whatever. If so, he arguably deserves compassionate therapy, not punishment. So much for your retributivism.

As it is stipulated that Joe has human intelligence and no further stipulation is made, that seems to suffice to make it unethical.

Obviously, it seems that way to you, possibly to me, possibly to most people, maybe even all 'normal' people. But the problem is, 'seems to be unethical (to humans)' falls short of being really, actually, objectively, factually unethical. Which is where your theories eventually run into trouble.

And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated. The idea that there are correct answers to all moral issues is not demonstrated merely by picking an easy example of where almost everyone would agree.

You are just making claims that go against the ordinary human moral sense, which is a proper tool to find moral truth (if you claim otherwise, the burden is on your side, as is on anyone claiming that our faculties are, in a specific case, misleading us; and yes, sometimes they fail, but we can only assess that using also some of our faculties, which we trust; failure is very unlikely barring specific evidence).

On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.

However, as I only need a counterexample to show that Jarhyn's claim is false.....

I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn. I'm not sure how much I agree with Jarhyn either, since he seems to think nature is ethical.
 
ruby sparks said:
Ok, so he enjoys it. Whatever. If so, he arguably needs compassionate therapy, not punishment. So much for your retributivism.
First, it's not only that he enjoys it. It's that he does it for fun, so that is the reason (not just a secondary reason).
Second, he deserves punishment, regardless of what he needs - what he needs is food, water, air, etc.
Third, your claim against retributivism is just that: a claim that goes against the ordinary human moral faculty. As is the case when you question any human faculty, the burden is on you. It would be irrational to dismiss our faculties for no good reason - reasons we can only assess by means of some of our faculties, obviously.
Fourth and foremost, this misses the point entirely. The point of the example was not that he deserves punishment (though he does), but that he behaves unethically. That is sufficient to show that Jarhyn's theory is false.


ruby sparks said:
Obviously, it seems that way to you, possibly to me, possibly to most people, maybe even all 'normal' people. But the problem is, 'seems to be unethical (to humans)' falls short of being really, actually, objectively, factually unethical. Which is where your theories eventually run into trouble.
No, that is not a problem. The problem is with your qualifiers. But that the ordinary human moral faculty deems a behavior unethical is in fact sufficient evidence to conclude that it is so, just as is the case with other ordinary human faculties, and barring specific evidence to the contrary.
For that matter, if it seems blue to a normal human visual system under ordinary light conditions, that is pretty much sufficient evidence to reckon that it is blue. It's what rationally one should reckon, barring a lot of counter evidence. The same for the verdicts of other human faculties, in this case the moral faculty. It's you who has the burden of showing that his behavior is not unethical.

Again, your qualifiers only complicate matters. The 'really' qualifier does not seem to add anything; it's just an intensifier. As for the others, I already addressed them.


Now, if you were correct and the ordinary human moral faculty were not enough to justify our moral assessments, then nothing would be, and Jarhyn's theory would be unwarranted. The reason for that is that we do not have any tools for assessing whether a behavior is unethical or not other than the ordinary human moral faculty - our own, and that of other people - aided of course by other faculties (e.g., to make intuitive probabilistic assessments about expected consequences of some behavior), but in the end, our moral faculty is the tool to make ethical assessments.

What about moral theories?

None is true. However, even if one were true, those theories can only be properly tested against the judgments of the human moral faculty (or against something already based on it), so even then, we would only be justified in believing them true if they pass the test when their predictions are tested vs. the human moral faculty.

Incidentally, something like the above holds for color too: we may have cameras and computers that can detect blue stuff, but we only have them because they have been calibrated using human color vision (or tools already based on it).


ruby sparks said:
And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated.
You seem to have lost track of the conversation. Again, I was showing that Jarhyn's theory was false. In order to test a theory, I just need to compare its predictions with some known facts. So, the extreme examples are pretty adequate.

So what, you say?

So, "'all normal, decent, intelligent humans would think this wrong'", but the theory I am debunking entails it is not morally wrong.


ruby sparks said:
On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.
No, that is not true. It would be irrational to question one of our faculties without good evidence against it - evidence which, of course, we also assess on the basis of our human faculties!

We do not do that normally. For example, we do not demand of people who say a traffic light was red that they show that the human visual system is a proper tool to figure out whether something is red. Sure, there are arguments for a color error theory (they fail), but the burden is on the claimants.


ruby sparks said:
In fact, demonstrating that there is even such a thing as moral truth in the first place is a burden you might want to try to lift before you even get on to the other one.
No, not at all. That some behaviors are unethical is obvious by normal human assessments. The burden is on you to show otherwise in the first place.

However, in this context you miss the point again. Jarhyn's theory entails that there is such a thing as moral truth - ethical truth in his terminology. So, in order to argue against it, it is proper to assume there is (else, the theory is false on that account alone). This does not even depend on whether it is proper to reckon in general (i.e., when not arguing against a specific theory) that there is moral truth (it is, but that is really not the point here).


ruby sparks said:
I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn.
Are you serious?
Discussing it with Jarhyn is exactly what I was doing when you jumped in: you jumped on a post in which I was replying to Jarhyn's ethical theory. You took that post out of context. Of course Jarhyn agrees that there are ethical truths (read his posts!), and his theory entails that there are (read his posts!), so it would be proper on my part to assume there is ethical truth in the context of testing his theory, even if it were not proper to reckon in general (i.e., in other contexts) that there is ethical truth.


Let me try it another way. Suppose that someone claims that God (i.e., an omnipotent, omniscient, morally perfect agent) exists. In order to argue against that claim, it is proper to assume that there is an omniscient, omnipotent agent, and then argue that it is not at all morally perfect. Now, this case is different because there is moral truth, whereas there is no good reason to suspect there is an omniscient, omnipotent agent, but it is not different in the relevant sense, namely that it is okay to use as a hypothesis one or more of the implications of the theory one is criticizing.
 
Jarhyn... he seems to think nature is ethical.

This is a very inaccurate statement. I do not think nature is ethical. I think that certain contexts in nature create situations where particular strategies are optimally beneficial. Not all nature is a river, but rivers are created at the intersection of gravity, atmospheric gas, a particular range in temperature, and erosive rock.

Ethics are natural, but not all of nature is ethical.
 
Bomb#20 said:
I don't see the Norwegians actually keeping people in prison longer than they deserve.
Again, begging the question that anyone "deserves" prison

You know there's a huge discussion that could be had about moral deserts and corrupt motive of both kinds.
Any time, sir, any time. (Though we ought to take that to M&P.)

The people they've subjected to indefinite preventive detention are generally murderers and rapists. Whether Norwegians are monsters is not determined by how they choose to label their penal practices.
Nobody has claimed here that monstrosity is a function of labeling. It is a function of whether they have a corrupt motive.
You're missing the point. You advocated 'that incarceration become indefinite and the bar for ending it being that they are considered rehabilitated rather than "suitably punished".' I argued that such a policy would be barbaric because it would allow for extreme harm to perpetrators in response to minor transgressions. You replied "I don't see anyone calling Norwegians monsters." as a counterargument. The point of my reply was not to suggest you claimed monstrosity is a function of labeling; the point of my reply was to propose an alternate explanation for your observation -- that the Norwegians may be labeling their policy with your policy's name but they are not putting it into practice -- thereby showing that your counterargument fails to imply your conclusion.

I put it to you that the reason nobody is calling Norwegians monsters is not because locking criminals up far out of proportion to what they deserve isn't monstrous, but rather because the Norwegians do not actually lock criminals up far out of proportion to what they deserve. When the Norwegians release a person convicted of a minor crime, they might well label it "We judge him to be rehabilitated" rather than labeling it "He's served the deserved sentence", but what they call it is irrelevant. People are deciding the Norwegians aren't monsters because when the criminal doesn't deserve to be in prison any more, the Norwegians set him free.

So we have two competing explanations for your observation. That means if you want your observation to qualify as supporting evidence for your favored policy not being barbaric, then you'll need to show your explanation is right and mine is wrong, because if my explanation is correct then your observation is perfectly compatible with "the bar for ending it being that they are considered rehabilitated" nonetheless being barbaric. The Norwegians are letting people go once they've been "suitably punished".

I actually read them; the reason you're accusing me of not actually reading them is because you have no moral compunctions about libeling your outgroup.
Again, clearly you did not. Or you are throwing up a massive series of straw man arguments. Just because you dislike being put on the spot for being expected to not hurt people for your own enjoyment doesn't mean I am libeling you.
No, it's the fact that you make up false damaging claims about me with reckless disregard for the truth that means you are libeling me. You have no rational basis for thinking you are an expert witness as to whether I read your posts. It takes a special level of arrogance on your part for you to imagine that your arguments are so spectacularly good that all anyone needs is to read them and he will necessarily recognize them as solving the greatest philosophical conundrums of the ages, and presumably therefore recognize you as the greatest philosopher of all time -- the man who beat Aristotle and Kant and Mill and finally figured out how to derive ethics from pure reason. I read your arguments. I was unimpressed. And you think my not being impressed proves I'm lying about reading them. Oh, for the love of god, get over yourself.

(And speaking of massive strawman arguments, you are not putting me on the spot "for being expected to not hurt people for my own enjoyment." You are putting me on the spot for favoring justice. Hurting people in order to do justice is not the same thing as hurting people for enjoyment; your willingness to equivocate on this point does not do you credit. When people hurt for the sake of enjoyment, it's okay with them if they're hurting innocent people -- a characteristic which puts them in the same camp with people who hurt for the sake of deterrence, or for the sake of rehabilitation, or for the sake of incapacitation.)

And no, those discussions didn't answer my inane questions -- you did not supply any reason to think your selection of ethical premises isn't based on "aroused emotional drive".
I did. You just don't pay attention, maybe because you don't like to think that someone could possibly get past Hume.
You should probably lay off speculating about other posters' psychology -- you stink at it. Let me remind you that you invoked Hume first; I merely repaid you in like coin. I think it's entirely plausible that someone will "get past Hume" -- the "Is-ought problem" is overrated -- but AM's approach appears to me to have better prospects for success than yours.

I have laid this out a few times, but I guess I'll do it again because you just really seem to like ignoring it. Pay close attention here: I have derived two classes of oughts. You can guess what those classes are pretty easily if you aren't too busy [vulgarity once again imputing to me your own fantasy about] Hume.

First, I pointed out the class of all oughts that can be derived from is.
By "pointed out", you appear to be referring to something you asserted. You did not supply any evidence that there were no others besides the hypothetical imperatives you exhibited.

Then I pointed out that there is a subclass, the metagoal, which is the class of oughts that are not unilaterally asymmetrical, and thus contradictory against a basic moral justification when compared to someone else.
By "contradictory against a basic moral justification" you appear to be referring to their incompatibility with your personal favorite ethical premise. The circumstance that your preferred hypothetical imperative does not contradict some ethical assumption that you happen to like is not enough to magically transform it into a categorical imperative, and thereby "get past Hume".

You regularly ignore that little fact
I didn't ignore it. Which part of '"To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.' didn't you understand? You deciding you're more impressed by your own verbiage than by my refutation does not give you license to falsely claim I ignored you -- particularly seeing as how you quoted my response back to me, and you swore at me over it. Why don't you have any moral compunctions about just making up garbage about your opponents?

If I AM on one side of a wall, AND it IS my goal to use the least energy to reach the other side, ... I ought do that thing (it is the solution to the problem).
That's an equivocation fallacy. "Ought" has two meanings -- instrumental and moral -- corresponding to what Kant called hypothetical imperatives and categorical imperatives. Adding goals gets you from an "is" to an instrumental "ought", not to a moral "ought".

Have you been influenced by Randroids? Those guys imagine whatever they dislike violates the non-contradiction principle.
...
Jesus said it; you believe it; that settles it?
...
Well then you should never sue anyone.
...
Well then you should never imprison anyone.
...
"unnecessarily" ... [=] ... "undeservedly".
Talk about <expletive deleted> equivocations.
You just put words in my mouth. I didn't say or imply that "unnecessarily" = "undeservedly". I didn't write "[=]" or anything that meant "[=]". "[=]" is completely unreasonable as an attempt at paraphrase. You just made it up and spliced it between two words I did say. That's unethical. What makes you think a person who would do something that unethical to another poster is competent to lecture the rest of us about ethics?

First, we have a discussion that must be had about "the golden rule". You are equivocating the biblical formulation (the "positive formulation") against the negative formulation, which is "don't do unto others that which you would not have done unto you",
Sorry, my bad. Make that "Confucius said it; you believe it; that settles it?". The distinction between the positive and negative formulations is a quibble. You can't derive either form from pure logic, and both forms are vulnerable to the problems I pointed out.

which has in the invocation of the metagoal been further distilled to "you have no justification for doing something without symmetrical consent to others that you expect others to not do to you without symmetrical consent".
And you have evidence, do you, that we all consent to be locked up if others want to rehabilitate us, but we don't consent to be locked up if we deserve it?

If you want to talk about "useful rules of thumb", maybe we can invoke your unsupported virtues that you invent from your own feelings.
What's your point? Did I claim my own useful rules of thumb get past Hume? You're the one making the big claims here, so you're the one with burden of proof.

At any rate, there's a big difference between "unnecessary" with respect to whether we are talking about extrinsic utility vs intrinsic desert. One says "I'm going to do this because I do not deserve to be violated" vs "I'm going to do this thing to them because they deserve to be hurt." Good job drawing that equivocation.
I made no such equivocation. I simply pointed out that the Golden Rule is inherently ambiguous: in your latest phrasing, it's the word "something" that's ambiguous. Whether what you do to others qualifies as the same thing as the "something" you don't want them to do to you depends entirely on how you choose to characterize it, and you can characterize the same act in a million different ways. Utility and desert are simply two examples of that ambiguity. I didn't claim they were equal to each other. They're two different tools that people with two different moral judgments can equally well use to shoehorn what they do into satisfying the ambiguous Golden Rule.

All that aside, you have evidence, do you, that we all consent to be locked up if others suspect we will violate them so they think it has extrinsic utility to them, but we don't consent to be locked up if we intrinsically deserve it?

I mean, speaking in terms of a specific goal for the derivation of general "oughts" is a losing battle. There is no specific goal. There is the possibility, though, of discussing a meta-goal to derive general oughts.

To me, that goal is "to have all that is necessary to do X" where X does not deprive anyone else of the same.
That's a special-pleading fallacy... "To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.
That's quite a claim, in the presence of a variable.
You seriously think you can derive philosophy from surface syntax? The goals you call "specific" to distinguish them from your "meta-goals" have variables too. "Maximize total happiness." means "If X leads to more happiness than Y, choose X." "Moderation in all things; seek the Golden mean." means "If X < Y < Z, choose Y."

Also dead <expletive deleted> wrong. Instrumental and moral oughts are only differing in whether they are symmetrically non-contradictory,
Why should anyone take your word for that? Because you say it with a resonant and well modulated voice? Because you have a symmetry boner? Show your work. Instrumental and moral oughts appear prima facie to be differing in that "But I don't want to reach the other side of the wall" is generally perceived to be a good reason for not doing the thing one supposedly ought.

whether they can be invoked without creating a social contradiction.
Is that the same thing as a regular contradiction, or is it something different?

...Social contract theory is logically incapable of delivering that which it exists for the purpose of delivering.
That's one hell of a (<expletive deleted>) claim.
Social contract theory was made up by Thomas Hobbes to justify his claim that we all owe absolute obedience to the King, as a rhetorical device, because the traditional justification -- the Divine Right of Kings -- had stopped impressing people. In the absence of a god to magically prevent infinite regress in justifications for ethical obligation, people were proposing all manner of alternative foundations, or becoming skeptical about ethical claims in general. Whenever somebody made a moral claim, somebody else would say "Why?", and to whatever answer was given, somebody would say "Why?" to that too, so it was getting harder and harder to make the public believe "Because the King said so" was a good reason for anything. Hobbes' solution was to short-circuit all those "Why?s" and all those conflicting theories, by answering "Because you promised to". Nearly everybody agreed that people should keep their promises. But of course, as a matter of logic, this fails. "Why should people keep their promises?" is every bit as good a question as "Why should people take orders from gods?".

In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.
AM has refuted this admirably.

In fact, if you read my posts like you claim, you would have already noticed that I pointed out the extent of its function: risk level acceptance and resource allocation for zero- or limited-sum pools.
Yes, certainly. What's your point? You said "by having a common agreement". We don't have a common agreement; and even if we did, having a common agreement wouldn't magically make the "Why should people do what they agreed to?" question go away. To suppose it would is an appeal to magic or an appeal to "aroused emotional drive". The fact that you only want to apply it to risk and resource distribution rather than to every stupid command some Stuart king issues is great -- go you! Big step in the right direction. Just like if you rely on your horoscope only for scheduling your appointments and don't base actual foreign policy on it. Doesn't change the fact that when you wrote "by having a common agreement", you doomed any remaining possibility of having your theory "get past Hume", at least as far as risk acceptance and resource allocation are concerned.

I can easily identify that if I wish to have my meta-goal stay as intact as possible, I must respect the meta-goals of others as much as possible. Punishment for the sake of vengeance rather than only as a last resort in behavior modification fits right into "unnecessary", almost trivially so.
Some people's meta-goal is to have justice done.
No, it is not. Because that metagoal is contradictory; their goal, I guarantee you, includes not having 'justice' done to them, being free of 'justice'. Nobody wants to be punished, else it is not "punishment".
And? Nobody wants to be incarcerated for the sake of behavior modification either. People change their minds when their perspective changes; and people are biased in favor of themselves. What somebody thinks satisfies his goals while he hasn't committed a crime, and once he has, are probably going to be two different things no matter what his philosophical stance is. This isn't rocket science. Your double standard is painfully obvious, probably to everyone but you. Your whole meta-goal approach to ethics was pretty thoroughly anticipated by the grand poobah of symmetry, Immanuel "Always act according to that maxim whose universality as a law you can at the same time will" Kant -- and Kant was a dedicated retributivist. Retributive penal principles are every bit as symmetrical as utility-based principles. Deal with it.

So, Gary's goal does not unilaterally invoke Bob
Bob's goal unilaterally invokes Gary. There you go, it's already not up for debate with the social consensus.
Says you. The social consensus says otherwise.

If I can change the name 'jesus' for 'muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (jesus as opposed to Muhammed, neither of which is justifiable against the observable reality;
Don't shoot the messenger. I'm doing nothing of the sort; it's the social consensus that's doing that. I'm on your side here -- Gary and you and I and the rabbi will all vote for Gary's right not to go to church; the motion is carried, 996 to 4.

come back to me when you prove jesus and God and all that exist).
The social consensus already voted that Brother Justin the preacher proved it beyond reasonable doubt.

The role of social consensus here is limited to probabilistic outcomes:
The social consensus is impressed by Pascal's Wager. It judges that the risk of Gary going to Hell outweighs the infinitesimal probability that being taught Christianity will make him more dangerous than he is as a Jew. Actually, they figure that the case for Christianity is so strong that he must not have read the pamphlets they gave him.

we accept some probability of being harmed because our actions generate a probability of harming others; the social contract only determines the probabilities and the extents of harm allowed against the metagoal in the context of the risks we generate for others. For instance, I accept the risks of being harmed while others are driving by driving and imposing those risks on others, with consent through action.
Um, no, you accept those risks by writing "I accept the risks". "Consent through action" is another way to say "consent by proxy" -- it is you determining what somebody else consents to. It is every bit as logical as Christianity's sin-by-proxy and atone-by-proxy. Social contract theory is a religion.

Of course, I do invoke a second role of social contract: it can also serve to formalize etiquettes for the disposition of limited social resources.
The social consensus votes to dismantle the synagogue to deploy its bricks and lumber to a more socially desired use.

Clearly what's going on here is you picked symmetry as your ethical premise, due to an aroused emotional drive. You transparently have a symmetry boner.
No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner. So should you. So should everyone. Anything else is, well, illogical.
You haven't exhibited an asymmetry in retributive ethical systems, merely a tendency for people to change their minds when it's their own ox being gored. But never mind that -- you haven't even exhibited a logical contradiction in ethical systems that really are asymmetrical. Here, let's make it as easy for you as it could be. Consider the ethical system "King Charles I may do whatever he pleases; everyone else has an ethical duty to obey King Charles I in all things." Go ahead: derive a logical contradiction from that.

:eating_popcorn:

Stomping the bread into the ground so you could both die is by definition harming the goals of others, as a goal in and of itself.
It's my bread, not his. His metagoal cannot be harmed because he has no justification to take the bread. The destruction of the bread is, in fact, a product of their own unethical behavior as a result of what was an absolutely fair way to decide what happened to the bread; all other things equal, if might makes right we both die because I will be forced to fight to the death lest I die anyway; we both die. All other things being equal if we play any other game for the bread, it comes down to probabilistics anyway. So no matter what it comes down to probabilistics.

So his goals END as soon as he loses whatever game we decide on. It is by definition not harming them because his goals have already by his own consent been ended.

The only option for either of us was always "get a 50/50 chance at bread"; the cost of accepting a chance of getting that bread without mortal harm is accepting the consequences of cheating (namely mutually assured destruction); I just figure it's better to be starving to death without also being heavily injured in a fight that's likely to destroy the bread anyway.

Maybe you missed the fact that RS in this scenario of stomping the bread is already presumed to have lost the coin toss. If he wins, he gets bread without violence, and I starve.

At that point, who is being unethical, again? Oh yeah, the person who would create a situation where they may get bread without injury and someone else starves, but they refuse to offer a situation where the other may attain the same. If we both play by the rules, we universally have a better chance at survival.
All that comes under the heading of "everyone is the hero of their own story and people will jump through all kinds of hoops to prove it to themselves." You are killing him, not because it's necessary to prevent a future crime, but because of his past crime. Your justifications -- that he's unethical, that he created the situation, that it's a product of his own unethical behavior, that the goals you consider legitimate for him to have are forfeited due to his cheating -- are all just high-falutin' ways to say you're killing him because he deserves it.
 
This is a very inaccurate statement. I do not think nature is ethical. I think that certain contexts in nature create situations where particular strategies are optimally beneficial.

I'm fine with that you see, but then you say things like 'ethics is the selection pressure' and that you have a radical idea about guiding principles.

The way I would put it is, there are natural laws and natural selection pressures that of themselves are neither ethical nor unethical (are a-ethical), and that what we feel, and name, ethical is our response to them.

An analogy might be beauty. Not Angra's example of colour, because there are, I would say, objective facts about colour (or at least about the wavelengths of light) that are independent of human judgements about them. I would say that this is not the case for beauty, or morality, or perhaps any human value judgement.

Ethics are natural, but not all of nature is ethical.

I do not even understand how you can say any of nature is ethical (of itself) rather than that we call it that because we have evolved certain capacities and traits.
 
First, it's not only that he enjoys it. It's that he does it for fun, so that is the reason (not just a secondary reason).
Second, he deserves punishment, regardless of what he needs - what he needs is food, water, air, etc.
Third, your claim against retributivism is just that: a claim that goes against the ordinary human moral faculty. As is the case when you question any human faculty, the burden is on you. It would be irrational to dismiss our faculties for no good reason - reasons we can only assess by means of some of our faculties, obviously.
Fourth and foremost, this misses the point entirely. The point of the example was not that he deserves punishment (though he does), but that he behaves unethically. That is sufficient to show that Jarhyn's theory is false.



No, that is not a problem. The problem is with your qualifiers. But that the ordinary human moral faculty deems a behavior unethical is in fact sufficient evidence to conclude that it is so, just as is the case with other ordinary human faculties, and barring specific evidence to the contrary.
For that matter, if it seems blue to a normal human visual system under ordinary light conditions, that is pretty much sufficient evidence to reckon that it is blue. It's what rationally one should reckon, barring a lot of counter evidence. The same for the verdicts of other human faculties, in this case the moral faculty. It's you who has the burden of showing that his behavior is not unethical.

Again, your qualifiers only complicate matters. The 'really' qualifier does not seem to add anything; it's just an intensifier. As for the others, I already addressed them.


Now, if you were correct and the ordinary human faculty were not enough to justify our moral assessments, then nothing would be, and Jarhyn's theory would be unwarranted. The reason for that is that we do not have any tools for assessing whether a behavior is unethical or not other than the ordinary human moral faculty - our own, and that of other people - aided of course by other faculties (e.g., to make intuitive probabilistic assessments about expected consequences of some behavior), but in the end, our moral faculty is the tool to make ethical assessments.

What about moral theories?

None is true. However, even if one were true, those theories can only be properly tested against the judgments of the human moral faculty (or against something already based on it), so even then, we would only be justified in believing them true if they pass the test when their predictions are tested vs. the human moral faculty.

Incidentally, something like the above holds for color too: we may have cameras and computers that can detect blue stuff, but we only have them because they have been calibrated using human color vision (or tools already based on it).


ruby sparks said:
And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated.
You seem to have lost track of the conversation. Again, I was showing that Jarhyn's theory was false. In order to test a theory, I just need to compare its predictions with some known facts. So, the extreme examples are pretty adequate.

So what, you say?

So, "'all normal, decent, intelligent humans would think this wrong'", but the theory I am debunking entails it is not morally wrong.


ruby sparks said:
On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.
No, that is not true. It would be irrational to question one of our faculties without good evidence against it - evidence which, of course, we also assess on the basis of our human faculties!

We do not normally do that. For example, we do not demand that people who say a traffic light was red show that the human visual system is a proper tool for figuring out whether something is red. Sure, there are arguments for a color error theory (they fail), but the burden is on the claimants.


ruby sparks said:
In fact, demonstrating that there is even such a thing as moral truth in the first place is a burden you might want to try to lift before you even get on to the other one.
No, not at all. That some behaviors are unethical is obvious by normal human assessments. The burden is on you to show otherwise in the first place.

However, in this context you miss the point again. Jarhyn's theory entails that there is such a thing as moral truth - ethical truth in his terminology. So, in order to argue against it, it is proper to assume there is (else, the theory is false on that account alone). This does not even depend on whether it is proper to reckon in general (i.e., when not arguing against a specific theory) that there is moral truth (it is, but that is not really the point here).


ruby sparks said:
I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn.
Are you serious?
Discussing it with Jarhyn is exactly what I was doing, when you jumped in: You jumped on a post in which I was replying to Jarhyn's ethical theory. You took that post out of context. Of course Jarhyn agrees that there are ethical truths (read his posts!), and his theory entails that there are (read his posts!), so it would be proper on my part to assume there is ethical truth in the context of testing his theory even if it were not proper to reckon in general (i.e., in other contexts) that there is ethical truth.


Let me put it another way. Suppose that someone claims that God (i.e., an omnipotent, omniscient, morally perfect agent) exists. In order to argue against that claim, it is proper to assume that there is an omniscient, omnipotent agent, and then argue that it is not at all morally perfect. Now, this case is different because there is moral truth, whereas there is no good reason to suspect there is an omniscient, omnipotent agent; but it is not different in the relevant sense, namely that it is okay to use as a hypothesis one or more of the implications of the theory one is criticizing.

Ok too much of that is confusingly interwoven with the mistaken idea that I am or was discussing Jarhyn's claims with you. As I said already, I am not and wasn't.

I will extract something though...

What about moral theories?

None is true.

This is not something I would have expected you to say, although it is what I would say. Are you not in fact, after all, claiming that your theory is true?
 
I think it's entirely plausible that someone will "get past Hume" -- the "Is-ought problem" is overrated -- but AM's approach appears to me to have better prospects for success than yours.

Without (I stress) getting involved in your exchanges with Jarhyn, I would like to offer my thoughts on this, separate to your disagreements with Jarhyn. I am not sure if the is-ought problem is overrated, but it depends what you mean. I might say that it can never be got past, although this does not prevent us from coming up with moral theories nonetheless. We pragmatically need to do that, I think, not least because (a) we are stuck with having to deal with our moral intuitions and (b) we must find ways to co-exist, if only in order to survive, which I feel is probably the main driver for what we humans call (rationalise as being) morality, even though the universe is amoral.

As to AM's approach, I agree it has its merits, obviously. But I am a bit skeptical about where he goes with it.

Personally, I would say that morality is neither objective nor relativist. I would say that that is a false dichotomy, and too simple to reflect the enormous complexities. Does that mean I would say that there are no objective moral facts? No, I don't think I would go as far as that. There may be, but my caveats would be that (a) there might only be a very few, in clear-cut situations (which I think are the minority) and (b) they are only objective in the sense that they are common to all (let's say) normal, properly-functioning humans (temporarily assuming we can define that) and are not objectively independent of our species the way that, for example, the laws of physics are.

For example, take Angra's favourite "it is morally wrong to torture people just for the fun of it.” All 'normal, properly-functioning' humans might agree with this, but (a) that does not make it independently true and (b) once we move away from such extreme examples, the ground starts to get situationally boggy, not least when we move on to responses (just deserts).
 
ruby sparks said:
Ok too much of that is confusingly interwoven with the mistaken idea that I am or was discussing Jarhyn's claims with you. As I said already, I am not and wasn't.
But you raise objections that do not make sense as objections to a post in which I reply to Jarhyn. Even if your claims about burden, etc., were correct in general, they would not work against my arguments in the post you were replying to, because that post was a reply to Jarhyn's theory, and it is proper to use some of the implications of the theory one is testing (in this case, Jarhyn's) in the arguments against it.


ruby sparks said:
This is not something I would have expected you to say, although it is what I would say. Are you not in fact, after all, claiming that your theory is true?
I see I haven't been clear. I was talking about first-order ethical theories that make specific predictions about moral assessments, and I mentioned them in opposition to the use of the human moral sense to make moral assessments. The reason I say none is true is that when tested using a human moral sense, they all fail.

Let me illustrate the distinction with an analogy. Imagine philosophers and/or scientists come up with different theories about color that make predictions about which objects are, say, blue. But when we look at some of the objects in question, under ordinary light conditions (i.e., objects in our vicinity, daylight, no difficulty to see, so pretty ordinary conditions), several of them do not look blue. On the basis of this, I would say all color theories are false. Someone might object and say 'But what about the color theory that says that our visual system, under ordinary conditions, is a good guide to ascertain the color of an object?' I would then say that that is not what I called a 'color theory'. But regardless of terminology, my point would be as above for moral theories.

Also, it's not my theory; it's not my invention, except for some details.
 
But you raise objections that do not make sense as objections to a post in which I reply to Jarhyn. Even if your claims about burden, etc., were correct in general, they would not work against my arguments in the post you were replying to, because that post was a reply to Jarhyn's theory, and it is proper to use some of the implications of the theory one is testing (in this case, Jarhyn's) in the arguments against it.

I don't know how I can be any clearer. Regardless of whether I came in on a conversation between you and Jarhyn, my points to you were to you and about your theory, not Jarhyn's. I am at this point discussing Jarhyn's theories with him separately.


I see I haven't been clear. I was talking about first-order ethical theories that make specific predictions about moral assessments, and I mentioned them in opposition to the use of the human moral sense to make moral assessments. The reason I say none is true is that when tested using a human moral sense, they all fail.

I don't understand that. Are you or are you not saying that yours is true?

Or are you merely saying that yours is not true, but at least accords with what you are calling 'human moral sense'? If so, good, but I would say that that is more complicated, variegated and relative than you seem to allow for. As such, it may be applicable where someone hypothetically kills purely for fun, temporarily assuming that ever happens, but beyond that I'm not so sure. Nor am I sure about the next step, deserts.

Let me illustrate the distinction with an analogy. Imagine philosophers and/or scientists come up with different theories about color that make predictions about which objects are, say, blue. But when we look at some of the objects in question, under ordinary light conditions (i.e., objects in our vicinity, daylight, no difficulty seeing, so pretty ordinary conditions), several of them do not look blue. On the basis of this, I would say all color theories are false. Someone might object and say 'But what about the color theory that says that our visual system, under ordinary conditions, is a good guide to ascertain the color of an object?' I would then say that that is not what I called a 'color theory'. But regardless of terminology, my point would be as above for moral theories.

Also, it's not my theory; it's not my invention, except for some details.

I've said many times that I do not think colour is a good comparison. There may be independent (of humans) objective facts about colour (at least in terms of wavelengths of light) but I do not think there are such facts about morals.

I have suggested (aesthetic) beauty as a comparison instead, or some other human value judgement. I think that would be much better, given that we would be dealing with human value judgements in both comparative cases.
 
ruby sparks said:
I don't know how I can be any clearer. My points to you were to you and about your theory, not Jarhyn's.
And again, in that context, they are out of place, because even if your points about what you call my theory were correct, my points against Jarhyn's would work for the reasons I've been explaining.


ruby sparks said:
I've said many times that I do not think colour is a good comparison.
I've said many times that whether something is a good comparison depends on what it is we are talking about. Color is similar to morality in some senses, not in all (otherwise it would be morality), but the comparison is relevant in this context. If you do not see that, I'm afraid I cannot go further to explain it, as I do not know how to.

I will actually address the is/ought objection in another thread, but I will use color as an example. I hope you realize why it is adequate. If you do not realize that, I'm afraid I can't do more.
 
Bomb#20 said:
I think it's entirely plausible that someone will "get past Hume" -- the "Is-ought problem" is overrated -- but AM's approach appears to me to have better prospects for success than yours.
Maybe I went too far; I'm not sure. But I'm pretty sure that if it's a fallacy, it's pretty much everywhere, and it is inevitable. I will address it in another thread in MFP, in which I expect that I will be told repeatedly that the color analogy is inadequate, and so is the science analogy, and so on. :(
 
.... even if your points about what you call my theory were correct, my points against Jarhyn's would work for the reasons I've been explaining.

That is not something I am or was concerned about. You and Jarhyn are not necessarily debating the same things as you and I.

I've said many times that whether something is a good comparison depends on what it is we are talking about. Color is similar to morality in some senses, not in all (otherwise it would be morality), but the comparison is relevant in this context. If you do not see that, I'm afraid I cannot go further to explain it, as I do not know how to.

I will actually address the is/ought objection in another thread, but I will use color as an example. I hope you realize why it is adequate. If you do not realize that, I'm afraid I can't do more.

I definitely think you should do a value judgement, such as beauty, for reasons given. No, I do not accept that your comparison with colour is the better one. Go ahead and use whatever comparison you wish, but I think it's flawed, and I believe it contains an underlying presumption that morality has real, objective, independent properties, as colour (in terms of wavelengths of light) has, because that is precisely the key point of comparison you make. In other words, it's a conveniently pre-loaded comparison you're using, which I think is very iffy indeed. In any case, comparisons only work to a finite extent. It may be that morality is in some key ways different from either beauty or colour or whatever.

And even if you or anyone did manage to establish let's say at least one 'moral fact' (for humans), it will come with all the caveats I previously gave, that it is not truly independent or objective, that it can't necessarily be extrapolated to deal with other less clear situations and that it doesn't sort out the issue of deserts. As such, it is of very limited value and may even be something akin to a little nugget of philosophical fool's gold, depending on how you try to spend it in the real world.

To use your example of gustatory taste (which I agree was a better comparison than colour): yes, you may establish that almost every 'normal, properly functioning' human will agree that something tastes disgusting, but you will never establish whether something else, which some like and others don't, has a similarly correct answer. And I would again remind you that rotting shark is considered a delicacy in Iceland in any case. :)
 

I definitely think you should do a value judgement, such as beauty, for reasons given. No, I do not accept that your comparison with colour is the better one. Go ahead and use whatever comparison you wish, but I think it's flawed.

I already explained in the other thread why the reasons are not adequate. In this particular case, I was using color only to explain to you what I meant by a 'moral theory', which is analogous to what I would mean in that context by 'color theory'. So, the reasons you give for thinking the analogy is not adequate clearly fail (and if you do not see that, there is nothing I can do).

In other contexts, I use it for different purposes. And I reject the reasons you give for reasons I gave in our previous exchanges.

Btw, here is the new thread on the is/ought issue. I hope you realize why the analogies with color and science are adequate. https://talkfreethought.org/showthread.php?22197-The-is-ought-issue
However, previous experience suggests you will not, unfortunately.
 
The social consensus is impressed by Pascal's Wager. It judges that the risk of Gary going to Hell outweighs the infinitesimal probability that being taught Christianity will make him more dangerous than he is as a Jew. Actually, they figure that the case for Christianity is so strong that he must not have read the pamphlets they gave him.
Which assumes things not in evidence. Prove hell. They both have EQUALLY proven beliefs, proven on equal amounts of evidence. Pascal's Wager has already fallen to trivial logical argument, and in fact I often make an inverse wager: that if there is a god, that God prefers the atheist. It goes a little something like this...

"The universe as we observe it contains much evidence that it is old and is governed by unthinking mechanical relationships. Much of it is indeterminate, and much of it is simply absurd. Nobody who has ever claimed to talk to god can claim a more 'divine' experience than someone who has merely taken a bunch of LSD. One thing that is trivially certain, however, is that IF there is a creator god, by that assumption he DID create a universe, and this universe has a particular shape in the relationships that can emerge from its existence; the universe itself and its properties and phenomena are in fact the only thing that can ever be said to be the word of God, if there is such a thing. The evidence remains that God has been silent since the first word.

Thus, that God clearly would prefer the person who reads this direct word and figures out from it what relationships and phenomena exist in it through testing and modeling and honest doubt in the flawed words of mere men. They will test, and doubt, and work it out. And they will not see 'a God' because no god is necessary based on what science, the act of understanding and reading the universe, has observed.

Thus God prefers the atheist, and particularly the atheist who derives their ethics from the shape of the universe rather than the assumptions made by men, even in the face of the absurdity of their own feelings.

This is in fact confirmed by the fact that atheistic or agnostic science yields better results for survival and understanding; the universe is shaped in such a way as to guarantee this. There is no evidence of an afterlife. There is only evidence of this life. Thus the evidence indicates that it is our obligation, for our own sake, to make life in the one life we have been given evidence of as good as we find ourselves capable of making it."

I will get back to this Pascal's Wager abuse of the social contract in a bit, from different directions.

"King Charles I may do whatever he pleases; everyone else has an ethical duty to obey King Charles I in all things."
King Charles is a person, in the same fashion as all of "everyone else".

The contradiction exists quite plainly in that special pleading: King Charles' rights conflict with everyone else's rights; person X does not have equal moral value to person Y for all X and Y.

I am under no obligation to accept your axiom, at any rate. I am obligated to accept only the axioms I cannot deny, together with those I cannot deny without contradicting an axiom no one else can deny. First: I claim my authority to act on the basis of my own existence (that is, I ultimately have autonomy). Second: if I claim this autonomy, it is equal in value to the autonomy of all others who claim it. Third: there are no real contradictions in nature.

So, if our autonomies have equal value (axiom 2) and your goal requires a greater value to your autonomy than mine, you have already invoked a contradiction. This is the ethical disproof of justification, the point at which an ought becomes qualified as "not ethically justified".
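The symmetry test being invoked here can be sketched in code. The following toy Python snippet is my own illustration, not anything formalized in this thread; the function and agent names are hypothetical. It treats special pleading as an asymmetry in how agents are required to value one another's autonomy:

```python
def special_pleading(value) -> bool:
    """value(a, b) is the weight agent a must give to agent b's autonomy.

    A goal structure involves special pleading exactly when these weights
    are asymmetric: some pair of agents is not valued equally both ways.
    """
    agents = ["X", "Y"]
    return any(value(a, b) != value(b, a) for a in agents for b in agents)

# King Charles case: everyone owes full deference to "X" (Charles),
# while "X" owes none in return -- an asymmetric goal structure.
charles = lambda a, b: 1.0 if b == "X" else 0.0
print(special_pleading(charles))  # True

# Metagoal-compatible case: every autonomy carries equal weight.
equal = lambda a, b: 1.0
print(special_pleading(equal))  # False
```

On this toy model, the "King Charles" axiom fails the test while any goal in the metagoal (equal-autonomy) set passes it, which is the contradiction claimed above.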

Instrumental and moral oughts appear prima facie to differ, in that "But I don't want to reach the other side of the wall" is generally perceived to be a good reason for not doing the thing one supposedly ought to do.

"When my goal is to get to the other side of the wall in the example but my goal isn't to get to the other side of the wall in the example..."
You are invoking a contradiction against the initial conditions of the example. The point is that the best strategy is contextual to the goal. You are moving the goalposts, quite literally, in asserting a different goal than the one our hypothetical actor had.

You frequently assert that the metagoal is a specific goal. It is not.

The metagoal represents a SET of goals, namely ALL goals for which value of autonomy of X is accepted as equal to the value of autonomy of Y.

I am using a single example where a goal is assumed to derive a strategy, so that later, when I derive a strategy that describes the metagoal, I can demonstrate an instrumental ought that is universally morally justified without engaging in special pleading.

Regardless of what you think you know of social contracts, this produces two issues that must be resolved when settling on strategy, something that comes AFTER and SUBORDINATE to the aforementioned principles. This is where game theory enters: the strategy must address zero-sum games and probabilistic outcomes, thus creating two "social contract" functions: the contract which decides what probabilities of risk we accept others subjecting us to (lest we be paralyzed by infinitesimal probabilities of harm), and the resolution of limited resources in an equitable way. Both are well within the purview of game theory. Note that while the first opens us up to harm from the actions of others, it increases rather than limits our freedom, in inverse relation to the probability of harm those actions may create.
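To make those two "social contract" functions concrete, here is a minimal Python sketch. It is my own toy model, with assumed value units and an assumed threshold rule, not something specified in this thread: one function decides whether a probability of harm is acceptable given the freedom the action creates, and the other divides a contested resource among equally valued autonomies.

```python
def acceptable_risk(freedom_gained: float, p_harm: float, harm_cost: float) -> bool:
    """Accept an action iff the freedom it creates outweighs its expected harm.

    All quantities are in the same arbitrary 'value' units; the threshold
    rule (expected harm < freedom gained) is an assumption for illustration.
    """
    return p_harm * harm_cost < freedom_gained

def equitable_split(resource: float, claimants: int) -> list[float]:
    """Resolve a zero-sum resource among equally valued autonomies.

    With no agent's claim privileged over another's, the symmetric (equal)
    split is the simplest division that avoids special pleading.
    """
    return [resource / claimants] * claimants

# A tiny risk creating large freedom is accepted; a coin-flip chance of
# serious harm for negligible freedom is not.
print(acceptable_risk(1.0, 0.001, 10.0))   # True
print(acceptable_risk(0.001, 0.5, 10.0))   # False
print(equitable_split(9.0, 3))             # [3.0, 3.0, 3.0]
```

The threshold rule and the equal split are just the simplest symmetric choices; a fuller treatment would bring in the zero-sum and probabilistic game theory mentioned above.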

Now, let me get back to your Pascal's Wager bullshit: first, Gary not going to church does not in any way generate risk for others. It creates exactly the outcome he has consented to on the basis of his own goals: it does not assume his justification based on his existence is superior to the justification of actions others have based on their existences. He has consented to hell if he is wrong, AS IS HIS RIGHT, just as the Christians consent to hell if Gary happens to be right. Because Gary does not risk THEIR souls even in going to hell, he has a right to do so on the basis of his personal goals (which include not wasting his Sunday hearing someone blather on about false bullshit). There are some things the principles of ethics I have laid down do not allow a vote on, namely whether a person's rights are superior to another's. A vote is allowed only on what risk one may subject another to, and that risk is purely measured in terms of the impacts on another person's goals, which can even include "going to hell, if I am wrong". And of course in this situation even God himself is measured against ethics. We could have a merry conversation in which you would probably agree with me that the very idea of hell is unethical, at least within the context of the neo-Lamarckian social-technological strategic context.

We can see imperfect reflections of these limitations on the social contract, in its position subordinate to noncontradiction, in the existence of a bill of rights that limits the social contract, and in its general acceptance in the population (most people are mostly right most of the time). This is roughly how things are already accepted to work. All I am doing is attempting to bring the understanding of why into sharper focus.
 