
Jokes about prison rape on men? Not a fan.

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, and that our theories of ethics are attempts to approximate it; they are, in fact, mere approximations.

You don't sound that different to Angra at this point. :)

I really do think that there is a game theoretic approach possible to ethical philosophy, to make it strategic.

Let's look at Tic Tac Toe. There are things that "are": "marks are owned by players", "marks are placed in alternating sequence", "marks are placed on a three by three grid", "marks once placed are set". There is a GOAL, "place three marks in a line", and a secondary goal, "prevent three marks that are not your own from being placed in a line". From these 'is' things, one can derive an OUGHT wherein every single action made by the player is predetermined. It creates a strategy, and players who use that strategy will invariably meet the inferior goal, and if their opponent makes any mistake at all, they will achieve the superior goal. This creates an ought: IF your goal is to win (and not lose), you OUGHT to apply that strategy as perfectly as possible.
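The is/ought derivation described above can be made concrete. Here is a minimal sketch (my own illustration, not anything from the poster): a plain minimax search over Tic Tac Toe's rules (the "is") that returns the game's value under the stated goal (the "ought"). Under perfect play by both sides the empty board is worth a draw, which is the "inferior goal" of never losing:

```python
# Illustrative sketch: derive the optimal ("ought") value of a Tic Tac Toe
# position purely from the game's rules ("is") plus the stated goal.
# Board: a list of 9 cells, ' ' for empty, 'X' or 'O' for placed marks.

def winner(board):
    """Return 'X' or 'O' if that player has three in a line, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value from X's perspective: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0
    values = []
    for i, cell in enumerate(board):
        if cell == ' ':
            board[i] = player                       # try the move
            values.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = ' '                          # undo it
    return max(values) if player == 'X' else min(values)
```

With both players following the strategy this search implies, the empty board evaluates to 0: neither side can do better than a draw, exactly the "invariably meet the inferior goal" claim.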

Of course, I expect such an axiom to be controversial. I expect people to not want it, to reject it. I transposed the argument to a simple game to illustrate the point in a simple rather than hellishly complicated context such as ethics, though it was originally by thinking about Tic Tac Toe that I actually came to understand the mechanism by which goals derive oughts from "is".

Other games have different rules. Sometimes those rules imply that there can be no strategy: that there is no way to achieve any particular goal and the results of the game are random (like the card game WAR).

So to me this says that some of the fundamental elements of moral philosophy have to be approached from the examination of goals... hence my metagoal. Because it can't just be about what I want, if I want a general strategy.

Sure. 'Goals' and 'strategies' are not arbitrary (evolution and natural selection ensure it) and game theory is a useful citation. But as I see it, the idea that goals or strategies are morally right or wrong in any factual, independent, universal or realist way is.....misguided.

Does that make me a moral relativist? Or a Consequentialist? Or a Utilitarian? Or something else? To be honest, I don't know. First, I never seem to feel I fit any particular label or ism, and second, I change my mind a lot. I think people have argued about morality since the start of recorded history and I think they probably will until the end of it and I'm not sure it will ever be philosophically resolved.

One thing to note: games (in game theory) that feature more cooperation than competition seem to work best, I believe. Does this suggest that retribution is not the best strategy? If so, where does that leave retributivism?
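The cooperation point can be illustrated with a toy iterated prisoner's dilemma in the style of Axelrod's tournaments. This is only a sketch with the standard 3/5/1/0 payoffs, and the strategies are the textbook ones, not anything proposed in the thread. Over repeated rounds, two conditional cooperators each outscore a pair of unconditional defectors:

```python
# Toy iterated prisoner's dilemma with the classic Axelrod payoffs.
# (my move, their move) -> (my points, their points)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    """Cooperate first, then mirror the opponent's last move."""
    return their_hist[-1] if their_hist else 'C'

def always_defect(my_hist, their_hist):
    """Pure retribution-free defection: never cooperate."""
    return 'D'

def play(s1, s2, rounds=100):
    """Play two strategies against each other; return their total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
        score1 += p1
        score2 += p2
    return score1, score2
```

Two tit-for-tat players score 300 each over 100 rounds, while two always-defectors score only 100 each: mutual cooperation dominates mutual punishment, which is at least suggestive for the retributivism question.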
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, and that our theories of ethics are attempts to approximate it; they are, in fact, mere approximations.

You don't sound that different to Angra at this point. :)

I really do think that there is a game theoretic approach possible to ethical philosophy, to make it strategic.

Let's look at Tic Tac Toe. There are things that "are": "marks are owned by players", "marks are placed in alternating sequence", "marks are placed on a three by three grid", "marks once placed are set". There is a GOAL, "place three marks in a line", and a secondary goal, "prevent three marks that are not your own from being placed in a line". From these 'is' things, one can derive an OUGHT wherein every single action made by the player is predetermined. It creates a strategy, and players who use that strategy will invariably meet the inferior goal, and if their opponent makes any mistake at all, they will achieve the superior goal. This creates an ought: IF your goal is to win (and not lose), you OUGHT to apply that strategy as perfectly as possible.

Of course, I expect such an axiom to be controversial. I expect people to not want it, to reject it. I transposed the argument to a simple game to illustrate the point in a simple rather than hellishly complicated context such as ethics, though it was originally by thinking about Tic Tac Toe that I actually came to understand the mechanism by which goals derive oughts from "is".

Other games have different rules. Sometimes those rules imply that there can be no strategy: that there is no way to achieve any particular goal and the results of the game are random (like the card game WAR).

So to me this says that some of the fundamental elements of moral philosophy have to be approached from the examination of goals... hence my metagoal. Because it can't just be about what I want, if I want a general strategy.

Sure. 'Goals' and 'strategies' are not arbitrary, and game theory is a useful citation. But as I see it, the idea that goals or strategies are morally right or wrong in any factual, independent, universal or realist way is.....misguided.

Does that make me a moral relativist? Or a Consequentialist? Or a Utilitarian? Or something else? To be honest, I don't know. First, I never seem to feel I fit any particular label or ism, and second, I change my mind a lot. I think people have argued about morality since the start of recorded history and I think they probably will until the end of it and I'm not sure it will ever be philosophically resolved.

One thing to note: games (in game theory) that feature more cooperation than competition seem to work best, I believe. Does this suggest that retribution is not the best strategy? If so, where does that leave retributivism?

Eh, I get that it sounds a lot like AM sometimes, but the fact is, ethics didn't come from nowhere. As to what the underlying "unifying" ethical model might be, I admit right at the get-go that I am probably wrong about at least some elements of what that is. I do not exclude myself from my doubt and there are a lot of things I haven't been able to work out.

I don't like to think about things in terms of specific goals. I fully admit that "wants to kill someone" is not even necessarily an "immoral" goal. Context makes a lot of difference insofar as there are ethical strategies and unethical strategies to fulfill the goal. A clearly unethical strategy for that particular goal would be to go out on the street with a gun, say "I'm going to kill someone", and shoot the first person who runs away screaming. An ethical fulfillment would be to put out an ad for "euthanasia services", wait for an applicant, interview them about their motivations and reasons, get a second opinion from a psychologist and doctor, urge them to seek counseling if it is determined that there may be a different resolution, and only after their resolve has been made apparent, and it is clear that they are acting of sound mind, assist in their death.

Strategies can, in this context, clearly be ethical or not ethical.

I tend to reject labels of extant moral models because I don't see myself as a consumer of "pop ethics". If I was hard pressed to invent a name, it would be something like "integral ethics", "physical ethics", or even "computational ethics".

As to games, I was thinking about my own private motivations in terms of how I derive ethics; it is from the perspective of someone who utterly sucks at PVP. I abhor conflict where I can avoid it, and think that while some competition is necessary to isolate "which is more functional" between equal alternatives, it is ultimately injurious to self and others to PVP. I see us as in this together: the more teammates I have attacking an uncaring universe that shits us out into it ignorant and undefended, the better, rather than weakening "us". Competition is for testing and self-improvement, rather than isolation and group selection.
 

J842P

Veteran Member
Joined
Jan 30, 2006
Messages
4,137
Location
USA, California
Basic Beliefs
godless heathen
You don't sound that different to Angra at this point. :)



Sure. 'Goals' and 'strategies' are not arbitrary, and game theory is a useful citation. But as I see it, the idea that goals or strategies are morally right or wrong in any factual, independent, universal or realist way is.....misguided.

Does that make me a moral relativist? Or a Consequentialist? Or a Utilitarian? Or something else? To be honest, I don't know. First, I never seem to feel I fit any particular label or ism, and second, I change my mind a lot. I think people have argued about morality since the start of recorded history and I think they probably will until the end of it and I'm not sure it will ever be philosophically resolved.

One thing to note: games (in game theory) that feature more cooperation than competition seem to work best, I believe. Does this suggest that retribution is not the best strategy? If so, where does that leave retributivism?

Eh, I get that it sounds a lot like AM sometimes, but the fact is, ethics didn't come from nowhere. As to what the underlying "unifying" ethical model might be, I admit right at the get-go that I am probably wrong about at least some elements of what that is. I do not exclude myself from my doubt and there are a lot of things I haven't been able to work out.

I don't like to think about things in terms of specific goals. I fully admit that "wants to kill someone" is not even necessarily an "immoral" goal. Context makes a lot of difference insofar as there are ethical strategies and unethical strategies to fulfill the goal. A clearly unethical strategy for that particular goal would be to go out on the street with a gun, say "I'm going to kill someone", and shoot the first person who runs away screaming. An ethical fulfillment would be to put out an ad for "euthanasia services", wait for an applicant, interview them about their motivations and reasons, get a second opinion from a psychologist and doctor, urge them to seek counseling if it is determined that there may be a different resolution, and only after their resolve has been made apparent, and it is clear that they are acting of sound mind, assist in their death.

Strategies can, in this context, clearly be ethical or not ethical.

I tend to reject labels of extant moral models because I don't see myself as a consumer of "pop ethics". If I was hard pressed to invent a name, it would be something like "integral ethics", "physical ethics", or even "computational ethics".

As to games, I was thinking about my own private motivations in terms of how I derive ethics; it is from the perspective of someone who utterly sucks at PVP. I abhor conflict where I can avoid it, and think that while some competition is necessary to isolate "which is more functional" between equal alternatives, it is ultimately injurious to self and others to PVP. I see us as in this together: the more teammates I have attacking an uncaring universe that shits us out into it ignorant and undefended, the better, rather than weakening "us". Competition is for testing and self-improvement, rather than isolation and group selection.

Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".


But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

One of the more insightful researchers into moral psychology is Jonathan Haidt.


I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
You don't sound that different to Angra at this point. :)



Sure. 'Goals' and 'strategies' are not arbitrary, and game theory is a useful citation. But as I see it, the idea that goals or strategies are morally right or wrong in any factual, independent, universal or realist way is.....misguided.

Does that make me a moral relativist? Or a Consequentialist? Or a Utilitarian? Or something else? To be honest, I don't know. First, I never seem to feel I fit any particular label or ism, and second, I change my mind a lot. I think people have argued about morality since the start of recorded history and I think they probably will until the end of it and I'm not sure it will ever be philosophically resolved.

One thing to note: games (in game theory) that feature more cooperation than competition seem to work best, I believe. Does this suggest that retribution is not the best strategy? If so, where does that leave retributivism?

Eh, I get that it sounds a lot like AM sometimes, but the fact is, ethics didn't come from nowhere. As to what the underlying "unifying" ethical model might be, I admit right at the get-go that I am probably wrong about at least some elements of what that is. I do not exclude myself from my doubt and there are a lot of things I haven't been able to work out.

I don't like to think about things in terms of specific goals. I fully admit that "wants to kill someone" is not even necessarily an "immoral" goal. Context makes a lot of difference insofar as there are ethical strategies and unethical strategies to fulfill the goal. A clearly unethical strategy for that particular goal would be to go out on the street with a gun, say "I'm going to kill someone", and shoot the first person who runs away screaming. An ethical fulfillment would be to put out an ad for "euthanasia services", wait for an applicant, interview them about their motivations and reasons, get a second opinion from a psychologist and doctor, urge them to seek counseling if it is determined that there may be a different resolution, and only after their resolve has been made apparent, and it is clear that they are acting of sound mind, assist in their death.

Strategies can, in this context, clearly be ethical or not ethical.

I tend to reject labels of extant moral models because I don't see myself as a consumer of "pop ethics". If I was hard pressed to invent a name, it would be something like "integral ethics", "physical ethics", or even "computational ethics".

As to games, I was thinking about my own private motivations in terms of how I derive ethics; it is from the perspective of someone who utterly sucks at PVP. I abhor conflict where I can avoid it, and think that while some competition is necessary to isolate "which is more functional" between equal alternatives, it is ultimately injurious to self and others to PVP. I see us as in this together: the more teammates I have attacking an uncaring universe that shits us out into it ignorant and undefended, the better, rather than weakening "us". Competition is for testing and self-improvement, rather than isolation and group selection.

Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".


But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

One of the more insightful researchers into moral psychology is Jonathan Haidt.


I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.

"They are evolved behaviors"... No, ethics are not evolved behaviors. Morals, those are arguably evolved. But not ethics. You are begging a question, "from whence comes the evolutionary convergence upon ethics that is 'morality'?"

From whence come virtues? Why are they "virtuous"? Socrates asked these questions and I have yet to get an answer that is not purely axiomatic and subjective per goal. You add as many axioms as you add virtues.

I bring exactly one axiom to my framework and it is one I can fairly well believe we can agree on: that a goal in the context of a system generates an ought in the context of that goal in that system.
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".


But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

One of the more insightful researchers into moral psychology is Jonathan Haidt.


I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.

"They are evolved behaviors"... No, ethics are not evolved behaviors. Morals, those are arguably evolved. But not ethics. You are begging a question, "from whence comes the evolutionary convergence upon ethics that is 'morality'?"

From whence come virtues? Why are they "virtuous"? Socrates asked these questions and I have yet to get an answer that is not purely axiomatic and subjective per goal. You add as many axioms as you add virtues.

I bring exactly one axiom to my framework and it is one I can fairly well believe we can agree on: that a goal in the context of a system generates an ought in the context of that goal in that system.

My 1st reaction was to pretty much agree with what both you and J842P said about evolved behaviours. I don't think I am understanding the distinction you are making between morals and ethics on that though. Aren't they both, in the end, similarly evolved?
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
As to games, I was thinking about my own private motivations in terms of how I derive ethics; it is from the perspective of someone who utterly sucks at PVP. I abhor conflict where I can avoid it, and think that while some competition is necessary to isolate "which is more functional" between equal alternatives, it is ultimately injurious to self and others to PVP. I see us as in this together: the more teammates I have attacking an uncaring universe that shits us out into it ignorant and undefended, the better, rather than weakening "us". Competition is for testing and self-improvement, rather than isolation and group selection.

Here, I think, you are acknowledging at least a certain degree of moral relativism and subjectivism. Which I think is useful. I sometimes think it's a false dichotomy to choose between whether moral relativists/subjectivists or their opponents (be they moral universalists or moral realists or whatever) are correct. To me it seems there is a degree of relativity/subjectivity and a degree of universality (perhaps better to say generality) and realism involved, depending on what types, instances or examples of human behaviour we discuss in whatever context. And I think this is the sort of complexity we would expect, if natural selection is the driver. The world (the 'system' in which things operate) is very complex and dynamic. Humans are very complex and capricious actors in that world. No two humans and no two contexts are the same. Etc. Even if humans do not have free will (which in the final analysis I tend to think they don't), their capacity for a range of varied and complicated responses (to varied and complicated situations) is wide, imo.

This does not mean we can't strive to construct ethical frameworks, obviously.

ETA: good example there about the 'wanting to kill' goal.
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

I tend to agree, very much.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

Unfortunately, I am not that familiar with the varieties of virtue ethics that are out there.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".

That's certainly an interesting idea, and not one I've heard before. I'm puzzling over it. :)

But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

Yes, I think so. Which is why I might say that such things, and maybe virtues, can be reduced to what you cited at the start, namely, evolved behaviours. I think that's where I'd plant my flag. I wouldn't say it was moral bedrock, but I would say that it is the bedrock of what we label (or as you put it rationalise as being, or consider) moral, which of course is a slightly different thing to saying it is actually moral. And of course, as we all now know from plate tectonics, bedrock is not fixed in place, even if it only moves very very slowly.

One of the more insightful researchers into moral psychology is Jonathan Haidt.

I am not that familiar, but from what I have seen and heard, yes, I'd agree.

I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.

Yes. I think there are such limits.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".


But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

One of the more insightful researchers into moral psychology is Jonathan Haidt.


I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.

"They are evolved behaviors"... No, ethics are not evolved behaviors. Morals, those are arguably evolved. But not ethics. You are begging a question, "from whence comes the evolutionary convergence upon ethics that is 'morality'?"

From whence come virtues? Why are they "virtuous"? Socrates asked these questions and I have yet to get an answer that is not purely axiomatic and subjective per goal. You add as many axioms as you add virtues.

I bring exactly one axiom to my framework and it is one I can fairly well believe we can agree on: that a goal in the context of a system generates an ought in the context of that goal in that system.

My 1st reaction was to pretty much agree with what both you and J842P said about evolved behaviours. I don't think I am understanding the distinction you are making between morals and ethics on that though. Aren't they both, in the end, similarly evolved?

So, here is an analogy of the distinction which I hope works well for you: if I throw a ball into the water at the beach, a dog who is playing fetch will run along the beach for a while towards the water line (it is most consistent at a lake rather than an ocean beach). When they hit the water line they will turn into the water, swim, and get the ball. When measured, the angles at which they travel turn out to trace the shortest-time path as a function of their running and swimming speeds. This is a calculus problem. But the dog is not "doing" calculus. Rather, the dog has a personal, biological approximation of the calculus.

Morals are the dog's evolved machinery. Ethics are actual calculus.

Is calculus evolved? No. It is a product of a system of axioms, and is the same on Earth as it is on Mars, as it is at Proxima Centauri. There is a real phenomenon, the relationship of rates, that can be described perfectly with a mathematical model.

To me, this is the difference between morals and ethics. There is something real there. I am not "into" watching the dog run along the beach. I am not interested in the animal and the implementation of a "moral machine"; I am interested in the ethical physics. Once you understand the physical ethics, you can ask "what tools can I use to make moral machinery more ethical?"
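The dog-fetch problem is concrete enough to compute. A rough sketch with assumed numbers (distances and speeds are mine, not from the post): total time is running distance over running speed plus swimming distance over swimming speed, and since that function is unimodal in the entry point, a ternary search finds where its derivative crosses zero:

```python
import math

def travel_time(x, d, w, v_run, v_swim):
    """Dog runs x metres along the shoreline, then swims to the ball.

    The ball is d metres along the beach and w metres out in the water.
    """
    return x / v_run + math.hypot(d - x, w) / v_swim

def best_entry(d, w, v_run, v_swim, tol=1e-9):
    """Ternary search for the entry point minimizing travel_time.

    Valid because travel_time is unimodal in x on [0, d].
    """
    lo, hi = 0.0, d
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if travel_time(m1, d, w, v_run, v_swim) < travel_time(m2, d, w, v_run, v_swim):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2
```

At the optimum the numerical answer matches the calculus one: setting the derivative of the time function to zero gives a Snell's-law-like condition, (d - x) / hypot(d - x, w) = v_swim / v_run, which is presumably what the dog's "biological approximation" is tracking.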
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
My 1st reaction was to pretty much agree with what both you and J842P said about evolved behaviours. I don't think I am understanding the distinction you are making between morals and ethics on that though. Aren't they both, in the end, similarly evolved?

So, here is an analogy of the distinction which I hope works well for you: if I throw a ball into the water at the beach, a dog who is playing fetch will run along the beach for a while towards the water line (it is most consistent at a lake rather than an ocean beach). When they hit the water line they will turn into the water, swim, and get the ball. When measured, the angles at which they travel turn out to trace the shortest-time path as a function of their running and swimming speeds. This is a calculus problem. But the dog is not "doing" calculus. Rather, the dog has a personal, biological approximation of the calculus.

Morals are the dog's evolved machinery. Ethics are actual calculus.

Is calculus evolved? No. It is a product of a system of axioms, and is the same on Earth as it is on Mars, as it is at Proxima Centauri. There is a real phenomenon, the relationship of rates, that can be described perfectly with a mathematical model.

To me, this is the difference between morals and ethics. There is something real there. I am not "into" watching the dog run along the beach. I am not interested in the animal and the implementation of a "moral machine"; I am interested in the ethical physics. Once you understand the physical ethics, you can ask "what tools can I use to make moral machinery more ethical?"

Well, I'm not against or averse to your approach, but to me, morals and ethics are as evolved as each other. They can still be defined slightly differently of course, perhaps in the way you suggest, but I don't think this makes one evolved behaviour and the other not.

As to your approach generally, could I call it Goalism? If so (or even if not) I think Angra is right to say that it gets hit by (what he says is not) the naturalistic fallacy or the getting an ought from an is, or something very similar to those. Perhaps one key difference is that, as I read it, you are saying this is your preferred model or one that you think works well, or would work well, while he is saying that (a) his involves independent, real, universal moral facts and (b) this allows him to know what is really, actually right and wrong.

Now, there may be strong Moral Realism and there may be weak Moral Realism, but I think when Moral Realism gets diluted, it starts to look a bit like (incorporates certain aspects of) Moral Relativism, and vice versa. :)

So I tend to suspect morality is mostly or generally an admixture. There may be exceptions. A bit vague, I know, but I'd prefer complicated uncertainty over certainty about such things I think. At least in discussions. When something happens in the real world, decisions may have to be made, if only for pragmatic reasons.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
My 1st reaction was to pretty much agree with what both you and J842P said about evolved behaviours. I don't think I am understanding the distinction you are making between morals and ethics on that though. Aren't they both, in the end, similarly evolved?

So, here is an analogy of the distinction which I hope works well for you: if I throw a ball into the water at the beach, a dog who is playing fetch will run along the beach for a while towards the water line (it is most consistent at a lake rather than an ocean beach). When they hit the water line they will turn into the water, swim, and get the ball. When measured, the angles at which they travel turn out to trace the shortest-time path as a function of their running and swimming speeds. This is a calculus problem. But the dog is not "doing" calculus. Rather, the dog has a personal, biological approximation of the calculus.

Morals are the dog's evolved machinery. Ethics are actual calculus.

Is calculus evolved? No. It is a product of a system of axioms, and is the same on Earth as it is on Mars, as it is at Proxima Centauri. There is a real phenomenon, the relationship of rates, that can be described perfectly with a mathematical model.

To me, this is the difference between morals and ethics. There is something real there. I am not "into" watching the dog run along the beach. I am not interested in the animal and the implementation of a "moral machine"; I am interested in the ethical physics. Once you understand the physical ethics, you can ask "what tools can I use to make moral machinery more ethical?"

Well, I'm not against or averse to your approach, but to me, morals and ethics are as evolved as each other. They may still be defined slightly differently, of course, perhaps in the way you suggest, but I don't think this makes one an evolved behaviour and the other not.

My point is, I can point to the line the dog runs in the sand. I can say this running is "an evolved behavior", but evolved because of what selection pressure?

Selection pressures are not "evolved"; they are part of evolution as a systemic process. I think you are making a category error here in saying ethics are "evolved". Ethical implementations are evolved, albeit not through darwinistic mechanisms. Morality, on the other hand, IS evolved through darwinistic mechanisms. The fact that, when something travels at different speeds in different media, there is a point at which the derivative of the travel-time function crosses zero is not evolved. It is an absolute. It would be true of the inhabitants of planet Vulcan as much as it is true for us. It is as true for robots as it is for us. Ethics is the selection pressure, and morality (and ethical implementation) is the selection that is made.
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
Selection pressures are not "evolved"; they are part of evolution as a systemic process. I think you are making a category error here in saying ethics are "evolved". Ethical implementations are evolved, albeit not through darwinistic mechanisms. Morality, on the other hand, IS evolved through darwinistic mechanisms. The fact that, when something travels at different speeds in different media, there is a point at which the derivative of the travel-time function crosses zero is not evolved. It is an absolute. It would be true of the inhabitants of planet Vulcan as much as it is true for us. It is as true for robots as it is for us. Ethics is the selection pressure, and morality (and ethical implementation) is the selection that is made.

It's true that I often get confused about the difference between morality and ethics, even though in the past I have occasionally googled 'what's the difference' (between them)? And I'm sure there are differing definitions. But you seem to be talking about ethics in a way I'm not familiar with at all.

Here are two of the definitions of ethics that I found, at two fairly well-known philosophy websites:

The field of ethics (or moral philosophy) involves systematizing, defending, and recommending concepts of right and wrong behavior.
https://iep.utm.edu/ethics/

“Ethics” is sometimes taken to refer to a guide to behavior wider in scope than morality, and that an individual adopts as his or her own guide to life, as long as it is a guide that the individual views as a proper guide for others.
https://plato.stanford.edu/entries/morality-definition/

Which is roughly the way I have previously understood the distinction, that thinking something right or wrong comes first (morality) and systemizing or coding the rules about it (ethics) comes after. But you seem to have the two the other way around.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Selection pressures are not "evolved"; they are part of evolution as a systemic process. I think you are making a category error here in saying ethics are "evolved". Ethical implementations are evolved, albeit not through darwinistic mechanisms. Morality, on the other hand, IS evolved through darwinistic mechanisms. The fact that, when something travels at different speeds in different media, there is a point at which the derivative of the travel-time function crosses zero is not evolved. It is an absolute. It would be true of the inhabitants of planet Vulcan as much as it is true for us. It is as true for robots as it is for us. Ethics is the selection pressure, and morality (and ethical implementation) is the selection that is made.

It's true that I often get confused about the difference between morality and ethics, even though in the past I have occasionally googled 'what's the difference' (between them)? And I'm sure there are differing definitions. But you seem to be talking about ethics in a way I'm not familiar with at all.

Here are two of the definitions of ethics that I found, at two fairly well-known philosophy websites:

The field of ethics (or moral philosophy) involves systematizing, defending, and recommending concepts of right and wrong behavior.
https://iep.utm.edu/ethics/

“Ethics” is sometimes taken to refer to a guide to behavior wider in scope than morality, and that an individual adopts as his or her own guide to life, as long as it is a guide that the individual views as a proper guide for others.
https://plato.stanford.edu/entries/morality-definition/

Which is roughly the way I have previously understood the distinction, that thinking something right or wrong comes first (morality) and systemizing or coding the rules about it (ethics) comes after. But you seem to have the two the other way around.

I talk about them the way I do, drawing razor sharp distinctions, largely because I find that most philosophers are really muddy about it and don't ask what are even remotely the correct questions.

I first doubt my feelings. I find I have an obligation to doubt my feelings (that everyone does, in fact). The first assumption in honest discovery is "I am wrong". And that includes my moral machinery. I know what I FEEL is wrong, but is it? I know what I FEEL is right, but is it? Why? From whence comes piety?

Regardless of whatever sophistry, conflation, cart/horse, or other of what I suspect is pure bullshit in the field of philosophy, though, the real question is if you can understand what I am trying to say and why I draw these distinctions.

Regardless of what you want to call it, we need separate terms to describe the phenomena in nature, the selection pressure that makes morality emerge as an evolved machine, to understand the shape of it.

Edit: When you try to understand ethics as a deconstruction of morality rather than a driver of it, you end up in a realm of sophistry and nonsense because evolved behaviors are under no obligation to be consistent, non-contradictory or make any sense at all. They are under no obligation to accurately model the phenomena that makes a selection pressure. Evolved behaviors don't fundamentally have to make sense entirely, and it is a losing battle to try to derive a logical or consistent system from an approximation. They are merely "good enough for living long enough to pop a baby out".
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
ruby sparks said:
I hardly even know where to start with that. I would refer you back to previous times when I replied regarding those at length, both here and in previous threads, and explained why I think your arguments are flawed, particularly when taking human instincts to represent moral facts.
I explained in those threads why you were mistaken, but my arguments in this thread stand on their own. Do you have a counter argument?

ruby sparks said:
And if what I am saying hits all moralities, then in some ways, yes, that is exactly the point. Though as I said previously, it hits some harder than others, particularly the more dogmatic ones with strong claims to real, universal, independent moral facts that it is claimed are known by the person asserting them, such as yours.
No, that part is false. My moral assessments are not any more "dogmatic" than those of my opponents. In fact, your objection is that apparently one is not justified in making moral assessments on the basis of information that can be stated in non-moral terms. Well, I showed that every moral assessment made by every human who ever made one does just that (well, you might make an exception for 'Either Bob is a bad person or it is not the case that Bob is a bad person' or things like that, for which we would have to get into the issue of what it is to use some information. But that's a detail).

Let me try again:


Suppose A says B behaved immorally when he did X.
1. If A uses her own moral sense to make the assessment, then her assessment falls within the scope of your 'naturalistic fallacy' or 'is-ought' problem, because it does not logically follow from the fact that A's moral sense gives the verdict 'B's doing X was immoral', that B's doing X was immoral.
2. If A uses the moral sense of other humans, then the same holds.
3. If A derives her judgment from some moral premises P1, ..., Pn, and some other premises Q, then the question is: how does A derive P1, ..., Pn?



As there is no infinite regress in A's argumentation or thought (she is human), at some point A is basing her moral assessments on something that is not a moral premise. That falls afoul of your 'naturalistic fallacy', and taints the rest of the conclusions, as they are based on an unwarranted starting point.
The above shows that your objection to my points, if successful, works against every single moral assessment of the form 'B behaved immorally when he did X' ever made by a human being. The same goes for any moral assessment of the form 'A behaved in an unjust manner when he did X', 'B deserves to be punished for doing X', 'B does not deserve to be punished for doing X', and so on.

ruby sparks said:
I'm seeing quite a lot of your 'precision' as pedantry and sophistry that hides an underlying intransigence and presumption.
You are mistaken, for the reasons I have been explaining. But that aside, do you have a counter argument?


ruby sparks said:
And in an odd way, it reminds me in some ways at least of the ways academic and learned theologians go about their business. Endless 'precision', logic and convolution, even citing evidence, in order to arrive at conclusions already assumed. Which is why I think you should be careful about levelling such criticisms at others as if you were immune. There's none so blind as those who think only others can't see.
Do you have an argument that shows any of my alleged errors?
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
Here's even an example of a judgment that people deserve some punishment, in this case mild:

By all means tell us your opinions. Opinions can be useful and reasonable, and I broadly think yours are both. Just ease back on claiming that you know that you are talking about real, independent, universal moral facts and that if anyone disagrees with you, they are mistaken. That's essentially your underlying dogma, or perhaps your ideology, possibly even your secular religion in some ways, imo.

That was an example of a judgment that people deserve some punishment made by a person whose ideology/religion holds that people never deserve punishment for their actions. It is a fact that he said so, of course.
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
I do use it quite a lot, though it happens far more often than I use it.

Maybe it does. Sometimes it may even be you doing it.

Well, I'm not an ideologue or religious. It might happen that I'm angry sometimes (though it has never happened in this thread), but if I make an error, I correct myself by cooling off.
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
Jarhyn said:
My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, and that our theories and ethics are attempts to approximate it, being, in fact, mere approximations.
Take a look at other monkeys. They seek to punish monkeys that break the rules of monkey behavior (general species-wide rules and local rules to the extent they have them). They can see unfairness and injustice. But other animals have different rules of behavior, even smart animals. Whether one calls what other monkeys do 'morality' is a matter of classification, and whether you want to include debate in it. But it is of course a continuum in evolutionary terms. We have morality because animals with the kinds of minds we have have certain rules of behavior, which we got from evolution. More precisely, I'm talking about notions such as what is ethically permissible or impermissible. Concepts such as what is good or bad or just or unjust are not exactly the rules of behavior but are closely related to that.



Jarhyn said:
Let's look at Tic Tac Toe. There are things that "are". "Marks are owned by players", "marks are placed in alternating sequence", "marks are placed on a three by three grid", "marks once placed are set". There is a GOAL, "place three marks in a line", and a secondary goal "prevent three marks that are not your own from being placed in a line." From these 'is' things, one can derive an OUGHT wherein every single action made by the player is predetermined. It creates a strategy, and the players who use that strategy will invariably meet the inferior goal and if their opponent makes any mistake at all, they will get their superior goal. This creates an ought: IF your goal is to win (and not lose), you OUGHT apply that strategy as perfectly as possible.
Right, but that is a means-to-ends 'ought'. A moral 'ought' is not like that in that you cannot set the goal. However, arguably (though this is debatable) a moral 'ought' is a particular case of means-to-ends 'ought' in which the end is fixed, and the end is not to behave unethically. Whether that identification is correct as a matter of the meaning of the words is again debatable, but either way, it is true that 'B ought to X' in the moral sense of 'ought' is true if and only if 'It would be unethical of B not to X' is true.
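Incidentally, the quoted Tic Tac Toe point can be made mechanical: given only the rules (the "is") and the goal (win, and failing that, don't lose), a minimax search fixes the correct move in every position, and perfect play from the empty board is a draw. This is only an illustrative sketch; neither post specifies an algorithm:

```python
from functools import lru_cache

# The eight winning lines on a three-by-three grid, as board indices.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value under perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    moves = [value(board[:i] + player + board[i + 1:],
                   'O' if player == 'X' else 'X')
             for i, sq in enumerate(board) if sq == ' ']
    return max(moves) if player == 'X' else min(moves)

print(value(' ' * 9, 'X'))  # 0: with both players following the derived "ought", the game is a draw
```

The rules and the goal alone pin down the strategy; no further premise is needed to say what a player "ought" to do, in the instrumental sense, at any board state.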

And in any case, the goal is set by our own set of monkey rules. If species#384751837247 is a species of aliens from another galaxy in the observable universe with advanced ships and whatnot, they might have #384751837247-morality, but they will very probably not have morality, given that they evolved in a different environment, including a different social environment. As they had to resolve similar and in many cases almost the same problems as our ancestors, there may well be a considerable similarity between morality and #384751837247-morality, but they are still two different things.


Jarhyn said:
So to me this says that some of the fundamental elements of moral philosophy have to be approached from the examination of goals... hence my metagoal. Because it can't just be about what I want, if I want general strategy.
Right, it's not just about what you want. But it's also not about any general strategy you come up with. It's built-into the human mind.


Jarhyn said:
Eh, I get that it sounds a lot like AM sometimes, but the fact is, ethics didn't come from nowhere.
Clearly. See, we agree about that much.

Jarhyn said:
I don't like to think about things in terms of specific goals. I fully admit that "wants to kill someone" is not even necessarily an "immoral" goal. Context makes a lot of difference insofar as there are ethical strategies and unethical strategies to fulfill the goal. A clearly unethical strategy for that particular goal would be to go out on the street with a gun, say "I'm going to kill someone", and shoot the first person who runs away screaming. An ethical fulfillment would be to put out an ad for "euthanasia services", wait for an applicant, interview them about their motivations and reasons, get a second opinion from a psychologist and doctor, urge them to seek counseling if it is determined that there may be a different resolution, and only after their resolve has been made apparent and that they are acting of sound mind, assisting in their death.

I would say in the first case the goal is unethical, because it is to kill someone for fun or for some other bad reason that does not ethically justify killing. In the second case, the goal is ethically acceptable, because it is to help alleviate the suffering of people who want it (assuming those are the goals; if they are not, then my assessments would change).

But in any case, take a look at what you did. How did you know which behavior was ethically acceptable and which one was not? Well, you contemplated said behaviors, and your moral sense intuitively gave a verdict. And that's the way humans normally do it, and are justified in doing it.

Jarhyn said:
Strategies can, in this context, clearly be ethical or not ethical.
Yes, that is true. Now imagine someone said that euthanasia is always unethical. The person who says that is mistaken, are they not? Hopefully we can agree on that as well.
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
J842P said:
Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon.
Yes, pretty much.

J842P said:
I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.
I do not think that's the most common usage. Generally, 'unethical' is used to mean the same as 'immoral', or at most a subset of immoral behaviors, and 'ethics' on its own has more than one related usage. But those are details, so moving on...


J842P said:
Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".
Even if doable, the approximation would only work in terms of giving the right results in most realistic situations. But it is not what morality/ethics is about, because what matters when it comes to assessing whether a behavior is morally permissible is what is in the mind of the person who acts in one way or another, not what they do - though, of course, what they do is what we use as a means of assessing what is in their minds.

For example, imagine A and B are in the jungle. A can do whatever he wants to B. But A reckons that gaining B's trust is a better strategy because it increases the chances of being able to con B if they eventually manage to get out of the jungle (which A reckons has a probability around 0.5). So, A behaves in ways that he correctly reckons will make B trust him, in order to be able to con B and steal her money. In particular, of course, A lies to B about his intentions. Then A behaves unethically, even though he behaves in a way that makes B trust him.

Granted, it was meant to be an approximation. But I mean that even if it might work as a practical approximation in most cases (how often it fails is a difficult question I think), when one looks under the hood so to speak, things are very different.
 

Lion IRC

Veteran Member
Joined
Feb 5, 2016
Messages
4,638
Basic Beliefs
Biblical theist
I have targeted nukes at my own planet. And joked about it. If it were up to me, i would want to prevent nuclear war. But i still joked about it.

I worked a suicide hotline. I did this exactly because i wanted to stop suicides. But among the hotline workers, we joked about some of the calls.

The missile tech who volunteered on an ambulance had anecdotes that made MTs throw up. Not me, but i was working the hotline at the time. We used to just clear out the break room, swapping stories. But finding humor in a subject is a separate subject from the reality.

Given the opportunity, or authority, or sufficient tasers, i would stop any rape, even in prison, even if the victim was a rapist, a child molester, or one of the Trumps.

However, the idea of someone like Bannon, or Stone, or Trump being in prison, and facing the threat of rape amuses me. Not because of any thoughts of justice, karma, or retribution.
I like the idea of some evil bastard facing the fact that he's fallen so low, that no matter his money, his clout, his political savvy, or his friends, he's now a goldfish in a shark tank. And he done it to his own greedy self.

And that's funny.

Yeah, I just can’t get to any place close to rape as being funny. Or just or acceptable. Rape should not be part of any criminal sentence.
What if it is an ape? Apologies for that awful capture. Best I could find.

[YOUTUBE]https://www.youtube.com/watch?v=x07BKBdjIak[/YOUTUBE]

Needs a trigger warning.
 

Bomb#20

Contributor
Joined
Sep 28, 2004
Messages
6,228
Location
California
Gender
It's a free country.
Basic Beliefs
Rationalism
Look at the Norse model. Incarceration there is potentially indefinite. I don't see anyone calling Norwegians monsters. Maybe reevaluate your position.
I don't see the Norwegians actually keeping people in prison longer than they deserve. The people they've subjected to indefinite preventive detention are generally murderers and rapists. Whether Norwegians are monsters is not determined by how they choose to label their penal practices.

At any rate I answered your inane questions in my discussions targeted at RS, with respect to what is the best outcome. Perhaps you should go back and actually read some of those posts.
I actually read them; the reason you're accusing me of not actually reading them is because you have no moral compunctions about libeling your outgroup.

And no, those discussions didn't answer my inane questions -- you did not supply any reason to think your selection of ethical premises isn't based on "aroused emotional drive". Here are some selections from your posts targeted at RS:


So, there's been a lot of discussion about is/ought. There is, in my estimation one way to get there: adding goals.

If I AM on one side of a wall, AND it IS my goal to use the least energy to reach the other side, ... I ought do that thing (it is the solution to the problem).
That's an equivocation fallacy. "Ought" has two meanings -- instrumental and moral -- corresponding to what Kant called hypothetical imperatives and categorical imperatives. Adding goals gets you from an "is" to an instrumental "ought", not to a moral "ought".

If the goal is "achieving personal goals", this creates a metagoal: survive long enough, in a state capable of achieving those goals. It is a goal we must accept for everyone to the extent we accept it for ourselves.
That's a non sequitur. And it should be painfully obvious to you that it's a non sequitur since if you lock someone up indefinitely in preventive detention -- not a state capable of achieving his personal goals -- you aren't accepting the metagoal for him.

Really, you have to ask, is it a valid goal to pursue the least negative outcome for yourself that makes your own behavior non-destructive with respect to the necessary meta-goals for general goal seeking? If this is true, then it cannot possibly be true that you have a right to impose more harm than is absolutely necessary (punishment, infliction of suffering, etc), because of the requirement for non-contradiction.
There's no contradiction between those. Have you been influenced by Randroids? Those guys imagine whatever they dislike violates the non-contradiction principle.

My thought is that it is absolutely NOT ok to go ham on someone once they've been bad. It's one of the most basic tests of an ethical framework: does it permit doing unto others that which you would not have done into you?
Jesus said it; you believe it; that settles it? The Golden Rule is a rough rule of thumb that is often helpful, but it leads to absurdity in some situations and it's always ambiguous, due to the inherent ambiguity in the phrase "that which". You wouldn't want to be sued, would you? Well then you should never sue anyone. You wouldn't want to be imprisoned, would you? Well then you should never imprison anyone. Sure, you can game that problem away by redefining "that which" you do to be different from "that which" you don't want done to you; you can always claim what you don't want is to be imprisoned "unnecessarily". But the same game is available to retributionists; we don't want to be imprisoned "undeservedly".

I mean, speaking in terms of a specific goal for the derivation of general "oughts" is a losing battle. There is no specific goal. There is the possibility, though, of discussing a meta-goal to derive general oughts.

To me, that goal is "to have all that is necessary to do X" where X does not deprive anyone else of the same.
That's a special-pleading fallacy -- you sound like that philosopher who spent the first half of his book proving all moral claims are errors and the second half making moral claims. "To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.

Of course we live in a probabilistic universe, and in a universe where there are zero-sum situations, so we need to account for these two things: by having a common agreement and expectation of what risks are to be accepted, and a mechanism to determine disposition of limited resources.
But we don't have a common agreement. People disagree. People are going to disagree. Why would we agree, when we have incompatible beliefs, goals and emotional drives? You haven't even offered us a reason to agree, just a bunch of fallacies.

(And even if you could construct a common agreement that was a genuine contract -- a rule people actually agree to, as opposed to the rules social contract theorists keep agreeing to on other people's behalf -- it wouldn't deliver a rational ethical foundation. All it would do is help us feel self-righteous about acting on our aroused emotional drive to force others to keep their promises. Social contract theory is logically incapable of delivering that which it exists for the purpose of delivering.)

I can easily identify that if I wish to have my meta-goal stay as intact as possible, I must respect the meta-goals of others as much as possible. Punishment for the sake of vengeance rather than only as a last resort in behavior modification fits right into "unnecessary", almost trivially so.
Some people's meta-goal is to have justice done. If you abolish retributive punishment you are trivially not respecting those people's meta-goal as much as possible.​

That'll do as a sampling of your earlier attempts; if you think one of the arguments I skipped was better than those, feel free to point it out. Moving on...

but the gist of it is that there is a metagoal that can be defined such that "maximizing the ability to pursue the goals you wish to pursue" wherein goals that are unilaterally/mutually exclusive get rejected (ie "Gary wants to kill Bob; Bob has goals that require being alive", Gary's goal is invalidated), where a certain probability of damage at a certain extent to the metagoal is deemed acceptable through social consensus, and where the disposition of limited resources is agreed on through some mechanism of allocation.
Uh huh. So Gary wants to go to synagogue on Saturday and work on Sunday; Bob has goals that require everyone to work on Saturday and go to church on Sunday. Through social consensus it's agreed to damage Gary's metagoal of satisfying his own religious obligations, because his unilateral goal is mutually exclusive with the deemed-acceptable metagoal of the social consensus, which is to have as many people as possible be saved through knowing Jesus, so Gary's goal gets rejected.

Of course you wouldn't rule that way -- you'd no doubt say it's Bob's goal that should get rejected, and you'd no doubt have an excellent argument to that effect -- but that's immaterial, because as soon as you uttered the magical phrase "social consensus", that meant it's not up to you and your arguments to determine whose right to an unbroken nose ends at whose swinging fist.

In this way it is not about what I, personally, want. Instead it is about determining the limit of which of my wants are justifiable generally,
And you appear to be trying to establish "justifiable" on the basis of symmetry -- Gary and Bob can both be alive; they can't both be alive while the other's dead. Your "It is a goal we must accept for everyone to the extent we accept it for ourselves.", your "X does not deprive anyone else of the same" and your invocation of the Golden Rule likewise are appeals to symmetry. But you haven't shown what's good about symmetry; you're just taking that for granted. Clearly what's going on here is you picked symmetry as your ethical premise, due to an aroused emotional drive. You transparently have a symmetry boner.

Punishing is by definition harming the goals of others, as a goal in and of itself, agnostic to other effects. It is trivially evil.
Stomping the bread into the ground so you could both die is by definition harming the goals of others, as a goal in and of itself. What makes you think you aren't trivially evil? Wait, don't tell me, I know the answer to this one...

...everyone is the hero of their own story and people will jump through all kinds of hoops to prove it to themselves.
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
.... I find that most philosophers....don't ask what are even remotely the correct questions.

That's quite a claim.

I first doubt my feelings. I find I have an obligation to doubt my feelings (that everyone does, in fact). The first assumption in honest discovery is "I am wrong". And that includes my moral machinery. I know what I FEEL is wrong, but is it? I know what I FEEL is right, but is it? Why? From whence comes piety?

Regardless of whatever sophistry, conflation, cart/horse, or other of what I suspect is pure bullshit in the field of philosophy, though, the real question is if you can understand what I am trying to say and why I draw these distinctions.

I don't understand what you're trying to say.

Regardless of what you want to call it, we need separate terms to describe the phenomena in nature, the selection pressure that makes morality emerge as an evolved machine, to understand the shape of it.

Sure. We could call them natural selection pressures. Evolutionary biologists do it all the time.

Edit: When you try to understand ethics as a deconstruction of morality.....

Who tried to do that?

Wanting to hit someone and thinking it either the right thing to do or at least permissible would be about morality. Ethics would be more like the rules of boxing. The rules of boxing are not a deconstruction of wanting to hit someone or thinking it right or permissible, using any definition of deconstruction that I am familiar with.

.... rather than a driver of it, you end up in a realm of sophistry and nonsense because evolved behaviors are under no obligation to be consistent, non-contradictory or make any sense at all. They are under no obligation to accurately model the phenomena that makes a selection pressure. Evolved behaviors don't fundamentally have to make sense entirely, and it is a losing battle to try to derive a logical or consistent system from an approximation. They are merely "good enough for living long enough to pop a baby out".

I don't understand that.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
I don't see the Norwegians actually keeping people in prison longer than they deserve.
Again, begging the question that anyone "deserves" prison

You know there's a huge discussion that could be had about moral deserts and corrupt motive of both kinds.
The people they've subjected to indefinite preventive detention are generally murderers and rapists. Whether Norwegians are monsters is not determined by how they choose to label their penal practices.
Nobody has claimed here that monstrosity is a function of labeling. It is a function of whether they have a corrupt motive.
I actually read them; the reason you're accusing me of not actually reading them is because you have no moral compunctions about libeling your outgroup.
Again, clearly you did not. Or you are throwing up a massive series of straw man arguments. Just because you dislike being put on the spot for being expected to not hurt people for your own enjoyment doesn't mean I am libeling you. I suspect it just makes you feel bad. While I don't think feelings are self-justifying, often they can contain hints. This is one of the hints you should take to heart.
And no, those discussions didn't answer my inane questions -- you did not supply any reason to think your selection of ethical premises isn't based on "aroused emotional drive".
I did. You just don't pay attention, maybe because you don't like to think that someone could possibly get past Hume.

I have laid this out a few times, but I guess I'll do it again because you just really seem to like ignoring it. Pay close attention here: I have derived two classes of oughts. You can guess what those classes are pretty easily if you aren't too busy flogging your log to Hume.

First, I pointed out the class of all oughts that can be derived from is. Then I pointed out that there is a subclass, the metagoal: the class of oughts that are not unilaterally asymmetrical, and thus not contradictory against a basic moral justification when compared to someone else's.

You regularly ignore that little fact, instead straw-manning against bullshit claims like this:
If I AM on one side of a wall, AND it IS my goal to use the least energy to reach the other side, ... I ought do that thing (it is the solution to the problem).
That's an equivocation fallacy. "Ought" has two meanings -- instrumental and moral -- corresponding to what Kant called hypothetical imperatives and categorical imperatives. Adding goals gets you from an "is" to an instrumental "ought", not to a moral "ought".

Have you been influenced by Randroids? Those guys imagine whatever they dislike violates the non-contradiction principle.

...
Jesus said it; you believe it; that settles it?
...
Well then you should never sue anyone.
...
Well then you should never imprison anyone.
...

"unnecessarily" ... [=] ... "undeservedly".
Talk about fucking equivocations. First, we have a discussion that must be had about "the golden rule". You are equivocating the biblical formulation (the "positive formulation") against the negative formulation, which is "don't do unto others that which you would not have done unto you", which has in the invocation of the metagoal been further distilled to "you have no justification for doing something without symmetrical consent to others that you expect others to not do to you without symmetrical consent". If you want to talk about "useful rules of thumb", maybe we can invoke your unsupported virtues that you invent from your own feelings.

At any rate, there's a big difference in "unnecessary" depending on whether we are talking about extrinsic utility or intrinsic desert. One says "I'm going to do this because I do not deserve to be violated"; the other says "I'm going to do this thing to them because they deserve to be hurt." Good job drawing that equivocation.
I mean, speaking in terms of a specific goal for the derivation of general "oughts" is a losing battle. There is no specific goal. There is the possibility, though, of discussing a meta-goal to derive general oughts.

To me, that goal is "to have all that is necessary to do X" where X does not deprive anyone else of the same.
That's a special-pleading fallacy -- you sound like that philosopher who spent the first half of his book proving all moral claims are errors and the second half making moral claims. "To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.
That's quite a claim, in the presence of a variable. Also dead fucking wrong. Instrumental and moral oughts differ only in whether they are symmetrically non-contradictory, whether they can be invoked without creating a social contradiction.

...Social contract theory is logically incapable of delivering that which it exists for the purpose of delivering.
That's one hell of a (bullshit) claim. In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.

It is trivially easy to see that ethics are a function of society, and that ethics must be derived from where our goals conflict (or, from where they do not).

Now, whether or not the broad, complicated sophistry that is "formal social contract theory", which you might use as a straw man for my actual arguments, is in fact 'correct', I would say that it has a lot of holes. HOORAY, WE ALREADY AGREED SOCIAL CONTRACT THEORY IS BULLSHIT. In fact, if you read my posts like you claim, you would have already noticed that I pointed out the extent of its function: risk level acceptance and resource allocation for zero- or limited-sum pools.
I can easily identify that if I wish to have my meta-goal stay as intact as possible, I must respect the meta-goals of others as much as possible. Punishment for the sake of vengeance rather than only as a last resort in behavior modification fits right into "unnecessary", almost trivially so.
Some people's meta-goal is to have justice done.
No, it is not. Because that metagoal is contradictory; their goal, I guarantee you, includes not having 'justice' done to them, being free of 'justice'. Nobody wants to be punished, else it is not "punishment".
So Gary wants to go to synagogue on Saturday and work on Sunday
So, Gary's goal does not unilaterally invoke Bob
; Bob has goals that require everyone to work on Saturday and go to church on Sunday.
Bob's goal unilaterally invokes Gary. There you go: it's already not up for debate with the social consensus. If I can change the name 'Jesus' for 'Muhammed' or any other arbitrary name, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (Jesus as opposed to Muhammed, neither of which is justifiable against observable reality; come back to me when you prove Jesus and God and all that exist).

The role of social consensus here is limited to probabilistic outcomes: we accept some probability of being harmed because our actions generate a probability of harming others; the social contract only determines the probabilities and extents of harm allowed against the metagoal, in the context of the risks we generate for others. For instance, by driving I accept the risks of being harmed by other drivers, and I impose those risks on others, consenting through action. The formal social contract in this context merely formalizes the observation and makes the vote explicit.

Of course, I do invoke a second role of the social contract: it can also serve to formalize etiquettes for the disposition of limited social resources.

In this society you have invoked, you have already taken things too far by invoking an expectation of Jesus worship, as that speaks neither to what probabilities of harmful actions are allowed outside of special pleading, nor to the disposition of limited resources.
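The "unilateral invocation" test being argued here can be put as a toy check. This is my illustrative sketch, not code from the thread; representing a goal as the set of agents whose behavior it constrains is an assumption made for the example:

```python
# Toy sketch of the "unilateral invocation" test (illustrative assumption:
# a goal is modeled as the set of agents whose behavior it constrains).
# A goal is flagged as contradictory if it constrains anyone besides its holder.

def is_unilateral(holder: str, constrained: set[str]) -> bool:
    """True if the goal places demands on agents other than its holder."""
    return any(agent != holder for agent in constrained)

# Gary's goal (go to synagogue on Saturday) constrains only Gary.
print(is_unilateral("Gary", {"Gary"}))        # False: not up for debate, allowed
# Bob's goal (everyone works Saturday) constrains Gary as well.
print(is_unilateral("Bob", {"Bob", "Gary"}))  # True: disallowed as a contradiction
```

On this sketch, swapping 'Jesus' for 'Muhammed' changes nothing: the check only looks at whom the goal binds, which is the point of the name-substitution argument.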
Clearly what's going on here is you picked symmetry as your ethical premise, due to an aroused emotional drive. You transparently have a symmetry boner.
No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner. So should you. So should everyone. Anything else is, well, illogical.
Stomping the bread into the ground so you could both die is by definition harming the goals of others, as a goal in and of itself.
It's my bread, not his. His metagoal cannot be harmed, because he has no justification to take the bread. The destruction of the bread is, in fact, a product of his own unethical behavior, following what was an absolutely fair way to decide what happened to the bread. All other things being equal, if might makes right, we both die, because I will be forced to fight to the death lest I die anyway. All other things being equal, if we play any other game for the bread, it comes down to probabilities anyway. So no matter what, it comes down to probabilities.

So his goals END as soon as he loses whatever game we decide on. It is by definition not harming him, because his goals have already, by his own consent, been ended.

The only option for either of us was always "get a 50/50 chance at bread"; the cost of accepting a chance of getting that bread without mortal harm is accepting the consequences of cheating (namely mutually assured destruction); I just figure it's better to be starving to death without also being heavily injured in a fight that's likely to destroy the bread anyway.

Maybe you missed the fact that RS, in this scenario of stomping the bread, is already presumed to have lost the coin toss. If he wins, he gets bread without violence, and I starve.

At that point, who is being unethical, again? Oh yeah: the person who would create a situation where they may get bread without injury while someone else starves, but who refuses to offer a situation where the other may attain the same. If we both play by the rules, we both have a better chance at survival.
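The bread argument reduces to a small expected-value comparison. A minimal sketch follows; the 0.5 and 0.0 figures are made-up illustrative probabilities standing in for "fair coin toss" and "mutually assured destruction", not numbers from the discussion:

```python
# Toy expected-survival comparison for the bread scenario (probabilities are
# illustrative assumptions). One loaf, two starving players: a fair coin toss
# gives each a 0.5 chance to eat and survive; a fight to the death risks
# destroying the loaf and both players, so survival chance is taken as 0.

def survival_chance(strategy: str) -> float:
    """Each player's chance of surviving under a given strategy."""
    chances = {
        "coin_toss": 0.5,  # winner eats, loser starves
        "fight": 0.0,      # mutually assured destruction
    }
    return chances[strategy]

# Playing by the rules dominates fighting, for both players.
assert survival_chance("coin_toss") > survival_chance("fight")
```

Under any assignment where fighting risks destroying the bread and both players, the coin toss strictly dominates for both, which is the "better chance at survival" claim.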
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
Jarhyn said:
In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.
That is false. Purely for example, suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.


Side note: The part about the cat is not my idea. When I was in high school, some kids were bragging about how they did that to a cat and how much fun it was; I do not know whether it was true or they just made it up. But they wanted others to believe them.

Jarhyn said:
No, it is not. Because that metagoal is contradictory; their goal is, I guarantee you, including to not have 'justice' done to them, to be free of 'justice'. Nobody wants to be punished, else it is not "punishment".
a. It is not contradictory to want to have justice done to everyone, and even on oneself if one were to deserve it.

b. It is not even contradictory to be biased and want to have justice done to other people.

c. It is not contradictory to want justice done by the government on those who engage in heinous crimes, but to leave minor unethical behaviors out of it, leaving them to the punishment regularly inflicted by humans on one another by means of condemning each other's behavior, or mocking each other, etc.

d. There is no reason to even suspect that the number of people who would want to be imprisoned when they do not deserve it is greater than the number of people who would want to be imprisoned if they were to do something for which they would deserve it.


Jarhyn said:
No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner. So should you. So should everyone. Anything else is, well, illogical.
But your logic is flawed. B20's is not. Neither is mine. You are making logical errors in believing that we are making logical errors.


Bomb#20 said:
Uh huh. So Gary wants to go to synagogue on Saturday and work on Sunday; Bob has goals that require everyone to work on Saturday and go to church on Sunday. Through social consensus it's agreed to damage Gary's metagoal of satisfying his own religious obligations, because his unilateral goal is mutually exclusive with the deemed-acceptable metagoal of the social consensus, which is to have as many people as possible be saved through knowing Jesus, so Gary's goal gets rejected.
Jarhyn said:
So, Gary's goal does not unilaterally invoke bob

Jarhyn said:
Bob's goal unilaterally invokes gary. There you go, it's already not up for debate with the social consensus. If I can change the name 'jesus' for 'muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (jesus as opposed to Muhammed, neither of which is justifiable against the observable reality; come back to me when you prove jesus and God and all that exist).

Gary wants to pour gasoline on a cat and set it on fire every Saturday, because he has fun watching a fire ball run.
Bob has goals that require that everyone refrain from setting cats on fire for fun, and further require that failing that, police try to arrest people who set cats on fire for fun.

So, Bob has goals that unilaterally invoke Gary and other people. It's already not up for debate. Bob is behaving unethically. Gary is not. This is what your ethical theory predicts. Since this is false, it follows that your ethical theory makes false predictions, so it has been tested and shown to be false (it had already been shown to be false, on other grounds, but there is no harm in showing it again).
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
... suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.

If there were other people around, most of them would likely agree Joe behaves unethically. Also, if other humans either arrived, or in fact had, unbeknownst to Joe, survived the effects of the biological weapon, and it was found that Joe, who had never before harmed animals, had been very severely traumatised by either believing himself to be or being the only human left alive, they might give him compassionate therapy rather than any form of punishment.

So in the first instance, it does not seem possible to say that Joe was actually, independently, really, factually, objectively being unethical, and in the second instance retribution is not deemed the correct response. Which I suggest puts a dent in both moral realism and retributivism.
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, and that our theories and ethics are attempts to approximate it, being, in fact, mere approximations.

I just want to get back to this. Yes, there are, imo, principles in nature that caused the emergence of ethics in humans, but I would say they are not themselves ethical principles.
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
... suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.

If there were other people around, most of them would likely agree Joe behaves unethically. Also, if other humans either arrived, or in fact had, unbeknownst to Joe, survived the effects of the biological weapon, and it was found that Joe, who had never before harmed animals, had been very severely traumatised by either believing himself to be or being the only human left alive, they might give him compassionate therapy rather than any form of punishment.

So in the first instance, it does not seem possible to say that Joe was actually, independently, really, factually, objectively being unethical, and in the second instance retribution is not deemed the correct response. Which I suggest puts a dent in both moral realism and retributivism.

In the first instance, it is stipulated that Joe does it for fun. As it is stipulated that Joe has human intelligence and no further stipulation is made, that seems to suffice to make it unethical. You are just making claims that go against the ordinary human moral sense, which is a proper tool to find moral truth (if you claim otherwise, the burden is on your side, as it is on anyone claiming that our faculties are, in a specific case, misleading us; and yes, sometimes they fail, but we can only assess that by using some of our faculties, which we trust; failure is very unlikely barring specific evidence).

However, as I only need a counterexample to show that Jarhyn's claim is false, I can just modify the scenario (not needed, but why not?):

1. Suppose that all people die due to a rogue biological weapon, except for Joe, who was a serial killer. While he is happy to see all the suffering and death caused by the biological weapon, a few weeks after everyone else is dead, he is frustrated by the lack of humans to murder. So, as a substitute, Joe decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.


2. Suppose that all people die due to a rogue biological weapon, except for Joe, who has several times set cats on fire for fun. To have further fun, Joe decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.
 

ruby sparks

Contributor
Joined
Nov 24, 2017
Messages
9,167
Location
Northern Ireland
Basic Beliefs
Atheist
In the first instance, it is stipulated that Joe does it for fun.

Ok, so he enjoys it. Whatever. If so, he arguably deserves compassionate therapy, not punishment. So much for your retributivism.

As it is stipulated that Joe has human intelligence and no further stipulation is made, that seems to suffice to make it unethical.

Obviously, it seems that way to you, possibly to me, possibly to most people, maybe even all 'normal' people. But the problem is, 'seems to be unethical (to humans)' falls short of being really, actually, objectively, factually unethical. Which is where your theories eventually run into trouble.

And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated. The idea that there are correct answers to all moral issues is not demonstrated merely by picking an easy example of where almost everyone would agree.

You are just making claims that go against the ordinary human moral sense, which is a proper tool to find moral truth (if you claim otherwise, the burden is on your side, as is on anyone claiming that our faculties are, in a specific case, misleading us; and yes, sometimes they fail, but we can only assess that using also some of our faculties, which we trust; failure is very unlikely barring specific evidence).

On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.

However, as I only need a counterexample to show that Jarhyn's claim is false.....

I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn. I'm not sure how much I agree with Jarhyn either, since he seems to think nature is ethical.
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
ruby sparks said:
Ok, so he enjoys it. Whatever. If so, he arguably needs compassionate therapy, not punishment. So much for your retributivism.
First, it's not only that he enjoys it. It's that he does it for fun, so that is the reason (not just a secondary reason).
Second, he deserves punishment, regardless of what he needs - what he needs is food, water, air, etc.
Third, your claim against retributivism is just that: a claim that goes against the ordinary human moral faculty. As is the case when you question any human faculty, the burden is on you. It would be irrational to dismiss our faculties for no good reason - reasons we can only assess by means of some of our faculties, obviously.
Fourth and foremost, this misses the point entirely. The point of the example was not that he deserves punishment (though he does), but that he behaves unethically. That is sufficient to show that Jarhyn's theory is false.


ruby sparks said:
Obviously, it seems that way to you, possibly to me, possibly to most people, maybe even all 'normal' people. But the problem is, 'seems to be unethical (to humans)' falls short of being really, actually, objectively, factually unethical. Which is where your theories eventually run into trouble.
No, that is not a problem. The problem is with your qualifiers. That the ordinary human moral faculty deems a behavior unethical is in fact sufficient evidence to conclude that it is so, just as is the case with other ordinary human faculties, barring specific evidence to the contrary.
For that matter, if it seems blue to a normal human visual system under ordinary light conditions, that is pretty much sufficient evidence to reckon that it is blue. It's what rationally one should reckon, barring a lot of counter evidence. The same for the verdicts of other human faculties, in this case the moral faculty. It's you who has the burden of showing that his behavior is not unethical.

Again, your qualifiers only complicate matters. The 'really' qualifier does not seem to add anything; it's just an intensifier. As for the others, I already addressed them.


Now, if you were correct and the ordinary human moral faculty were not enough to justify our moral assessments, then nothing would be, and Jarhyn's theory would be unwarranted. The reason is that we do not have any tools for assessing whether a behavior is unethical other than the ordinary human moral faculty - our own, and that of other people - aided of course by other faculties (e.g., to make intuitive probabilistic assessments about the expected consequences of some behavior), but in the end, our moral faculty is the tool for making ethical assessments.

What about moral theories?

None is true. However, even if one were true, such theories can only be properly tested against the judgments of the human moral faculty (or against something already based on it), so even then, we would only be justified in believing them true if they pass the test when their predictions are checked against the human moral faculty.

Incidentally, something like the above holds for color too: we may have cameras and computers that can detect blue stuff, but we only have them because they have been calibrated using human color vision (or tools already based on it).


ruby sparks said:
And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated.
You seem to have lost track of the conversation. Again, I was showing that Jarhyn's theory was false. In order to test a theory, I just need to compare its predictions with some known facts. So, the extreme examples are pretty adequate.

So what, you say?

So, "'all normal, decent, intelligent humans would think this wrong'", but the theory I am debunking entails it is not morally wrong.


ruby sparks said:
On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.
No, that is not true. It would be irrational to question one of our faculties without good evidence against it - evidence which, of course, we also assess on the basis of our human faculties!

We do not do that normally. For example, we do not tell people who say a traffic light was red that they have to show that the human visual system is a proper tool for figuring out whether something is red. Sure, there are arguments for a color error theory (they fail), but the burden is on the claimants.


ruby sparks said:
In fact, demonstrating that there is even such a thing as moral truth in the first place is a burden you might want to try to lift before you even get on to the other one.
No, not at all. That some behaviors are unethical is obvious by normal human assessments. The burden is on you to show otherwise in the first place.

However, in this context you miss the point again. Jarhyn's theory entails that there is such a thing as moral truth - ethical truth, in his terminology. So, in order to argue against it, it is proper to assume there is (else the theory is false on that account alone). This does not even depend on whether it is proper to reckon in general (i.e., when not arguing against a specific theory) that there is moral truth (it is, but that is really not the point here).


ruby sparks said:
I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn.
Are you serious?
Discussing it with Jarhyn is exactly what I was doing when you jumped in: you jumped on a post in which I was replying to Jarhyn's ethical theory. You took that post out of context. Of course Jarhyn agrees that there are ethical truths (read his posts!), and his theory entails that there are (read his posts!), so it would be proper on my part to assume there is ethical truth in the context of testing his theory, even if it were not proper to reckon in general (i.e., in other contexts) that there is ethical truth.


Let me try it another way. Suppose that someone claims that God (i.e., an omnipotent, omniscient, morally perfect agent) exists. In order to argue against that claim, it is proper to assume that there is an omniscient, omnipotent agent, and then argue that it is not at all morally perfect. Now, this case is different because there is moral truth, whereas there is no good reason to suspect there is an omniscient, omnipotent agent, but it is not different in the relevant sense, namely that it is okay to use as a hypothesis one or more of the implications of the theory one is criticizing.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
Jarhyn... he seems to think nature is ethical.

This is a very inaccurate statement. I do not think nature is ethical. I think that certain contexts in nature create situations where particular strategies are optimally beneficial. Not all nature is a river, but rivers are created at the intersection of gravity, atmospheric gas, a particular range of temperature, and erodible rock.

Ethics are natural, but not all of nature is ethical.
 

Bomb#20

Contributor
Joined
Sep 28, 2004
Messages
6,228
Location
California
Gender
It's a free country.
Basic Beliefs
Rationalism
Bomb#20 said:
I don't see the Norwegians actually keeping people in prison longer than they deserve.
Again, begging the question that anyone "deserves" prison

You know there's a huge discussion that could be had about moral deserts and corrupt motive of both kinds.
Any time, sir, any time. (Though we ought to take that to M&P.)

The people they've subjected to indefinite preventive detention are generally murderers and rapists. Whether Norwegians are monsters is not determined by how they choose to label their penal practices.
Nobody has claimed here that monstrosity is a function of labeling. It is a function of whether they have a corrupt motive.
You're missing the point. You advocated 'that incarceration become indefinite and the bar for ending it being that they are considered rehabilitated rather than "suitably punished".' I argued that such a policy would be barbaric because it would allow for extreme harm to perpetrators in response to minor transgressions. You replied "I don't see anyone calling Norwegians monsters." as a counterargument. The point of my reply was not to suggest you claimed monstrosity is a function of labeling; the point of my reply was to propose an alternate explanation for your observation -- that the Norwegians may be labeling their policy with your policy's name but they are not putting it into practice -- thereby showing that your counterargument fails to imply your conclusion.

I put it to you that the reason nobody is calling Norwegians monsters is not because locking criminals up far out of proportion to what they deserve isn't monstrous, but rather because the Norwegians do not actually lock criminals up far out of proportion to what they deserve. When the Norwegians release a person convicted of a minor crime, they might well label it "We judge him to be rehabilitated" rather than labeling it "He's served the deserved sentence", but what they call it is irrelevant. People are deciding the Norwegians aren't monsters because when the criminal doesn't deserve to be in prison any more, the Norwegians set him free.

So we have two competing explanations for your observation. That means if you want your observation to qualify as supporting evidence for your favored policy not being barbaric, then you'll need to show your explanation is right and mine is wrong, because if my explanation is correct then your observation is perfectly compatible with "the bar for ending it being that they are considered rehabilitated" nonetheless being barbaric. The Norwegians are letting people go once they've been "suitably punished".

I actually read them; the reason you're accusing me of not actually reading them is because you have no moral compunctions about libeling your outgroup.
Again, clearly you did not. Or you are throwing up a massive series of straw man arguments. Just because you dislike being put on the spot for being expected to not hurt people for your own enjoyment doesn't mean I am libeling you.
No, it's the fact that you make up false damaging claims about me with reckless disregard for the truth that means you are libeling me. You have no rational basis for thinking you are an expert witness as to whether I read your posts. It takes a special level of arrogance on your part for you to imagine that your arguments are so spectacularly good that all anyone needs is to read them and he will necessarily recognize them as solving the greatest philosophical conundrums of the ages, and presumably therefore recognize you as the greatest philosopher of all time -- the man who beat Aristotle and Kant and Mill and finally figured out how to derive ethics from pure reason. I read your arguments. I was unimpressed. And you think my not being impressed proves I'm lying about reading them. Oh, for the love of god, get over yourself.

(And speaking of massive strawman arguments, you are not putting me on the spot "for being expected to not hurt people for my own enjoyment." You are putting me on the spot for favoring justice. Hurting people in order to do justice is not the same thing as hurting people for enjoyment; your willingness to equivocate on this point does not do you credit. When people hurt for the sake of enjoyment, it's okay with them if they're hurting innocent people -- a characteristic which puts them in the same camp with people who hurt for the sake of deterrence, or for the sake of rehabilitation, or for the sake of incapacitation.)

And no, those discussions didn't answer my inane questions -- you did not supply any reason to think your selection of ethical premises isn't based on "aroused emotional drive".
I did. You just don't pay attention, maybe because you don't like to think that someone could possibly get past Hume.
You should probably lay off speculating about other posters' psychology -- you stink at it. Let me remind you that you invoked Hume first; I merely repaid you in like coin. I think it's entirely plausible that someone will "get past Hume" -- the "Is-ought problem" is overrated -- but AM's approach appears to me to have better prospects for success than yours.

I have laid this out a few times, but I guess I'll do it again because you just really seem to like ignoring it. Pay close attention here: I have derived two classes of oughts. You can guess what those classes are pretty easily if you aren't too busy [vulgarity once again imputing to me your own fantasy about] Hume.

First, I pointed out the class of all oughts that can be derived from is.
By "pointed out", you appear to be referring to something you asserted. You did not supply any evidence that there were no others besides the hypothetical imperatives you exhibited.

Then I pointed out that there is a subclass, the metagoal, which is the class of oughts that are not unilaterally asymmetrical, and thus not contradictory against a basic moral justification when compared to someone else.
By "contradictory against a basic moral justification" you appear to be referring to their incompatibility with your personal favorite ethical premise. The circumstance that your preferred hypothetical imperative does not contradict some ethical assumption that you happen to like is not enough to magically transform it into a categorical imperative, and thereby "get past Hume".

You regularly ignore that little fact
I didn't ignore it. Which part of '"To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.' didn't you understand? You deciding you're more impressed by your own verbiage than by my refutation does not give you license to falsely claim I ignored you -- particularly seeing as how you quoted my response back to me, and you swore at me over it. Why don't you have any moral compunctions about just making up garbage about your opponents?

If I AM on one side of a wall, AND it IS my goal to use the least energy to reach the other side, ... I ought do that thing (it is the solution to the problem).
That's an equivocation fallacy. "Ought" has two meanings -- instrumental and moral -- corresponding to what Kant called hypothetical imperatives and categorical imperatives. Adding goals gets you from an "is" to an instrumental "ought", not to a moral "ought".

Have you been influenced by Randroids? Those guys imagine whatever they dislike violates the non-contradiction principle.
...
Jesus said it; you believe it; that settles it?
...
Well then you should never sue anyone.
...
Well then you should never imprison anyone.
...
"unnecessarily" ... [=] ... "undeservedly".
Talk about <expletive deleted> equivocations.
You just put words in my mouth. I didn't say or imply that "unnecessarily" = "undeservedly". I didn't write "[=]" or anything that meant "[=]". "[=]" is completely unreasonable as an attempt at paraphrase. You just made it up and spliced it between two words I did say. That's unethical. What makes you think a person who would do something that unethical to another poster is competent to lecture the rest of us about ethics?

First, we have a discussion that must be had about "the golden rule". You are equivocating the biblical formulation (the "positive formulation") against the negative formulation, which is "don't do unto others that which you would not have done unto you",
Sorry, my bad. Make that "Confucius said it; you believe it; that settles it?". The distinction between the positive and negative formulations is a quibble. You can't derive either form from pure logic, and both forms are vulnerable to the problems I pointed out.

which has in the invocation of the metagoal been further distilled to "you have no justification for doing something without symmetrical consent to others that you expect others to not do to you without symmetrical consent".
And you have evidence, do you, that we all consent to be locked up if others want to rehabilitate us, but we don't consent to be locked up if we deserve it?

If you want to talk about "useful rules of thumb", maybe we can invoke your unsupported virtues that you invent from your own feelings.
What's your point? Did I claim my own useful rules of thumb get past Hume? You're the one making the big claims here, so you're the one with the burden of proof.

At any rate, "unnecessary" means something quite different depending on whether we are talking about extrinsic utility or intrinsic desert. One says "I'm going to do this because I do not deserve to be violated"; the other says "I'm going to do this thing to them because they deserve to be hurt." Good job drawing that equivocation.
I made no such equivocation. I simply pointed out that the Golden Rule is inherently ambiguous: in your latest phrasing, it's the word "something" that's ambiguous. Whether what you do to others qualifies as the same thing as the "something" you don't want them to do to you depends entirely on how you choose to characterize it, and you can characterize the same act in a million different ways. Utility and desert are simply two examples of that ambiguity. I didn't claim they were equal to each other. They're two different tools that people with two different moral judgments can equally well use to shoehorn what they do into satisfying the ambiguous Golden Rule.

All that aside, you have evidence, do you, that we all consent to be locked up if others suspect we will violate them so they think it has extrinsic utility to them, but we don't consent to be locked up if we intrinsically deserve it?

I mean, speaking in terms of a specific goal for the derivation of general "oughts" is a losing battle. There is no specific goal. There is the possibility, though, of discussing a meta-goal to derive general oughts.

To me, that goal is "to have all that is necessary to do X" where X does not deprive anyone else of the same.
That's a special-pleading fallacy... "To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.
That's quite a claim, in the presence of a variable.
You seriously think you can derive philosophy from surface syntax? The goals you call "specific" to distinguish them from your "meta-goals" have variables too. "Maximize total happiness." means "If X leads to more happiness than Y, choose X." "Moderation in all things; seek the Golden mean." means "If X < Y < Z, choose Y."

Also dead <expletive deleted> wrong. Instrumental and moral oughts differ only in whether they are symmetrically non-contradictory,
Why should anyone take your word for that? Because you say it with a resonant and well modulated voice? Because you have a symmetry boner? Show your work. Instrumental and moral oughts appear prima facie to differ in that "But I don't want to reach the other side of the wall" is generally perceived to be a good reason for not doing the thing one supposedly ought.

whether they can be invoked without creating a social contradiction.
Is that the same thing as a regular contradiction, or is it something different?

...Social contract theory is logically incapable of delivering that which it exists for the purpose of delivering.
That's one hell of a (<expletive deleted>) claim.
Social contract theory was made up by Thomas Hobbes to justify his claim that we all owe absolute obedience to the King, as a rhetorical device, because the traditional justification -- the Divine Right of Kings -- had stopped impressing people. In the absence of a god to magically prevent infinite regress in justifications for ethical obligation, people were proposing all manner of alternative foundations, or becoming skeptical about ethical claims in general. Whenever somebody made a moral claim, somebody else would say "Why?", and to whatever answer was given, somebody would say "Why?" to that too, so it was getting harder and harder to make the public believe "Because the King said so" was a good reason for anything. Hobbes' solution was to short-circuit all those "Why?s" and all those conflicting theories, by answering "Because you promised to". Nearly everybody agreed that people should keep their promises. But of course, as a matter of logic, this fails. "Why should people keep their promises?" is every bit as good a question as "Why should people take orders from gods?".

In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.
AM has refuted this admirably.

In fact, if you read my posts like you claim, you would have already noticed that I pointed out the extent of its function: risk level acceptance and resource allocation for zero- or limited-sum pools.
Yes, certainly. What's your point? You said "by having a common agreement". We don't have a common agreement; and even if we did, having a common agreement wouldn't magically make the "Why should people do what they agreed to?" question go away. To suppose it would is an appeal to magic or an appeal to "aroused emotional drive". The fact that you only want to apply it to risk and resource distribution rather than to every stupid command some Stuart king issues is great -- go you! Big step in the right direction. Just like if you rely on your horoscope only for scheduling your appointments and don't base actual foreign policy on it. Doesn't change the fact that when you wrote "by having a common agreement", you doomed any remaining possibility of having your theory "get past Hume", at least as far as risk acceptance and resource allocation are concerned.

I can easily identify that if I wish to have my meta-goal stay as intact as possible, I must respect the meta-goals of others as much as possible. Punishment for the sake of vengeance rather than only as a last resort in behavior modification fits right into "unnecessary", almost trivially so.
Some people's meta-goal is to have justice done.
No, it is not. Because that metagoal is contradictory; their goal, I guarantee you, includes not having 'justice' done to them, being free of 'justice'. Nobody wants to be punished, else it is not "punishment".
And? Nobody wants to be incarcerated for the sake of behavior modification either. People change their minds when their perspective changes; and people are biased in favor of themselves. What somebody thinks satisfies his goals while he hasn't committed a crime, and once he has, are probably going to be two different things no matter what his philosophical stance is. This isn't rocket science. Your double standard is painfully obvious, probably to everyone but you. Your whole meta-goal approach to ethics was pretty thoroughly anticipated by the grand poobah of symmetry, Immanuel "Always act according to that maxim whose universality as a law you can at the same time will" Kant -- and Kant was a dedicated retributivist. Retributive penal principles are every bit as symmetrical as utility-based principles. Deal with it.

So, Gary's goal does not unilaterally invoke Bob
Bob's goal unilaterally invokes Gary. There you go, it's already not up for debate with the social consensus.
Says you. The social consensus says otherwise.

If I can change the name 'jesus' for 'muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (jesus as opposed to Muhammed, neither of which is justifiable against the observable reality;
Don't shoot the messenger. I'm doing nothing of the sort; it's the social consensus that's doing that. I'm on your side here -- Gary and you and I and the rabbi will all vote for Gary's right not to go to church; the motion is carried, 996 to 4.

come back to me when you prove jesus and God and all that exist).
The social consensus already voted that Brother Justin the preacher proved it beyond reasonable doubt.

The role of social consensus here is limited to probabilistic outcomes:
The social consensus is impressed by Pascal's Wager. It judges that the risk of Gary going to Hell outweighs the infinitesimal probability that being taught Christianity will make him more dangerous than he is as a Jew. Actually, they figure that the case for Christianity is so strong that he must not have read the pamphlets they gave him.

we accept some probability of being harmed because our actions generate a probability of harming others; the social contract only determines the probabilities and the extents of harm allowed against the metagoal in the context of the risks we generate for others. For instance, I accept the risks of being harmed while others are driving by driving and imposing those risks on others, with consent through action.
Um, no, you accept those risks by writing "I accept the risks". "Consent through action" is another way to say "consent by proxy" -- it is you determining what somebody else consents to. It is every bit as logical as Christianity's sin-by-proxy and atone-by-proxy. Social contract theory is a religion.

Of course, I do invoke a second role of the social contract: it can also serve to formalize etiquettes for the disposition of limited social resources.
The social consensus votes to dismantle the synagogue to deploy its bricks and lumber to a more socially desired use.

Clearly what's going on here is you picked symmetry as your ethical premise, due to an aroused emotional drive. You transparently have a symmetry boner.
No system based on axioms can tolerate the existence of a contradiction within it. It has nothing to do with emotion and everything to do with the fact that I expect my ethical principles to be logical. I have a logic boner. So should you. So should everyone. Anything else is, well, illogical.
You haven't exhibited an asymmetry in retributive ethical systems, merely a tendency for people to change their minds when it's their own ox being gored. But never mind that -- you haven't even exhibited a logical contradiction in ethical systems that really are asymmetrical. Here, let's make it as easy for you as it could be. Consider the ethical system "King Charles I may do whatever he pleases; everyone else has an ethical duty to obey King Charles I in all things." Go ahead: derive a logical contradiction from that.

:eating_popcorn:

Stomping the bread into the ground so you could both die is by definition harming the goals of others, as a goal in and of itself.
It's my bread, not his. His metagoal cannot be harmed because he has no justification to take the bread. The destruction of the bread is, in fact, a product of his own unethical behavior, as a result of what was an absolutely fair way to decide what happened to the bread. All other things being equal, if might makes right, we both die, because I will be forced to fight to the death lest I die anyway. All other things being equal, if we play any other game for the bread, it comes down to probabilities anyway. So no matter what, it comes down to probabilities.

So his goals END as soon as he loses whatever game we decide on. It is by definition not harming them because his goals have already by his own consent been ended.

The only option for either of us was always "get a 50/50 chance at bread"; the cost of accepting a chance of getting that bread without mortal harm is accepting the consequences of cheating (namely mutually assured destruction); I just figure it's better to be starving to death without also being heavily injured in a fight that's likely to destroy the bread anyway.

Maybe you missed the fact that RS in this scenario of stomping the bread is already presumed to have lost the coin toss. If he wins, he gets bread without violence, and I starve.

At that point, who is being unethical, again? Oh yeah, the person who would create a situation where they may get bread without injury and someone else starves, but they refuse to offer a situation where the other may attain the same. If we both play by the rules, we universally have a better chance at survival.
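The bread standoff above is essentially a tiny game-theory payoff matrix. As a minimal sketch (with made-up survival probabilities of my own -- 0.5 each for abiding by a fair coin toss, 0 for any bread-destroying fight; the names `SURVIVAL` and `best_strategy` are purely illustrative, not anything established in the thread):

```python
# Hypothetical payoffs for the bread scenario: each entry maps a pair of
# strategies to the (my, his) probabilities of survival.
# "fair"  = abide by the coin toss; "cheat" = fight for the bread.
#
# Both fair: winner of the toss eats and survives, loser starves -> 0.5 each.
# Any cheating: the fight destroys the bread (mutually assured destruction)
# -> 0.0 for both.
SURVIVAL = {
    ("fair", "fair"): (0.5, 0.5),
    ("fair", "cheat"): (0.0, 0.0),
    ("cheat", "fair"): (0.0, 0.0),
    ("cheat", "cheat"): (0.0, 0.0),
}

def best_strategy(opponent: str) -> str:
    """Pick the strategy that maximizes my survival chance against a
    fixed opponent strategy."""
    return max(("fair", "cheat"), key=lambda mine: SURVIVAL[(mine, opponent)][0])

print(best_strategy("fair"))  # prints: fair
```

Under these assumed payoffs, playing by the rules is the only strategy that ever yields a nonzero chance of survival, which is the "we universally have a better chance at survival" claim in numeric form.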
All that comes under the heading of "everyone is the hero of their own story and people will jump through all kinds of hoops to prove it to themselves." You are killing him, not because it's necessary to prevent a future crime, but because of his past crime. Your justifications -- that he's unethical, that he created the situation, that it's a product of his own unethical behavior, that the goals you consider legitimate for him to have are forfeited due to his cheating -- are all just high-falutin' ways to say you're killing him because he deserves it.
 

ruby sparks

This is a very inaccurate statement. I do not think nature is ethical. I think that certain contexts in nature create situations where particular strategies are optimally beneficial.

I'm fine with that, you see, but then you say things like 'ethics is the selection pressure' and that you have a radical idea about guiding principles.

The way I would put it is, there are natural laws and natural selection pressures that of themselves are neither ethical nor unethical (are a-ethical), and that what we feel, and name, ethical, is our response to them.

An analogy might be beauty. Not Angra's example of colour, because there are, I would say, objective facts about colour (or at least about the wavelengths of light) that are independent of human judgements about them. I would say that this is not the case for beauty, or morality, or perhaps any human value judgement.

Ethics are natural, but not all of nature is ethical.

I do not even understand how you can say any of nature is ethical (of itself) rather than that we call it that because we have evolved certain capacities and traits.
 

ruby sparks

First, it's not only that he enjoys it. It's that he does it for fun, so that is the reason (not just a secondary reason).
Second, he deserves punishment, regardless of what he needs - what he needs is food, water, air, etc.
Third, your claim against retributivism is just that: a claim that goes against the ordinary human moral faculty. As is the case when you question any human faculty, the burden is on you. It would be irrational to dismiss our faculties for no good reason - reasons we can only assess by means of some of our faculties, obviously.
Fourth and foremost, this misses the point entirely. The point of the example was not that he deserves punishment (though he does), but that he behaves unethically. That is sufficient to show that Jarhyn's theory is false.



No, that is not a problem. The problem is with your qualifiers. But that the ordinary human moral faculty deems a behavior unethical is in fact sufficient evidence to conclude that it is so, just as is the case with other ordinary human faculties, and barring specific evidence to the contrary.
For that matter, if it seems blue to a normal human visual system under ordinary light conditions, that is pretty much sufficient evidence to reckon that it is blue. It's what rationally one should reckon, barring a lot of counter evidence. The same for the verdicts of other human faculties, in this case the moral faculty. It's you who has the burden of showing that his behavior is not unethical.

Again, your qualifiers only complicate matters. The 'really' qualifier does not seem to add anything; it's just an intensifier. As for the others, I already addressed them.


Now, if you were correct and the ordinary human faculty were not enough to justify our moral assessments, then nothing would, and Jarhyn's theory would be unwarranted. The reason for that is that we do not have any tools for assessing whether a behavior is unethical or not other than the ordinary moral human faculty - our own, and that of other people -, aided of course by other faculties (e.g, to make intuitive probabilistic assessments about expected consequences of some behavior), but in the end, our moral faculty is the tool to make ethical assessments.

What about moral theories?

None is true. However, even if one were true, those theories can only be properly tested against the judgments of the human moral faculty (or against something already based on it), so even then, we would only be justified in believing them true if they pass the test when their predictions are tested vs. the human moral faculty.

Incidentally, something like the above holds for color too: we may have cameras and computers that can detect blue stuff, but we only have them because they have been calibrated using human color vision (or tools already based on it).


ruby sparks said:
And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated.
You seem to have lost track of the conversation. Again, I was showing that Jarhyn's theory was false. In order to test a theory, I just need to compare its predictions with some known facts. So, the extreme examples are pretty adequate.

So what, you say?

So, "'all normal, decent, intelligent humans would think this wrong'", but the theory I am debunking entails it is not morally wrong.


ruby sparks said:
On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.
No, that is not true. It would be irrational to question one of our faculties without good evidence against it - evidence which, of course, we also assess on the basis of our human faculties!

We do not do that normally. For example, we do not demand of people who say a traffic light was red that they show that the human visual system is a proper tool to figure out whether something is red. Sure, there are arguments for a color error theory (they fail), but the burden is on the claimants.


ruby sparks said:
In fact, demonstrating that there is even such a thing as moral truth in the first place is a burden you might want to try to lift before you even get on to the other one.
No, not at all. That some behaviors are unethical is obvious by normal human assessments. The burden is on you to show otherwise in the first place.

However, in this context you miss the point again. Jarhyn's theory entails that there is such a thing as moral truth - ethical truth, in his terminology. So, in order to argue against it, it is proper to assume there is (else, the theory is false on that account alone). This does not even depend on whether it is proper to reckon in general (i.e., when not arguing against a specific theory) that there is moral truth (it is, but that is really not the point here).


Jarhyn said:
I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn.
Are you serious?
Discussing it with Jarhyn is exactly what I was doing, when you jumped in: you jumped on a post in which I was replying to Jarhyn's ethical theory. You took that post out of context. Of course Jarhyn agrees that there are ethical truths (read his posts!), and his theory entails that there are (read his posts!), so it would be proper on my part to assume there is ethical truth in the context of testing his theory even if it were not proper to reckon in general (i.e., in other contexts) that there is ethical truth.


Let me try another way. Suppose that someone claims that God (i.e., an omnipotent, omniscient, morally perfect agent) exists. In order to argue against that claim, it is proper to assume that there is an omniscient, omnipotent agent, and then argue it is not at all morally perfect. Now, this case is different because there is moral truth, whereas there is no good reason to suspect there is an omniscient, omnipotent agent, but it is not different in the relevant sense, namely that it is okay to use as a hypothesis one or more of the implications of the theory one is criticizing.

Ok too much of that is confusingly interwoven with the mistaken idea that I am or was discussing Jarhyn's claims with you. As I said already, I am not and wasn't.

I will extract something though...

What about moral theories?

None is true.

This is not something I would have expected you to say, although it is what I would say. Are you not in fact, after all, claiming that your theory is true?
 

ruby sparks

I think it's entirely plausible that someone will "get past Hume" -- the "Is-ought problem" is overrated -- but AM's approach appears to me to have better prospects for success than yours.

Without (I stress) getting involved in your exchanges with Jarhyn, I would like to offer my thoughts on this, separate to your disagreements with Jarhyn. I am not sure if the is-ought problem is overrated, but it depends what you mean. I might say that it can never be got past, although this does not prevent us from coming up with moral theories nonetheless. We pragmatically need to do that, I think, not least because (a) we are stuck with having to deal with our moral intuitions and (b) we must find ways to co-exist, if only in order to survive, which I feel is probably the main driver for what we humans call (rationalise as being) morality, even though the universe is amoral.

As to AM's approach, I agree it has its merits, obviously. But I am a bit skeptical about where he goes with it.

Personally, I would say that morality is neither objective nor relativist. I would say that that is a false dichotomy, and too simple to reflect the enormous complexities. Does that mean I would say that there are no objective moral facts? No, I don't think I would go as far as that. There may be, but my caveats would be that (a) there might only be a very few, in clear cut situations (which I think are the minority) and (b) they are only objective in the sense that they are common to all (let's say) normal, properly-functioning humans (temporarily assuming we can define that) and are not objectively independent of our species the way that, for example, the laws of physics are.

For example, take Angra's favourite "it is morally wrong to torture people just for the fun of it.” All 'normal, properly-functioning' humans might agree with this, but (a) that does not make it independently true and (b) once we move away from such extreme examples, the ground starts to get situationally boggy, not least when we move on to responses (just deserts).
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
ruby sparks said:
Ok too much of that is confusingly interwoven with the mistaken idea that I am or was discussing Jarhyn's claims with you. As I said already, I am not and wasn't.
But you raise objections that do not make sense as objections to a post in which I reply to Jarhyn. Even if your claims about burden, etc., were correct in general, they would not work against my arguments in the post you were replying to, because that post was a reply to Jarhyn's theory, and it is proper to use some of the implications of the theory one is testing (in this case, Jarhyn's) in the arguments against it.


ruby sparks said:
This is not something I would have expected you to say, although it is what I would say. Are you not in fact, after all, claiming that your theory is true?
I see I haven't been clear. I was talking about first-order ethical theories that make specific predictions about moral assessments, and I mentioned them in opposition to the use of the human moral sense to make moral assessments. The reason I say none is true is that when tested using a human moral sense, they all fail.

Let me illustrate the distinction with an analogy. Imagine philosophers and/or scientists come up with different theories about color that make predictions about which objects are, say, blue. But when we look at some of the objects in question, under ordinary light conditions (i.e., objects in our vicinity, daylight, no difficulty to see, so pretty ordinary conditions), several of them do not look blue. On the basis of this, I would say all color theories are false. Someone might object and say 'But what about the color theory that says that our visual system, under ordinary conditions, is a good guide to ascertain the color of an object?' I would then say that that is not what I called a 'color theory'. But regardless of terminology, my point would be as above for moral theories.

Also, it's not my theory; it's not my invention, except for some details.
 

ruby sparks

But you raise objections that do not make sense as objections to a post in which I reply to Jarhyn. Even if your claims about burden, etc., were correct in general, they would not work against my arguments in the post you were replying to, because that post was a reply to Jarhyn's theory, and it is proper to use some of the implications of the theory one is testing (in this case, Jarhyn's) in the arguments against it.

I don't know how I can be any clearer. Regardless of whether I came in on a conversation between you and Jarhyn, my points to you were to you and about your theory, not Jarhyn's. I am at this point discussing Jarhyn's theories with him separately.


I see I haven't been clear. I was talking about first-order ethical theories that make specific predictions about moral assessments, and I mentioned them in opposition to the use of the human moral sense to make moral assessments. The reason I say none is true is that when tested using a human moral sense, they all fail.

I don't understand that. Are you or are you not saying that yours is true?

Or are you merely saying that yours is not true, but at least accords with what you are calling 'human moral sense'? If so, good, but I would say that that is more complicated, variegated and relative than you seem to allow for. As such, it may be applicable where someone hypothetically kills purely for fun, temporarily assuming that ever happens, but beyond that I'm not so sure. Nor am I sure about the next step, deserts.

Let me illustrate the distinction with an analogy. Imagine philosophers and/or scientists come up with different theories about color that make predictions about which objects are, say, blue. But when we look at some of the objects in question, under ordinary light conditions (i.e., objects in our vicinity, daylight, no difficulty seeing, so pretty ordinary conditions), several of them do not look blue. On the basis of this, I would say all color theories are false. Someone might object and say 'But what about the color theory that says that our visual system, under ordinary conditions, is a good guide to ascertain the color of an object?' I would then say that that is not what I called a 'color theory'. But regardless of terminology, my point would be as above for moral theories.

Also, it's not my theory; it's not my invention, except for some details.

I've said many times that I do not think colour is a good comparison. There may be independent (of humans) objective facts about colour (at least in terms of wavelengths of light) but I do not think there are such facts about morals.

I have suggested (aesthetic) beauty as a comparison instead, or some other human value judgement. I think that would be much better, given that we would be dealing with human value judgements in both comparative cases.
 

Angra Mainyu

Veteran Member
Joined
Jan 23, 2006
Messages
4,069
Location
Buenos Aires
Basic Beliefs
non-theist
ruby sparks said:
I don't know how I can be any clearer. My points to you were to you and about your theory, not Jarhyn's.
And again, in that context, they are out of place, because even if your points about what you call my theory were correct, my points against Jarhyn's would work for the reasons I've been explaining.


ruby sparks said:
I've said many times that I do not think colour is a good comparison.
I've said many times that whether something is a good comparison depends on what it is we are talking about. Color is similar to morality in some senses, not in all (otherwise it would be morality), but the comparison is relevant in this context. If you do not see that, I'm afraid I cannot go further to explain it, as I do not know how to.

I will actually address the is/ought objection in another thread, but I will use color as an example. I hope you realize why it is adequate. If you do not realize that, I'm afraid I can't do more.
 

Angra Mainyu

Bomb#20 said:
I think it's entirely plausible that someone will "get past Hume" -- the "Is-ought problem" is overrated -- but AM's approach appears to me to have better prospects for success than yours.
Maybe I went too far; I'm not sure. But I'm pretty sure that if it's a fallacy, it's pretty much everywhere, and it is inevitable. I will address it in another thread in MFP, in which I expect that I will be told repeatedly that the color analogy is inadequate, and so is the science analogy, and so on.:(
 

ruby sparks

.... even if your points about what you call my theory were correct, my points against Jarhyn's would work for the reasons I've been explaining.

That is not something I am or was concerned about. You and Jarhyn are not necessarily debating the same things as you and I.

I've said many times that whether something is a good comparison depends on what it is we are talking about. Color is similar to morality in some senses, not in all (otherwise it would be morality), but the comparison is relevant in this context. If you do not see that, I'm afraid I cannot go further to explain it, as I do not know how to.

I will actually address the is/ought objection in another thread, but I will use color as an example. I hope you realize why it is adequate. If you do not realize that, I'm afraid I can't do more.

I definitely think you should do a value judgement, such as beauty, for reasons given. No, I do not accept that your comparison with colour is the better one. Go ahead and use whatever comparison you wish, but I think it's flawed, and I believe it contains an underlying presumption that morality has real, objective, independent properties, as colour (in terms of wavelengths of light) has, because that is precisely the key point of comparison you make. In other words, it's a conveniently pre-loaded comparison you're using. Which I think is very iffy indeed. In any case, comparisons only work to a finite extent. It may be that morality is in some key ways different from either beauty or colour or whatever.

And even if you or anyone did manage to establish let's say at least one 'moral fact' (for humans), it will come with all the caveats I previously gave, that it is not truly independent or objective, that it can't necessarily be extrapolated to deal with other less clear situations and that it doesn't sort out the issue of deserts. As such, it is of very limited value and may even be something akin to a little nugget of philosophical fool's gold, depending on how you try to spend it in the real world.

To use your example of gustatory taste (which I agree was a better comparison than colour) yes, you may establish that almost every 'normal, proper-functioning' human will agree that something tastes disgusting, but you will never establish whether something else that some like and others don't has a similarly correct answer. And I would again remind you that rotting shark is considered a delicacy in Iceland in any case. :)
 

Angra Mainyu

I've said many times that whether something is a good comparison depends on what it is we are talking about. Color is similar to morality in some senses, not in all (otherwise it would be morality), but the comparison is relevant in this context. If you do not see that, I'm afraid I cannot go further to explain it, as I do not know how to.

I will actually address the is/ought objection in another thread, but I will use color as an example. I hope you realize why it is adequate. If you do not realize that, I'm afraid I can't do more.

I definitely think you should do a value judgement, such as beauty, for reasons given. No, I do not accept that your comparison with colour is the better one. Go ahead and use whatever comparison you wish, but I think it's flawed.

I already explained in the other thread why the reasons are not adequate. In this particular case, I was using color only to explain to you what I meant by a 'moral theory', which is analogous to what I would mean in that context by 'color theory'. So, the reasons you give for thinking the analogy is not adequate clearly fail (and if you do not see that, there is nothing I can do).

In other contexts, I use it for different purposes. And I reject the reasons you give for reasons I gave in our previous exchanges.

Btw, here is the new thread on the is/ought issue. I hope you realize why the analogies with color and science are adequate. https://talkfreethought.org/showthread.php?22197-The-is-ought-issue
However, previous experience suggests you will not, unfortunately.
 

Jarhyn

Wizard
Joined
Mar 29, 2010
Messages
9,798
Gender
No pls.
Basic Beliefs
Natural Philosophy, Game Theoretic Ethicist
The social consensus is impressed by Pascal's Wager. It judges that the risk of Gary going to Hell outweighs the infinitesimal probability that being taught Christianity will make him more dangerous than he is as a Jew. Actually, they figure that the case for Christianity is so strong that he must not have read the pamphlets they gave him.
which assumes things not in evidence. Prove hell. They both have EQUALLY proven beliefs, proven on equal amounts of evidence. Pascal's Wager has already fallen to trivial logical argument, and in fact I often make an inverse wager: that if there is a god, that God prefers the atheist. It goes a little something like this...

"The universe as we observe it has much evidence that it is old and is governed by unthinking mechanical relationships. Much of it is indeterminate, and much of it is simply absurd. Nobody who has ever claimed to talk to god can claim a more 'divine' experience than someone who has merely taken a bunch of LSD, in fact. One thing that is trivially certain, however, is that IF there is a creator god, by that assumption he DID create a universe, and this universe has a particular shape in the relationships that can emerge from its existence; the universe itself and its properties and phenomena are in fact the only thing that can ever be said to be the word of God, if there is such a thing. The evidence remains that God has been silent since the first word.

Thus, such a God would clearly prefer the person who reads this direct word and figures out from it what relationships and phenomena exist in it through testing and modeling and honest doubt in the flawed words of mere men. They will test, and doubt, and work it out. And they will not see 'a God' because no god is necessary based on what science, the act of understanding and reading the universe, has observed.

Thus God prefers the atheist, and particularly the atheist who derives their ethics from the shape of the universe rather than the assumptions made by men, even in the face of the absurdity of their own feelings.

This is in fact confirmed by the fact that atheistic or agnostic science yields better results for survival and understanding; the universe is shaped in such a way as to guarantee this. There is no evidence of an afterlife. There is only evidence of this life. Thus the evidence indicates that it is our obligation, for our own sake, to make life as good in the one life we have been given evidence of as we find ourselves capable of."

I will get back to this Pascal's Wager abuse of the social contract in a bit, from different directions.

"King Charles I may do whatever he pleases; everyone else has an ethical duty to obey King Charles I in all things."
King Charles is a person, in the same fashion as all of "everyone else".

The contradiction exists quite plainly in that special pleading: the fact that King Charles' rights conflict with everyone else's rights; person X does not have equal moral value to person Y for all X and Y.

I am under no obligation to accept your axiom, at any rate. I am under an obligation to accept only axioms I cannot deny, and those I cannot deny without contradicting an axiom no other can deny. First, I claim my authority to act on the basis of my own existence (that I, ultimately, have autonomy). Second, if I claim this autonomy, it is equal in value to all others who claim this autonomy. Third, there are no real contradictions in nature.

So, if our autonomies have equal value (axiom 2) and your goal requires a greater value to your autonomy than mine, you have already invoked a contradiction. This is the ethical disproof of justification, the point at which an ought becomes qualified as "not ethically justified".
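[Editor's aside: the consistency test described above can be sketched in code. This is purely an illustrative toy, not Jarhyn's own formalism; the function name, the weight representation, and the example goals are all hypothetical.]

```python
# Toy sketch of the "ethical disproof of justification" described above.
# A goal is modeled (hypothetically) as a mapping from actors to the value
# the goal assigns to each actor's autonomy.

def ethically_justified(goal_weights):
    """Flag a goal as unjustified if it requires valuing one actor's
    autonomy above another's, violating the equality axiom (axiom 2)."""
    weights = list(goal_weights.values())
    return all(w == weights[0] for w in weights)

# King Charles's goal weights his own autonomy above everyone else's,
# so it invokes the contradiction; a goal treating X and Y equally does not.
king_charles_goal = {"King Charles": 1.0, "everyone else": 0.0}
mutual_goal = {"X": 1.0, "Y": 1.0}
```

On this toy model, `ethically_justified(king_charles_goal)` comes out `False` while `ethically_justified(mutual_goal)` comes out `True`, matching the verdict argued for in the text.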

Instrumental and moral oughts appear prima facie to differ in that "But I don't want to reach the other side of the wall" is generally perceived to be a good reason for not doing the thing one supposedly ought.

"When my goal is to get to the other side of the wall in the example but my goal isn't to get to the other side of the wall in the example..."
You are invoking a contradiction against the initial conditions of the example. The point is that the best strategy is contextual to the goal. You are moving the goalposts, quite literally, in asserting a different goal than the one our hypothetical actor had.

You frequently assert that the metagoal is a specific goal. It is not.

The metagoal represents a SET of goals, namely ALL goals for which value of autonomy of X is accepted as equal to the value of autonomy of Y.

I am using a single example where a goal is assumed to derive a strategy, so that later, when I derive a strategy that describes the metagoal, I can demonstrate an instrumental ought that is universally morally justified without engaging in special pleading.

Regardless of what you think you know of social contracts, this produces two issues that need to be resolved in the resolution of strategy, something that comes AFTER and SUBORDINATE to the aforementioned principles. This is where game theory enters: the strategy must address zero-sum games and probabilistic outcomes, thus creating two "social contract" functions: the contract which decides what probabilities of risk we accept others subjecting us to (lest we be paralyzed by infinitesimal probabilities of harm), and the resolution of limited resources in an equitable way. Both are well within the purview of game theory. Note that while the former opens us up to harm from the actions of others, it increases rather than limits our freedom, with an inverse relationship to the probability of harm those actions may create.
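[Editor's aside: the claim that a goal plus fixed rules determines a strategy, in the game-theoretic sense invoked above, can be made concrete with a minimal zero-sum example. This is an illustrative sketch only; the payoff matrix and function name are invented for the example.]

```python
# Minimal sketch: maximin choice in a 2x2 zero-sum game.
# Rows are "my" actions, columns the opponent's; entries are my payoff.

def minimax_row_choice(payoffs):
    """Pick the row whose worst-case payoff is best (the maximin rule)."""
    # For each of my actions, assume the opponent minimizes my payoff.
    worst_case = [min(row) for row in payoffs]
    best_row = max(range(len(payoffs)), key=lambda i: worst_case[i])
    return best_row, worst_case[best_row]

# Given the rules (the "is"), the goal "maximize my guaranteed payoff"
# fixes the choice (the "ought"):
matrix = [[3, -1],
          [0, 2]]
choice, guaranteed = minimax_row_choice(matrix)
# Row 1 guarantees at least 0; row 0 only guarantees -1, so row 1 is chosen.
```

The point of the sketch is the one made in the head of the thread with Tic Tac Toe: once the rules and the goal are fixed, the "ought" (play the maximin row) is derivable rather than stipulated.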

Now, let me get back to your Pascal's Wager bullshit: first, Gary not going to church does not in any way generate risk for others. It creates exactly the outcome he has consented to on the basis of his own goals: it does not assume his justification based on his existence is superior to the justification of actions others have based on their existences. He has consented to hell if he is wrong AS IS HIS RIGHT, just as the Christians consent to hell if Gary happens to be right. Because Gary does not risk THEIR souls even in going to hell, he has a right to do so on the basis of his personal goals (which include not wasting his Sunday hearing someone blather on about false bullshit). There are some things the principles of ethics I have laid down do not allow a vote on, namely whether a person's rights are superior to another's; only on what risk one is allowed to subject another to, and that risk is purely measured in terms of the impacts on another person's goals, which can even include "going to hell, if I am wrong". And of course in this situation even God himself is measured against ethics. And we could have a merry conversation in which you would probably agree with me that the very idea of hell is unethical, at least within the context of the neo-Lamarckian social-technological strategic context.

We can see imperfect reflections of these limitations on the social contract, in its subordinate position to noncontradiction, in the existence of a bill of rights that limits the social contract and in its general acceptance in the population (most people are mostly right most of the time). This is roughly how things are already accepted to work. All I am doing is attempting to bring the understanding of why into sharper focus.
 

Angra Mainyu

First, it's not only that he enjoys it. It's that he does it for fun, so that is the reason (not just a secondary reason).
Second, he deserves punishment, regardless of what he needs - what he needs is food, water, air, etc.
Third, your claim against retributivism is just that: a claim that goes against the ordinary human moral faculty. As is the case when you question any human faculty, the burden is on you. It would be irrational to dismiss our faculties for no good reason - reasons we can only assess by means of some of our faculties, obviously.
Fourth and foremost, this misses the point entirely. The point of the example was not that he deserves punishment (though he does), but that he behaves unethically. That is sufficient to show that Jarhyn's theory is false.



No, that is not a problem. The problem is with your qualifiers. But that the ordinary human moral faculty deems a behavior unethical is in fact sufficient evidence to conclude that it is so, just as is the case with other ordinary human faculties, and barring specific evidence to the contrary.
For that matter, if it seems blue to a normal human visual system under ordinary light conditions, that is pretty much sufficient evidence to reckon that it is blue. It's what rationally one should reckon, barring a lot of counter evidence. The same for the verdicts of other human faculties, in this case the moral faculty. It's you who has the burden of showing that his behavior is not unethical.

Again, your qualifiers only complicate matters. The 'really' qualifier does not seem to add anything; it's just an intensifier. As for the others, I already addressed them.


Now, if you were correct and the ordinary human faculty were not enough to justify our moral assessments, then nothing would, and Jarhyn's theory would be unwarranted. The reason for that is that we do not have any tools for assessing whether a behavior is unethical or not other than the ordinary moral human faculty - our own, and that of other people -, aided of course by other faculties (e.g, to make intuitive probabilistic assessments about expected consequences of some behavior), but in the end, our moral faculty is the tool to make ethical assessments.

What about moral theories?

None is true. However, even if one were true, those theories can only be properly tested against the judgments of the human moral faculty (or against something already based on it), so even then, we would only be justified in believing them true if they pass the test when their predictions are tested vs. the human moral faculty.

Incidentally, something like the above holds for color too: we may have cameras and computers that can detect blue stuff, but we only have them because they have been calibrated using human color vision (or tools already based on it).


ruby sparks said:
And in any case, you're too fond of the extreme example of causing harm for fun. It's trivially obvious that at some point on the spectrum of human behaviours, we could say something like, 'all normal, decent, intelligent humans would think this wrong'. So what? You're just operating at one extreme. At the other end of the spectrum, human morality is pretty relative and variegated.
You seem to have lost track of the conversation. Again, I was showing that Jarhyn's theory was false. In order to test a theory, I just need to compare its predictions with some known facts. So, the extreme examples are pretty adequate.

So what, you say?

So, "'all normal, decent, intelligent humans would think this wrong'", but the theory I am debunking entails it is not morally wrong.


ruby sparks said:
On the contrary, the claim that what you call the ordinary human moral sense is the proper tool to find moral truth is something for which the burden is on you.
No, that is not true. It would be irrational to question one of our faculties without good evidence against it - evidence which, of course, we also assess on the basis of our human faculties!

We do not do that normally. For example, we do not demand of people who say a traffic light was red that they show that the human visual system is a proper tool for figuring out whether something is red. Sure, there are arguments for a color error theory (they fail), but the burden is on the claimants.


ruby sparks said:
In fact, demonstrating that there is even such a thing as moral truth in the first place is a burden you might want to try to lift before you even get on to the other one.
No, not at all. That some behaviors are unethical is obvious by normal human assessments. The burden is on you to show otherwise in the first place.

However, in this context you miss the point again. Jarhyn's theory entails that there is such a thing as moral truth - ethical truth in his terminology. So, in order to argue against it, it is proper to assume there is (else, the theory is false on that account alone). This does not even depend on whether it is proper to reckon in general (i.e., when not arguing against a specific theory) that there is moral truth (it is, but that is really not the point here).


Jarhyn said:
I'm not and wasn't discussing Jarhyn's claim with you. You can discuss that with Jarhyn.
Are you serious?
Discussing it with Jarhyn is exactly what I was doing when you jumped in: you jumped on a post in which I was replying to Jarhyn's ethical theory. You took that post out of context. Of course Jarhyn agrees that there are ethical truths (read his posts!), and his theory entails that there are (read his posts!), so it would be proper on my part to assume there is ethical truth in the context of testing his theory even if it were not proper to reckon in general (i.e., in other contexts) that there is ethical truth.


Let me try it another way. Suppose that someone claims that God (i.e., an omnipotent, omniscient, morally perfect agent) exists. In order to argue against that claim, it is proper to assume that there is an omniscient, omnipotent agent, and then argue that it is not at all morally perfect. Now, this case is different because there is moral truth, whereas there is no good reason to suspect there is an omniscient, omnipotent agent, but it is not different in the relevant sense, namely that it is okay to use as a hypothesis one or more of the implications of the theory one is criticizing.

Correction: the last quote above is from ruby sparks, not Jarhyn.
 

Angra Mainyu

me said:
c. It contradictory to have justice done by the government on those who engage in heinous crimes, but leave minor unethical behaviors out of it, and to the punishment regularly inflicted by humans on one another by means of condemning each other's behavior, or mocking each other, etc.
I meant to say "It is not contradictory...", etc.
 

Angra Mainyu

Jarhyn said:
King Charles is a person, in the same fashion as all of "everyone else".

The contradiction exists quite plainly in that special pleading: the fact that King Charles' rights contradict with everyone else's rights; person X does not have equal moral value to person Y for all X and Y.

First, there is no claim that anyone has "moral value", just that King Charles may do as he pleases and everyone else has an ethical duty to obey King Charles I in all things.
Second, even if we add the clause that the existence of King Charles is a better state of affairs, all other things equal, than the existence of any other human person (a way of making sense of the idea that KC has greater moral value than any other person), you have failed to derive a contradiction.
me said:
Jarhyn said:
In fact, ethics only exists in the context of more than one person. If there is only one person, it is perfectly acceptable to be a solipsist, and all instrumental oughts are in fact ethically justified; nobody else's concerns need to be heeded in that situation because there is nobody else to be concerned with.
That is false. Purely for example, suppose that all people die due to a rogue biological weapon, except for Joe, who decides to pour gasoline on a cat and set her on fire, so that he has fun watching a fireball run. In fact, he does that every day, as there are plenty of cats around, and he can capture them with different traps and tactics. He is determined to do this, and has human intelligence and the tools left by the rest of humanity at his disposal, so the cats do not have a chance. Then Joe behaves unethically.
Do you have any reply?

me said:
Bomb#20 said:
Uh huh. So Gary wants to go to synagogue on Saturday and work on Sunday; Bob has goals that require everyone to work on Saturday and go to church on Sunday. Through social consensus it's agreed to damage Gary's metagoal of satisfying his own religious obligations, because his unilateral goal is mutually exclusive with the deemed-acceptable metagoal of the social consensus, which is to have as many people as possible be saved through knowing Jesus, so Gary's goal gets rejected.

Jarhyn said:
So, Gary's goal does not unilaterally invoke bob
Jarhyn said:
Bob's goal unilaterally invokes gary. There you go, it's already not up for debate with the social consensus. If I can change the name 'jesus' for 'muhammed' or any other arbitrary thing, it's already disallowed as a contradiction; you are already abusing the role of the social consensus in the model, and invoking special pleading to justify one form of goal over another (jesus as opposed to Muhammed, neither of which is justifiable against the observable reality; come back to me when you prove jesus and God and all that exist).

Gary wants to pour gasoline on a cat and set it on fire every Saturday, because he has fun watching a fire ball run.
Bob has goals that require that everyone refrain from setting cats on fire for fun, and further require that failing that, police try to arrest people who set cats on fire for fun.

So, Bob has goals that unilaterally invoke Gary and other people. It's already not up for debate. Bob is behaving unethically. Gary is not. This is what your ethical theory predicts. Since this is false, it follows that your ethical theory makes false predictions, so it has been tested and shown to be false (it had already been shown to be false, on other grounds, but there is no harm in showing it again).
Do you have any reply?
 

ruby sparks

In this particular case, I was using color only to explain to you what I meant by a 'moral theory', which is analogous to what I would mean in that context by 'color theory'. So, the reasons you give for thinking the analogy is not adequate clearly fail (and if you do not see that, there is nothing I can do).

That is true. To be honest, I did not thoroughly read what you said just above about colour.

But then I did read it, and I think in the main paragraph, you tried to say that your theory was not really a theory? So when you said no moral theories are true, it's not clear to me now whether you included yours or whether you only meant other moral theories. Could you clarify?

In other contexts, I use it for different purposes. And I reject the reasons you give for reasons I gave in our previous exchanges.

Fine, but if I catch you implying that there are facts about morality because there are, or via a comparison with, facts about colour, I'll be onto you. :)
 

Angra Mainyu

ruby sparks said:
But then I did read it, and I think in the main paragraph, you tried to say that your theory was not really a theory? So when you said no moral theories are true, it's not clear to me now whether you included yours or whether you only meant other moral theories. Could you clarify?
It is in a sense a theory (though it's not mine), but what I mean is that it's not one of the theories I was referring to when I said a 'moral theory' in the previous post, just as I would not have classified the view that human color vision is a proper and effective tool for finding color facts a 'color theory' in the sense described above.

So, call it a theory if you like, I do not mind, but it's not the sort of theory I was talking about when I said moral theories were all false.
 

repoman

Contributor
Joined
Aug 4, 2001
Messages
8,297
Location
Seattle, WA
Basic Beliefs
Science Based Atheism
Tragedy is when I cut my finger. Comedy is when you fall into an open sewer and die.

-Mel Brooks
 

ruby sparks

So, call it a theory if you like, I do not mind, but it's not the sort of theory I was talking about when I said moral theories were all false.

It's a moral theory. So when you earlier said no moral theories are true, you excluded the one you are using. I expect that works nicely for you. ;)
 

Angra Mainyu

So, call it a theory if you like, I do not mind, but it's not the sort of theory I was talking about when I said moral theories were all false.

It's a moral theory. So when you earlier said no moral theories are true, you excluded the one you are using. I expect that works nicely for you. ;)
Obviously, if I had included "my" theory, then I would have said that all moral theories are false, except for that one - since I clearly hold that that one is true.
However, that would not have been adequate. For example, a color theory that says that objects with such-and-such properties are red (where the such-and-such are properties not described in color terms) is tested using our own human color vision. In that context, it's not a color theory to say that human color vision can properly be used to figure out colors - that's just pointing out how you test theories.

But again, not really the point. You count it as a theory? Sure, then that one is correct, though it's not a theory in the same sense the ones I was talking about are theories, so it would be confusing to use it in that manner.

For that matter, I was not talking about metaethical theories I reject (like an error theory), either.
 

ruby sparks

Obviously, if I had included "my" theory, then I would have said that all moral theories are false, except for that one - since I clearly hold that that one is true.

Indeed. Thanks for clearing that up. All moral theories are false, except your preferred one. Got it. Luckily, that isn't even slightly dogmatic.
 

Angra Mainyu

Obviously, if I had included "my" theory, then I would have said that all moral theories are false, except for that one - since I clearly hold that that one is true.

Indeed. Thanks for clearing that up. All moral theories are false except yours. Got it.

Well, no, as I said that is not a moral theory in the same sense, and it's not mine as in I did not come up with it. But other than that, of course I hold it's a correct theory - else, I would not be defending it -, and so obviously those that deny it are false. But I had already explained clearly enough what I meant when I said all moral theories were false. The point is that we need to use our moral faculty, we do not have a theory that works and can be used instead.
 