
Jokes about prison rape on men? Not a fan.

My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, and that our theories of ethics are attempts to approximate that principle; that they are, in fact, mere approximations.

You don't sound that different to Angra at this point. :)

I really do think that a game-theoretic approach to ethical philosophy is possible, one that makes it strategic.

Let's look at Tic Tac Toe. There are things that "are": "marks are owned by players", "marks are placed in alternating sequence", "marks are placed on a three by three grid", "marks once placed are set". There is a GOAL, "place three marks in a line", and a secondary goal, "prevent three marks that are not your own from being placed in a line". From these 'is' things, one can derive an OUGHT wherein every single action made by the player is predetermined. It creates a strategy, and players who use that strategy will invariably achieve the secondary goal, and if their opponent makes any mistake at all, they will achieve the primary goal. This creates an ought: IF your goal is to win (and not lose), you OUGHT to apply that strategy as perfectly as possible.
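
To make that concrete, here is a minimal sketch (my illustration, not part of the original argument; it assumes plain minimax search over the full game tree) showing how the "ought", the optimal move, falls out of nothing but the "is" of the rules plus the win/draw/lose goal:

[CODE]
# Minimal minimax sketch for Tic Tac Toe (illustrative).
# The "is": the rules encoded below. The goal: win > draw > loss.
# The derived "ought": best_move(), the move perfect play demands.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game value for X under perfect play: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i+1:], nxt)
            for i, sq in enumerate(board) if sq == ' ']
    return max(vals) if player == 'X' else min(vals)

def best_move(board, player):
    # The derived "ought": the move that best serves the player's goal.
    nxt = 'O' if player == 'X' else 'X'
    moves = [i for i, sq in enumerate(board) if sq == ' ']
    score = lambda i: value(board[:i] + player + board[i+1:], nxt)
    return max(moves, key=score) if player == 'X' else min(moves, key=score)

print(value(' ' * 9, 'X'))      # 0: perfect play by both sides is a draw
print(best_move(' ' * 9, 'X'))  # one opening move consistent with perfect play
[/CODE]

Running it prints 0 for the empty board: perfect play guarantees the secondary goal (a draw), which is exactly the claim above.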

Of course, I expect such an axiom to be controversial. I expect people to not want it, to reject it. I used the transformation to a simple game to illustrate the point in a simple context rather than a hellishly complicated one such as ethics, though it was originally by thinking about Tic Tac Toe that I actually came to understand the mechanism by which goals derive oughts from "is".

Other games have different rules. Sometimes those rules imply that there can be no strategy: that there is no way to achieve any particular goal and the results of the game are random (like the card game WAR).

So to me this says that some of the fundamental elements of moral philosophy have to be approached from the examination of goals... hence my metagoal. Because it can't just be about what I want, if I want a general strategy.

Sure. 'Goals' and 'strategies' are not arbitrary (evolution and natural selection ensure it) and game theory is a useful citation. But as I see it, the idea that goals or strategies are morally right or wrong in any factual, independent, universal or realist way is.....misguided.

Does that make me a moral relativist? Or a Consequentialist? Or a Utilitarian? Or something else? To be honest, I don't know. First, I never seem to feel I fit any particular label or ism, and second, I change my mind a lot. I think people have argued about morality since the start of recorded history and I think they probably will until the end of it and I'm not sure it will ever be philosophically resolved.

One thing to note: games (in game theory) that feature more cooperation than competition seem to work best, I believe. Does this suggest that retribution is not the best strategy? If so, where does that leave retributivism?
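
A small sketch of that cooperation point (my own illustration, with the standard iterated prisoner's dilemma payoffs 3/3, 0/5, 5/0, 1/1 assumed): in a tiny round-robin with self-play included, tit-for-tat, which cooperates first and retaliates exactly once, outscores unconditional defection over repeated play.

[CODE]
# Illustrative iterated prisoner's dilemma round-robin (assumed payoffs).
import itertools

PAYOFF = {('C','C'): (3,3), ('C','D'): (0,5), ('D','C'): (5,0), ('D','D'): (1,1)}

def always_defect(mine, theirs):    return 'D'
def always_cooperate(mine, theirs): return 'C'
def tit_for_tat(mine, theirs):      return theirs[-1] if theirs else 'C'

def play(a, b, rounds=200):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)   # each strategy sees its own history first
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

strategies = {'defect': always_defect, 'cooperate': always_cooperate,
              'tit_for_tat': tit_for_tat}
totals = {name: 0 for name in strategies}
for (na, a), (nb, b) in itertools.combinations_with_replacement(strategies.items(), 2):
    sa, sb = play(a, b)
    totals[na] += sa
    totals[nb] += sb
print(totals)  # tit_for_tat comes out on top in this tiny field
[/CODE]

One wrinkle for the retribution question: tit-for-tat does retaliate, just proportionately and forgivingly, so what this little field suggests is that measured, forgiving retaliation beats both pure defection and unconditional niceness, rather than that retaliation never pays.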
 

Eh, I get that it sounds a lot like AM sometimes, but the fact is, ethics didn't come from nowhere. As to what the underlying "unifying" ethical model might be, I admit right at the get-go that I am probably wrong about at least some elements of what that is. I do not exclude myself from my doubt and there are a lot of things I haven't been able to work out.

I don't like to think about things in terms of specific goals. I fully admit that "wants to kill someone" is not even necessarily an "immoral" goal. Context makes a lot of difference insofar as there are ethical strategies and unethical strategies to fulfill the goal. A clearly unethical strategy for that particular goal would be to go out on the street with a gun, say "I'm going to kill someone", and shoot the first person who runs away screaming. An ethical fulfillment would be to put out an ad for "euthanasia services", wait for an applicant, interview them about their motivations and reasons, get a second opinion from a psychologist and doctor, urge them to seek counseling if it is determined that there may be a different resolution, and, only after their resolve has been made apparent and it is clear they are acting of sound mind, assist in their death.

Strategies can, in this context, clearly be ethical or not ethical.

I tend to reject labels of extant moral models because I don't see myself as a consumer of "pop ethics". If I was hard pressed to invent a name, it would be something like "integral ethics", "physical ethics", or even "computational ethics".

As to games, I was thinking about my own private motivations in terms of how I derive ethics; it is from the perspective of someone who utterly sucks at PVP. I abhor conflict where I can avoid it, and think that while some competition is necessary to isolate "which is more functional" between equal alternatives, PVP is ultimately injurious to self and others. I see us as in this together: better to have more teammates attacking an uncaring universe that shits us out into it ignorant and undefended than to weaken "us". Competition is for testing and self-improvement, rather than isolation and group selection.
 

Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".


But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

One of the more insightful researchers into moral psychology is Jonathan Haidt.


I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.
 

"They are evolved behaviors"... No, ethics are not evolved behaviors. Morals, those are arguably evolved. But not ethics. You are begging a question, "from whence comes the evolutionary convergence upon ethics that is 'morality'?"

From whence come virtues? Why are they "virtuous"? Socrates asked these questions and I have yet to get an answer that is not purely axiomatic and subjective per goal. You add as many axioms as you add virtues.

I bring exactly one axiom to my framework and it is one I can fairly well believe we can agree on: that a goal in the context of a system generates an ought in the context of that goal in that system.
 

My 1st reaction was to pretty much agree with what both you and J842P said about evolved behaviours. I don't think I am understanding the distinction you are making between morals and ethics on that though. Aren't they both, in the end, similarly evolved?
 
As to games, I was thinking about my own private motivations in terms of how I derive ethics; it is from the perspective of someone who utterly sucks at PVP. I abhor conflict where I can avoid it, and think that while some competition is necessary to isolate "which is more functional" between equal alternatives, PVP is ultimately injurious to self and others. I see us as in this together: better to have more teammates attacking an uncaring universe that shits us out into it ignorant and undefended than to weaken "us". Competition is for testing and self-improvement, rather than isolation and group selection.

Here, I think, you are acknowledging at least a certain degree of moral relativism and subjectivism. Which I think is useful. I sometimes think it's a false dichotomy to choose between whether moral relativists/subjectivists or their opponents (be they moral universalists or moral realists or whatever) are correct. To me it seems there is a degree of relativity/subjectivity and a degree of universality (perhaps better to say generality) and realism involved, depending on what types, instances or examples of human behaviour we discuss in whatever context. And, I think this is the sort of complexity we would expect, if natural selection is the driver. The world (the 'system' in which things operate) is very complex and dynamic. Humans are very complex and capricious actors in that world. No two humans and no two contexts are the same. Etc. Even if humans do not have free will (which in the final analysis I tend to think they don't) their capacities for a range of varied and complicated responses (to varied and complicated situations) are wide, imo.

This does not mean we can't strive to construct ethical frameworks, obviously.

ETA: good example there about the 'wanting to kill' goal.
 
Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon. I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.

I tend to agree, very much.

Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

Unfortunately, I am not that familiar with the varieties of virtue ethics that are out there.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".

That's certainly an interesting idea, and not one I've heard before. I'm puzzling over it. :)

But then there is a whole layer of "purity" and "sanctity", and I think that's simply related to group cohesion.

Yes, I think so. Which is why I might say that such things, and maybe virtues, can be reduced to what you cited at the start, namely, evolved behaviours. I think that's where I'd plant my flag. I wouldn't say it was moral bedrock, but I would say that it is the bedrock of what we label (or as you put it rationalise as being, or consider) moral, which of course is a slightly different thing to saying it is actually moral. And of course, as we all now know from plate tectonics, bedrock is not fixed in place, even if it only moves very very slowly.

One of the more insightful researchers into moral psychology is Jonathan Haidt.

I am not that familiar, but from what I have seen and heard, yes, I'd agree.

I do think a game-theoretic analysis is useful, but maybe only up to a certain point, at least currently. But there is a lot of work being done in that sphere.

Yes. I think there are such limits.
 

My 1st reaction was to pretty much agree with what both you and J842P said about evolved behaviours. I don't think I am understanding the distinction you are making between morals and ethics on that though. Aren't they both, in the end, similarly evolved?

So, here is an analogy of the distinction which I hope works well for you: if I throw a ball into the water at the beach, a dog who is playing fetch will run along the beach for a while towards the water line (it is most consistent at a lake rather than an ocean beach). When they hit the waterline they will turn into the water, swim, and get the ball. When measured, the path they take turns out to be the shortest-time path as a function of their running and swimming speeds. This is a calculus problem. But the dog is not "doing" calculus. Rather, the dog has a personal, biological approximation of calculus.
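
As an aside, the least-time entry point can be worked out explicitly; here is a small sketch (the parameter values are my assumptions, loosely based on Tim Pennings' "Do Dogs Know Calculus?" measurements, not anything from this thread):

[CODE]
# Least-time fetch path (illustrative; parameter values are assumptions).
# Setup: the ball lands w meters offshore and d meters down the beach.
# The dog runs at r m/s and swims at s m/s, with r > s.
import math

def travel_time(y, d, w, r, s):
    # Run (d - y) m along the beach, then swim the hypotenuse to the ball.
    return (d - y) / r + math.hypot(y, w) / s

def optimal_entry(w, r, s):
    # Set dT/dy = -1/r + y / (s * sqrt(y**2 + w**2)) = 0 and solve for y.
    return s * w / math.sqrt(r**2 - s**2)

d, w, r, s = 20.0, 8.0, 6.4, 0.9   # beach run, offshore distance, run/swim speeds
y_star = optimal_entry(w, r, s)
print(f"cut into the water {y_star:.2f} m before the ball's beach projection")
print(f"total time at the optimum: {travel_time(y_star, d, w, r, s):.2f} s")

# Sanity check: entering slightly earlier or later takes longer.
assert travel_time(y_star, d, w, r, s) <= travel_time(y_star + 0.5, d, w, r, s)
assert travel_time(y_star, d, w, r, s) <= travel_time(y_star - 0.5, d, w, r, s)
[/CODE]

The dog, of course, computes none of this; it just lands near the zero of that derivative, which is the whole point of the analogy.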

Morals are the dog's evolved machinery. Ethics are actual calculus.

Is calculus evolved? No. It is a product of a system of axioms, and is the same on Earth as it is on Mars, as it is on Proxima Centauri. There is a real phenomenon, the relationship of rates, that can be described perfectly with a mathematical model.

To me, this is the difference between morals and ethics. There is something real there. I am not "into" watching the dog run along the beach. I am not interested in the animal and the implementation of a "moral machine"; I am interested in the ethical physics. Once you understand the physical ethics you can ask "what tools can I use to make moral machinery more ethical?"
 

Well, I'm not against or averse to your approach, but to me, morals and ethics are as evolved as each other. They can still be defined slightly differently of course, perhaps in the way you suggest, but I don't think this makes one evolved behaviour and the other not.

As to your approach generally, could I call it Goalism? If so (or even if not) I think Angra is right to say that it gets hit by (what he says is not) the naturalistic fallacy or the getting an ought from an is, or something very similar to those. Perhaps one key difference is that, as I read it, you are saying this is your preferred model or one that you think works well, or would work well, while he is saying that (a) his involves independent, real, universal moral facts and (b) this allows him to know what is really, actually right and wrong.

Now, there may be strong Moral Realism and there may be weak Moral Realism, but I think when Moral Realism gets diluted, it starts to look a bit like (incorporates certain aspects of) Moral Relativism, and vice versa. :)

So I tend to suspect morality is mostly or generally an admixture. There may be exceptions. A bit vague, I know, but I'd prefer complicated uncertainty over certainty about such things I think. At least in discussions. When something happens in the real world, decisions may have to be made, if only for pragmatic reasons.
 

Well, I'm not against or averse to your approach, but to me, morals and ethics are as evolved as each other. They can still be defined slightly differently of course, perhaps in the way you suggest, but I don't think this makes one evolved behaviour and the other not.

My point is, I can point to the line the dog runs in the sand. I can say this running is "an evolved behavior", but evolved because of what selection pressure?

Selection pressures are not "evolved"; they are a part of evolution as a systemic process. I think you are making a category error here in saying ethics are "evolved". Ethical implementations are evolved, albeit not through Darwinian mechanisms. Morality on the other hand IS evolved through Darwinian mechanisms. The fact that, when something travels at different speeds in different media, there is a point at which the derivative of the travel-time function crosses zero is not evolved. It is an absolute. It would be true of the inhabitants of planet Vulcan as much as it is true for us. It is as true for robots as it is for us. Ethics is the selection pressure, and morality (and ethical implementation) are the selection that is made.
 

It's true that I often get confused about the difference between morality and ethics, even though in the past I have occasionally googled 'what's the difference' (between them)? And I'm sure there are differing definitions. But you seem to be talking about ethics in a way I'm not familiar with at all.

Here's two of the definitions of ethics that I found, at two fairly well-known philosophy websites:

The field of ethics (or moral philosophy) involves systematizing, defending, and recommending concepts of right and wrong behavior.
https://iep.utm.edu/ethics/

“Ethics” is sometimes taken to refer to a guide to behavior wider in scope than morality, and that an individual adopts as his or her own guide to life, as long as it is a guide that the individual views as a proper guide for others.
https://plato.stanford.edu/entries/morality-definition/

Which is roughly the way I have previously understood the distinction, that thinking something right or wrong comes first (morality) and systemizing or coding the rules about it (ethics) comes after. But you seem to have the two the other way around.
 

I talk about them the way I do, drawing razor sharp distinctions, largely because I find that most philosophers are really muddy about it and don't ask what are even remotely the correct questions.

I first doubt my feelings. I find I have an obligation to doubt my feelings (that everyone does, in fact). The first assumption in honest discovery is "I am wrong". And that includes my moral machinery. I know what I FEEL is wrong, but is it? I know what I FEEL is right, but is it? Why? From whence comes piety?

Regardless of whatever sophistry, conflation, cart/horse, or other of what I suspect is pure bullshit in the field of philosophy, though, the real question is if you can understand what I am trying to say and why I draw these distinctions.

Regardless of what you want to call it, we need separate terms to describe the phenomenon in nature, the selection pressure that makes morality emerge as an evolved machine, to understand the shape of it.

Edit: When you try to understand ethics as a deconstruction of morality rather than a driver of it, you end up in a realm of sophistry and nonsense because evolved behaviors are under no obligation to be consistent, non-contradictory or make any sense at all. They are under no obligation to accurately model the phenomenon that makes a selection pressure. Evolved behaviors don't fundamentally have to make sense entirely, and it is a losing battle to try to derive a logical or consistent system from an approximation. They are merely "good enough for living long enough to pop a baby out".
 
ruby sparks said:
I hardly even know where to start with that. I would refer you back to previous times when I replied regarding those at length, both here and in previous threads, and explained why I think your arguments are flawed, particularly when taking human instincts to represent moral facts.
I explained in those threads why you were mistaken, but my arguments in this thread stand on their own. Do you have a counter argument?

ruby sparks said:
And if what I am saying hits all moralities, then in some ways, yes, that is exactly the point. Though as I said previously, it hits some harder than others, particularly the more dogmatic ones with strong claims to real, universal, independent moral facts that it is claimed are known by the person asserting them, such as yours.
No, that part is false. My moral assessments do not have anything more "dogmatic" about them than those of my opponents. In fact, your objection is that apparently one is not justified in making moral assessments on the basis of information that can be stated in non-moral terms. Well, I showed that every moral assessment made by every human who ever made one does just that (well, you might make an exception for 'Either Bob is a bad person or it is not the case that Bob is a bad person' or things like that, for which we would have to get into the issue of what it is to use some information. But that's a detail).

Let me try again:


Suppose A says B behaved immorally when he did X.
1. If A uses her own moral sense to make the assessment, then her assessment falls within the scope of your 'naturalistic fallacy' or 'is-ought' problem, because it does not logically follow from the fact that A's moral sense gives the verdict 'B's doing X was immoral', that B's doing X was immoral.
2. If A uses the moral sense of other humans, then the same holds.
3. If A derives her judgment from some moral premises P1, ..., Pn, and some other premises Q, then the question is: how does A derive P1, ..., Pn?



As there is no infinite regress in A's argumentation or thought (she is human), at some point A is basing her moral assessments on something that is not a moral premise. That falls afoul of your 'naturalistic fallacy', and taints the rest of the conclusions as they are based on an unwarranted starting point.
The above shows that your objection to my points, if successful, works against every single moral assessment of the form 'B behaved immorally when he did X' ever made by a human being. The same goes for any moral assessment of the form 'A behaved in an unjust manner when he did X', 'B deserves to be punished for doing X', 'B does not deserve to be punished for doing X', and so on.

ruby sparks said:
I'm seeing quite a lot of your 'precision' as pedantry and sophistry that hides an underlying intransigence and presumption.
You are mistaken, for the reasons I have been explaining. But that aside, do you have a counter argument?


ruby sparks said:
And in an odd way, it reminds me in some ways at least of the ways academic and learned theologians go about their business. Endless 'precision', logic and convolution, even citing evidence, in order to arrive at conclusions already assumed. Which is why I think you should be careful about levelling such criticisms at others as if you were immune. There's none so blind as those who think only others can't see.
Do you have an argument that shows any of my alleged errors?
 
Here's even an example of a judgment that people deserve some punishment, in this case mild:

By all means tell us your opinions. Opinions can be useful and reasonable, and I broadly think yours are both. Just ease back on claiming that you know that you are talking about real, independent, universal moral facts and that if anyone disagrees with you, they are mistaken. That's essentially your underlying dogma, or perhaps your ideology, possibly even your secular religion in some ways, imo.

That was an example of a judgment that people deserve some punishment made by a person whose ideology/religion holds that people never deserve punishment for their actions. It is a fact that he said so, of course.
 
I do use it quite a lot, though it happens far more often than I use it.

Maybe it does. Sometimes it may even be you doing it.

Well, I'm not an ideologue or religious. It might happen that I'm angry sometimes (though it has never happened in this thread), but then if I make an error, I correct myself by cooling off.
 
Jarhyn said:
My original ethical derivations came from a pretty radical idea: that there is some principle in nature, some thing derived from the context of our existence in the universe, that caused the emergence of ethics in humans, that our theories and ethics are attempts to approximate in the same way that there, in fact, mere approximations.
Take a look at other monkeys. They seek to punish monkeys that break the rules of monkey behavior (general species-wide rules and local rules to the extent they have them). They can see unfairness and injustice. But other animals have different rules of behavior, even smart animals. Whether one calls what other monkeys do 'morality' is a matter of classification, and of whether you want to include debate in it. But it is of course a continuum in evolutionary terms. We have morality because animals with the kinds of minds we have have certain rules of behavior, which we got from evolution. More precisely, I'm talking about notions such as what is ethically permissible or impermissible. Concepts such as what is good or bad or just or unjust are not exactly the rules of behavior but are closely related to that.



Jarhyn said:
Let's look at Tic Tac Toe. There are things that "are". "Marks are owned by players", "marks are placed in alternating sequence", "marks are placed on a three by three grid", "marks once placed are set". There is a GOAL, "place three marks in a line", and a secondary goal "prevent three marks that are not your own from being placed in a line." From these 'is' things, one can derive an OUGHT wherein every single action made by the player is predetermined. It creates a strategy, and the players who use that strategy will invariably meet the inferior goal and if their opponent makes any mistake at all, they will get their superior goal. This creates an ought: IF your goal is to win (and not lose), you OUGHT apply that strategy as perfectly as possible.
Right, but that is a means-to-ends 'ought'. A moral 'ought' is not like that in that you cannot set the goal. However, arguably (though this is debatable) a moral 'ought' is a particular case of means-to-ends 'ought' in which the end is fixed, and the end is not to behave unethically. Whether that identification is correct as a matter of the meaning of the words is again debatable, but either way, it is true that 'B ought to X' in the moral sense of 'ought' is true if and only if 'It would be unethical of B not to X' is true.

And in any case, the goal is set by our own set of monkey rules. If species#384751837247 is a species of aliens from another galaxy in the observable universe with advanced ships and whatnot, they might have #384751837247-morality, but they will very probably not have morality, given that they evolved in a different environment, including a different social environment. As they had to resolve similar and in many cases almost the same problems as our ancestors, there may well be a considerable similarity between morality and #384751837247-morality, but they are still two different things.


Jarhyn said:
So to me this says that some of the fundamental elements of moral philosophy have to be approached from the examination of goals... hence my metagoal. Because it can't just be about what I want, if I want general strategy.
Right, it's not just about what you want. But it's also not about any general strategy you come up with. It's built-into the human mind.


Jarhyn said:
Eh, I get that it sounds a lot like AM sometimes, but the fact is, ethics didn't come from nowhere.
Clearly. See, we agree about that much.

Jarhyn said:
I don't like to think about things in terms of specific goals. I fully admit that "wants to kill someone" is not even necessarily an "immoral" goal. Context makes a lot of difference insofar as there are ethical strategies and unethical strategies to fulfill the goal. A clearly unethical strategy for that particular goal would be to go out on the street with a gun, say "I'm going to kill someone", and shoot the first person who runs away screaming. An ethical fulfillment would be to put out an ad for "euthanasia services", wait for an applicant, interview them about their motivations and reasons, get a second opinion from a psychologist and doctor, urge them to seek counseling if it is determined that there may be a different resolution, and only after their resolve has been made apparent and that they are acting of sound mind, assisting in their death.

I would say in the first case the goal is unethical, because it is to kill someone for fun or for some other bad reason that does not ethically justify killing. In the second case, the goal is ethically acceptable, because it is to help alleviate the suffering of people who want it (assuming those are the goals; if they are not, then my assessments would change).

But in any case, take a look at what you did. How did you know which behavior was ethically acceptable and which one was not? Well, you contemplated said behaviors, and your moral sense intuitively gave a verdict. And that's the way humans normally do it, and are justified in doing it.

Jarhyn said:
Strategies can, in this context, clearly be ethical or not ethical.
Yes, that is true. Now imagine someone said that euthanasia is always unethical. The person who says that is mistaken, are they not? Hopefully we can agree on that as well.
 
J842P said:
Yes, ethics do come from somewhere. They are evolved behaviors. It is a psychological/biological phenomenon.
Yes, pretty much.

J842P said:
I guess "ethics" is simply the rationalization we layer on top of our moral intuitions.
I do not think that's the most common usage. Generally, 'unethical' is used to mean the same as 'immoral', or at most a subset of immoral behaviors, and 'ethics' on its own has more than one related usage. But those are details, so moving on...


J842P said:
Lately, I've come to believe that a virtue ethics perspective is actually a better overall model for our underlying moral behavior.

I've been playing around with ways to articulate this, but I think essentially, human morality can be approximated to be "those behaviors that would make me trust you if we were stuck out alone in a jungle and you could do whatever you wanted to me with impunity". The behaviors that would make me trust you would be considered "moral", those that wouldn't, "immoral".
Even if doable, the approximation would only work in terms of giving the right results in most realistic situations. But it is not what morality/ethics is about, because what matters when it comes to assessing whether a behavior is morally permissible is what is in the mind of the person who acts in one way or another, not what they do - though, of course, what they do is what we use as a means of assessing what is in their minds.

For example, imagine A and B are in the jungle. A can do whatever he wants to B. But A reckons that gaining B's trust is a better strategy because it increases the chances of being able to con B if they eventually manage to get out of the jungle (which A reckons has a probability around 0.5). So, A behaves in ways that he correctly reckons would make B trust A, in order to be able to con B and steal her money. In particular, of course A lies to B about his intentions. Then A behaves unethically, even though he behaves in a way that makes B trust A.

Granted, it was meant to be an approximation. But I mean that even if it might work as a practical approximation in most cases (how often it fails is a difficult question I think), when one looks under the hood so to speak, things are very different.
 
I have targeted nukes at my own planet. And joked about it. If it were up to me, i would want to prevent nuclear war. But i still joked about it.

I worked a suicide hotline. I did this exactly because i wanted to stop suicides. But among the hotline workers, we joked about some of the calls.

The missile tech who volunteered on an ambulance had anecdotes that made MTs throw up. Not me, but i was working the hotline at the time. We used to just clear out the break room, swapping stories. But finding humor in a subject is a separate subject from the reality.

Given the opportunity, or authority, or sufficient tasers, i would stop any rape, even in prison, even if the victim was a rapist, a child molester, or one of the Trumps.

However, the idea of someone like Bannon, or Stone, or Trump being in prison, and facing the threat of rape amuses me. Not because of any thoughts of justice, karma, or retribution.
I like the idea of some evil bastard facing the fact that he's fallen so low, that no matter his money, his clout, his political savvy, or his friends, he's now a goldfish in a shark tank. And he done it to his own greedy self.

And that's funny.

Yeah, I just can’t get to any place close to rape as being funny. Or just or acceptable. Rape should not be part of any criminal sentence.

What if it is an ape? Apologies for that awful capture. Best I could find.

[YOUTUBE]https://www.youtube.com/watch?v=x07BKBdjIak[/YOUTUBE]

Needs a trigger warning.
 
Look at the Norse model. Incarceration there is potentially indefinite. I don't see anyone calling Norwegians monsters. Maybe reevaluate your position.
I don't see the Norwegians actually keeping people in prison longer than they deserve. The people they've subjected to indefinite preventive detention are generally murderers and rapists. Whether Norwegians are monsters is not determined by how they choose to label their penal practices.

At any rate I answered your inane questions in my discussions targeted at RS, with respect to what is the best outcome. Perhaps you should go back and actually read some of those posts.
I actually read them; the reason you're accusing me of not actually reading them is because you have no moral compunctions about libeling your outgroup.

And no, those discussions didn't answer my inane questions -- you did not supply any reason to think your selection of ethical premises isn't based on "aroused emotional drive". Here are some selections from your posts targeted at RS:


So, there's been a lot of discussion about is/ought. There is, in my estimation one way to get there: adding goals.

If I AM on one side of a wall, AND it IS my goal to use the least energy to reach the other side, ... I ought do that thing (it is the solution to the problem).
That's an equivocation fallacy. "Ought" has two meanings -- instrumental and moral -- corresponding to what Kant called hypothetical imperatives and categorical imperatives. Adding goals gets you from an "is" to an instrumental "ought", not to a moral "ought".

If the goal is "achieving personal goals", this creates a metagoal: survive long enough, in a state capable of achieving those goals. It is a goal we must accept for everyone to the extent we accept it for ourselves.
That's a non sequitur. And it should be painfully obvious to you that it's a non sequitur since if you lock someone up indefinitely in preventive detention -- not a state capable of achieving his personal goals -- you aren't accepting the metagoal for him.

Really, you have to ask, is it a valid goal to pursue the least negative outcome for yourself that makes your own behavior non-destructive with respect to the necessary meta-goals for general goal seeking? If this is true, then it cannot possibly be true that you have a right to impose more harm than is absolutely necessary (punishment, infliction of suffering, etc), because of the requirement for non-contradiction.
There's no contradiction between those. Have you been influenced by Randroids? Those guys imagine whatever they dislike violates the non-contradiction principle.

My thought is that it is absolutely NOT ok to go ham on someone once they've been bad. It's one of the most basic tests of an ethical framework: does it permit doing unto others that which you would not have done unto you?
Jesus said it; you believe it; that settles it? The Golden Rule is a rough rule of thumb that is often helpful, but it leads to absurdity in some situations and it's always ambiguous, due to the inherent ambiguity in the phrase "that which". You wouldn't want to be sued, would you? Well then you should never sue anyone. You wouldn't want to be imprisoned, would you? Well then you should never imprison anyone. Sure, you can game that problem away by redefining "that which" you do to be different from "that which" you don't want done to you; you can always claim what you don't want is to be imprisoned "unnecessarily". But the same game is available to retributionists; we don't want to be imprisoned "undeservedly".

I mean, speaking in terms of a specific goal for the derivation of general "oughts" is a losing battle. There is no specific goal. There is the possibility, though, of discussing a meta-goal to derive general oughts.

To me, that goal is "to have all that is necessary to do X" where X does not deprive anyone else of the same.
That's a special-pleading fallacy -- you sound like that philosopher who spent the first half of his book proving all moral claims are errors and the second half making moral claims. "To have all that is necessary to do X where X does not deprive anyone else of the same." is a specific goal. Just calling it a "meta-goal" doesn't make your attempt to derive general "oughts" from it a winning battle.

Of course we live in a probabilistic universe, and in a universe where there are zero-sum situations, so we need to account for these two things: by having a common agreement and expectation of what risks are to be accepted, and a mechanism to determine disposition of limited resources.
But we don't have a common agreement. People disagree. People are going to disagree. Why would we agree, when we have incompatible beliefs, goals and emotional drives? You haven't even offered us a reason to agree, just a bunch of fallacies.

(And even if you could construct a common agreement that was a genuine contract -- a rule people actually agree to, as opposed to the rules social contract theorists keep agreeing to on other people's behalf -- it wouldn't deliver a rational ethical foundation. All it would do is help us feel self-righteous about acting on our aroused emotional drive to force others to keep their promises. Social contract theory is logically incapable of delivering that which it exists for the purpose of delivering.)

I can easily identify that if I wish to have my meta-goal stay as intact as possible, I must respect the meta-goals of others as much as possible. Punishment for the sake of vengeance rather than only as a last resort in behavior modification fits right into "unnecessary", almost trivially so.
Some people's meta-goal is to have justice done. If you abolish retributive punishment you are trivially not respecting those people's meta-goal as much as possible.

That'll do as a sampling of your earlier attempts; if you think one of the arguments I skipped was better than those, feel free to point it out. Moving on...

but the gist of it is that there is a metagoal that can be defined such that "maximizing the ability to pursue the goals you wish to pursue" wherein goals that are unilaterally/mutually exclusive get rejected (ie "Gary wants to kill Bob; Bob has goals that require being alive", Gary's goal is invalidated), where a certain probability of damage at a certain extent to the metagoal is deemed acceptable through social consensus, and where the disposition of limited resources is agreed on through some mechanism of allocation.
Uh huh. So Gary wants to go to synagogue on Saturday and work on Sunday; Bob has goals that require everyone to work on Saturday and go to church on Sunday. Through social consensus it's agreed to damage Gary's metagoal of satisfying his own religious obligations, because his unilateral goal is mutually exclusive with the deemed-acceptable metagoal of the social consensus, which is to have as many people as possible be saved through knowing Jesus, so Gary's goal gets rejected.

Of course you wouldn't rule that way -- you'd no doubt say it's Bob's goal that should get rejected, and you'd no doubt have an excellent argument to that effect -- but that's immaterial, because as soon as you uttered the magical phrase "social consensus", that meant it's not up to you and your arguments to determine whose right to an unbroken nose ends at whose swinging fist.

In this way it is not about what I, personally, want. Instead it is about determining the limit of which of my wants are justifiable generally,
And you appear to be trying to establish "justifiable" on the basis of symmetry -- Gary and Bob can both be alive; they can't each be alive while the other's dead. Your "It is a goal we must accept for everyone to the extent we accept it for ourselves.", your "X does not deprive anyone else of the same" and your invocation of the Golden Rule likewise are appeals to symmetry. But you haven't shown what's good about symmetry; you're just taking that for granted. Clearly what's going on here is you picked symmetry as your ethical premise, due to an aroused emotional drive. You transparently have a symmetry boner.

Punishing is by definition harming the goals of others, as a goal in and of itself, agnostic to other effects. It is trivially evil.
Stomping the bread into the ground so you could both die is by definition harming the goals of others, as a goal in and of itself. What makes you think you aren't trivially evil? Wait, don't tell me, I know the answer to this one...

...everyone is the hero of their own story and people will jump through all kinds of hoops to prove it to themselves.
 
.... I find that most philosophers....don't ask what are even remotely the correct questions.

That's quite a claim.

I first doubt my feelings. I find I have an obligation to doubt my feelings (that everyone does, in fact). The first assumption in honest discovery is "I am wrong". And that includes my moral machinery. I know what I FEEL is wrong, but is it? I know what I FEEL is right, but is it? Why? From whence comes piety?

Regardless of whatever sophistry, conflation, cart/horse, or other of what I suspect is pure bullshit in the field of philosophy, though, the real question is if you can understand what I am trying to say and why I draw these distinctions.

I don't understand what you're trying to say.

Regardless of what you want to call it, we need separate terms to describe the phenomenon in nature, the selection pressure that makes morality emerge as an evolved machine, to understand the shape of it.

Sure. We could call them natural selection pressures. Evolutionary biologists do it all the time.

Edit: When you try to understand ethics as a deconstruction of morality.....

Who tried to do that?

Wanting to hit someone and thinking it either the right thing to do or at least permissible would be about morality. Ethics would be more like the rules of boxing. The rules of boxing are not a deconstruction of wanting to hit someone or thinking it right or permissible, using any definition of deconstruction that I am familiar with.

.... rather than a driver of it, you end up in a realm of sophistry and nonsense because evolved behaviors are under no obligation to be consistent, non-contradictory or make any sense at all. They are under no obligation to accurately model the phenomenon that makes a selection pressure. Evolved behaviors don't fundamentally have to make sense entirely, and it is a losing battle to try to derive a logical or consistent system from an approximation. They are merely "good enough for living long enough to pop a baby out".

I don't understand that.
 