
Morality/ethics: instinct vs ideology

Underseer (Contributor) · Joined: May 29, 2003 · Messages: 11,413 · Location: Chicago suburbs · Basic Beliefs: atheism, resistentialism
So... I tried starting a conversation about the Euthyphro dilemma, and as often happens when I get into a discussion about ethics, it got sidetracked because I'm apparently one of the few who believe that ideology and higher-order thought play a relatively small role in most of our moral decisions.

I am not very well-versed in neuroscience, animal behavior, etc., so hopefully someone here is more versed in the recent research in these fields and can offer opinions of greater value.


Dawkins and the Trolley Problem
In The God Delusion, Dawkins challenges the moral claims of theists by talking about how much of our morality is simply the product of instinct. That was really the first time I had even thought about the role of instinct in moral choices, and it is probably where I started forming my opinion.

Obviously, Dawkins tends to favor instinct as a source of our moral choices because that is a big part of his main contribution to biology: getting people to think about evolution from the point of view of genes rather than individual organisms. In other words, to think about how evolution acts on genes rather than on individuals or groups.

This solved the problem of where the behavior of social species came from. If you think about it in terms of individuals and the genes of individuals, becoming a social species doesn't make much sense. One way or another, most or all of the individuals in a group must do work and take on risks that offer no direct benefit to themselves, but instead benefit another individual in the group or the group as a whole. If you look at it from the point of view of the individual, it is hard to see how evolution could possibly produce this behavior. However, in most social species, the social group is closely related, so an individual working to help other individuals in the group, or the group itself, is helping individuals with very similar genes. In other words, individuals are free to be selfless because it is the genes that are being selfish, not the individual organism.
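
(A side note: the usual formalization of this gene's-eye argument is Hamilton's rule, which says helping can spread when relatedness times benefit exceeds cost, i.e. r x b > c. I didn't spell this out above, and the numbers below are made up purely to illustrate the logic, not taken from any study.)

# Hamilton's rule sketch: an allele for helping can spread when r * b > c,
# where r = genetic relatedness to the recipient,
#       b = reproductive benefit to the recipient (extra offspring),
#       c = reproductive cost to the helper (offspring forgone).
# All numbers below are invented for illustration.

def helping_favored(r, b, c):
    """Return True if Hamilton's rule says the helping behavior can spread."""
    return r * b > c

examples = [
    ("full sibling", 0.50,  3.0, 1.0),
    ("cousin",       0.125, 3.0, 1.0),
    ("unrelated",    0.0,   3.0, 1.0),
]

for who, r, b, c in examples:
    print(f"{who:12s}  r*b = {r*b:.3f} vs c = {c:.1f}  ->  favored: {helping_favored(r, b, c)}")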

Dawkins bolstered his argument that behavior conducive to the group's survival comes from instinct, even in humans, by referencing a study on the Trolley Problem:

http://www.scientificamerican.com/article/famous-trolley-problem-exposes-moral-instincts/

Sorry, but I have no idea if this article references the same study Dawkins referenced in The God Delusion. The study Dawkins referenced was performed by a scientist who later lost his career because he got caught engaging in fraud. From what I've managed to read on the matter, the fraud charges did not extend to the specific study Dawkins cited in his book.

The study Dawkins cited had "translated" the Trolley Problem into something that made sense to people from very different cultures. For instance, someone from a primitive stone-age tribe is not going to have a very good understanding of what a trolley is, so when you explain the Trolley Problem to him, your results could be skewed by the individual's attempt to understand a trolley as a purely abstract and unfamiliar concept. So for a primitive bushman from Africa, the story might involve a charging rhino instead of a trolley to get a response more in line with the individual's instincts.

The results showed not only similar responses, but roughly the same percentages of people answering one way as opposed to the other.

Dawkins argued that this shows that for these specific ethical questions at least, the decision had to be a matter of instinct more than ideology because presumably an African bushman would have a wildly different ideology from some yuppie from a big city in a developed nation.
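
(As an aside, "the same percentages" is the kind of claim that can be checked statistically rather than eyeballed. Here's a rough sketch of how one might compare two groups' answers to the switch version of the problem; the counts are invented for illustration and are not from the study Dawkins cited.)

# Toy two-proportion z-test: do two cultural groups answer the switch
# version of the trolley problem in the same proportion?
# All counts are invented for illustration only.
import math

def two_proportion_z(yes_a, n_a, yes_b, n_b):
    """Return (z, two-sided p-value) for the difference between two proportions."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    pooled = (yes_a + yes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return z, p_value

# e.g. 90/100 urban respondents vs 87/100 tribal respondents say "throw the switch"
z, p = two_proportion_z(90, 100, 87, 100)
print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value is consistent with "same percentages"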


Making Decisions without Realizing It
I don't remember if I encountered this in The God Delusion among Dawkins' arguments for instincts as the source of our morality, or if I just stumbled upon it elsewhere and connected it to Dawkins' arguments myself.

http://www.nature.com/news/2008/080411/full/news.2008.751.html

Long story short: researchers were able to anticipate what decision a person would make up to ten seconds before the person was even aware they had made a decision at all. To me, this suggests that at least some component of our decision-making is the result of a process other than carefully-considered conscious thought in the prefrontal cortex, and the most likely culprit would be instinct. If instinct has a strong influence on our decisions in general, then it must also have a strong influence on our moral decisions.

Caveat: I have no idea what percentage of decisions nor what percentage of test subjects fell into this category.


Animal Behavior Science and Ethics
A variety of ethical behaviors that we once regarded as the exclusive product of our big human brains turn out to be things that are observed among other social species. A notable example of this would be the concept of fairness:

http://www.psychologytoday.com/blog...s-instinct-science-human-nature-and-sociality

Other social species such as dogs and apes will keep track of which individuals contribute more to the group and which contribute less. These individuals will then receive more or less help accordingly, showing that other animals have a concept of fairness and enforce it within the group. Since animals lack the means to communicate abstract ideas like ideology, this cannot be the result of ideology as we understand the concept. The most obvious explanation is that these other social species must possess a concept of fairness purely as a result of instinct. If dogs and apes can have an instinct for things like fairness, then why not humans as well? Why would we exhibit the same behaviors but derive them from a completely different source than other social mammals do?

Obviously, there is more that animal behavior science can tell us about human morality and instinct vs ideology. If you're interested in the topic, there's more stuff here:

https://www.youtube.com/results?search_query=frans+de+waal


Evolutionary Biology and Evolutionary Psychology
I have to presume that each of these fields has something to say on this matter, but, uh, I have no idea what that would be. Hopefully, someone reading this can contribute. The question of how social species could possibly evolve the complex behaviors we see vexed biologists for a long time, and wasn't really solved until relatively recently, when Dawkins introduced the gene-centric view of evolution. Presumably, research has been done in each of these fields that attempts to solve this particular riddle, and thus both fields probably have something relevant to say on this topic. Damned if I know what that would be, though.


Wild Unsubstantiated Speculation
Being a social species and being a solitary species are both viable survival strategies. Individuals in a solitary species don't have to waste any time or effort benefiting others and can worry only about helping themselves. Social species can pool their resources in surprising ways that greatly enhance an individual's chances of passing on its own genes, or at least genes that are very similar, but in exchange, individuals must work for the benefit of others, not just themselves.

In order for being a social species to work as a survival strategy, each species (or perhaps even each social group) has to have some kind of standard of behavior. The actual standard will necessarily change from species to species because they live in different ecological niches under different circumstances. What enhances the survival of bees might not be helpful to a tribe of humans. Since animals other than humans lack the ability to communicate ideology to each other, in every social species other than humans these standards of behavior must be, for the most part, the result of instinct (although more intelligent social animals clearly have a "culture" that is taught to younger generations, and one could certainly argue that this can be compared, however vaguely, to human ideology).

I would argue that this instinctive standard of behavior forms the core of human ethics, or at least the ethics that are common to most cultures. Because we ourselves are human, when we look at other social groups we naturally focus on the ethical decisions that are different and may not fully appreciate just how much of our ethics are similar. While our sentience and culture clearly provide us with the means of violating our own instincts to a far greater extent than other species, it doesn't make sense to me that such instincts would exist in every single social species on this planet except for humans. Surely we must also have strong instincts in this area.

Think for a moment about sexual suicide in some species of animal:

http://waynesword.palomar.edu/ww0701.htm

In some species, the male either dies as a result of mating or is killed, and sometimes eaten, by the female or other members of the social group after mating.

Most humans probably have the same reaction to this stuff that I do: a deep revulsion that certainly feels a lot like instincts exerting themselves on my thoughts. For me, this is particularly repulsive in species like ants or bees because like humans, their offspring require an awful lot of work to raise, feed, protect, and nurture. Of course, it doesn't matter to a bee if the male dies as a result of mating or is killed after mating because a fertile female has an army of infertile females to raise her young for her. The male is simply not needed after mating takes place.

This behavior is repulsive to us because a fertile human female doesn't have an army of infertile females to raise her young for her, yet her offspring still require a lot of work to raise. Other apes arrange their societies in the same harem arrangement that most mammals follow, so fertile chimp females can distribute child-rearing tasks to the other females in the social group, but human females (at least those not in polygamous marriages) don't have that option either. The most obvious source of help for a human female is the male she mated with, which explains why sexual suicide revolts us so. Such behavior may not affect the survivability of praying mantises or bees, but it sure as heck would have a very large negative effect on the survivability of human offspring. Thus, our instincts fill us with revulsion just from contemplating the idea.

This not only points towards instinct as a source of ethics, but provides a useful definition for morality: whatever improves the survival chances of the maximum number of offspring for the species (which generally translates to maximizing the well-being of as many humans as possible, which is Sam Harris' working definition). It's a crude definition perhaps, but probably the definition that has guided our behavior since before we were able to talk about such things with each other.

Anyway, what do you think?
 
I believe that our decisions are not always our own and most people act irrationally.

I think many of our morals ARE a result of being social animals, but I also think that as population grows, the pressure of society on the individual decreases, paradoxically. We no longer depend on a tribe; most of us no longer depend on a village, a town, or even our family. Therefore, censure by society is less of a threat to our survival or the survival of our offspring.

Meanwhile, from ancient times, you have the seven deadly sins: wrath, greed, sloth, pride, lust, envy, and gluttony. If you look at these things, they are all irrational behaviors, some even pathological. I think our evolutionary system of morality is just getting more and more disconnected from the reality of how humans actually live now, especially in first world countries. I draw my new line at rational/irrational.
 
Now, I took time off to think about your specific example of females killing males after reproduction. After hours of examination, I find I have no instinctive repulsion to that particular act. I seem to have instinctive repulsions that center more around children. Curiously, I find that while I have strong repulsions against biological parents torturing or deliberately starving their own children, or people killing and mistreating pre-pubescent children, I do not have the same high level of instinctive repulsion against parents who kill their own children outright, or against people killing teenagers unless it involves sexual predation.

To clarify, it's not that I don't find the lesser acts reprehensible and "evil", but they don't kick me in the gut as hard as what you describe feeling about post-mating predation on males. And I think that is what you are looking for: those "gut" level revulsion reactions.

And related, my ex and I discussed gay rights at length. And while he supports the right of two men to marry or do whatever intellectually, he swears the idea of male/male sex gives him a gut level revulsion.

Anyway, as to the part:

This not only points towards instinct as a source of ethics, but provides a useful definition for morality: whatever improves the survival chances of the maximum number of offspring for the species (which generally translates to maximizing the well-being of as many humans as possible, which is Sam Harris' working definition). It's a crude definition perhaps, but probably the definition that has guided our behavior since before we were able to talk about such things with each other.

I can't go with "improves survival chances of maximum number of offspring". Because that devolves really fast into "whose" offspring, and maximizing the survival chances of the offspring of Tribe A may mean the death of everyone in Tribe B. And unfortunately, tribalism, territorial imperatives, and xenophobia may also be a part of our evolutionary package.

I like Sam Harris's working definition more. Well-being is more of a critical issue at this point in our history than just maximizing offspring. And while instinct and evolution play an important role in the development of our morality and ethics, we can't just use the excuse "well, we evolved this way" to support moral policies and ethics that may have become destructive to the species as a whole.
 
Morals are all about how we treat each other. As such, it's all about how we get along in groups. Because absent groups, there's no one to treat morally or immorally. So yes, morality is about group behaviour.

And yes, it's likely that certain moral points are in part instinctive. Most of our behaviour is at least in part instinctive, as a shortcut for learning things that will keep us alive and well.

So instinct is all about rules of thumb to keep us alive. Close your eye if something comes near it. Don't eat food that is rotten, spoiled, vomited or contaminated by faeces. Pay attention to things moving in the dark. Fear spiders. And so on. All of these produce gut-level reactions in people.

The same principle applies to rules for groups. If you have a rule that says don't kill people, then everyone is safer. Don't have sex with siblings. Try to make choices where more people are kept alive. Protect children.

Since these are situations that produce gut-level reactions, there's a good case for suggesting that the reactions are based on instinct. In fact, it would be surprising if they weren't, since it's hard to imagine a tribal group operating without them. Groups of animals show adherence to many of the same rules.

But does it follow from this that all of our behaviour is instinct? It doesn't. Because the same experimental techniques that show that these basic moral values are constant across different cultures also show that the synthesis of these values does vary across cultures. Where morality gets interesting is not in whether killing is good or bad, since that doesn't vary very much, but in how these values are combined.

The Trolley Problem is a simple combination of values, where the moral value of not killing people is weighed against the moral value of preserving life. Is it worth killing someone if more people live as a result? Or to put it another way, which is the stronger of these two values? In this case the answer is fairly constant, but in other famous examples, the differences are strong and marked. For example, what happens when you pit a desire to cooperate with the law against loyalty to a friend?

http://www.jasonpatent.com/2009/08/13/did-the-pedestrian-die/


In short, I'm broadly happy with the idea that we have instinctual moral values, but that doesn't make our moral judgments purely instinct. How we assemble and reconcile values (moral or otherwise) is still a decision to be made, and people can and do override their instincts in making actual choices. Instead, moral values act as a 'default' choice. All else being equal, you don't kill people. But in practice, other concerns intervene.

This pattern of a 'default' decision that gets modified later continues in the much discussed, much misquoted line of experiments after Libet, an example of which you quoted in the 'Making decisions without realising it' section. The key point is that subjects know they're making a decision. In a typical case they've been making the same sort of decisions over and over again for at least an hour. So they know what the choices are, and know they are going to make a choice. The experiment simply compares the time at which the experimenters can detect what the choice will be with the time at which subjects report they've made up their mind and reached a decision. You don't need a brain scanner for this; you can get similar results just by watching someone's eye movements. The eyes drift up to the left for a right-hemisphere decision and to the right for a left-hemisphere decision. This corresponds to actions by the right or left hands, or, if you're feeling esoteric, to linguistic versus numerical tasks.

The interesting part of the experiment, which does require a scanner, is in how far back they can trace these indicators of what the decision will be. Modern technology has increased the time to 10 seconds before the decision is made (Eysenck reported over 11 seconds). But this huge precursor time suggests not that the decision is 'made before we realise it', but rather that there is a 'default' decision, which then gets carried through because the subject doesn't care about the outcome enough to change it.


So yes, we do have instincts loosely based around creating a 'successful' group that allows its members to survive and prosper. But we also have decisions, ethical systems, and a great deal of social and cultural structures around these too. Granting a role to instinct doesn't make other elements less important.
 
Animal behaviour is driven by self-interest. When it is in our own self-interest to be moral, our instinct of self-interest will cause us to be moral. Where higher order ideology comes into play is when it is not in our own interest to be moral.

Consider a mother who doesn't love or want her child. She could do some pretty horrifying stuff, but due to realities of society it's not in her own interest to do so, so moral behaviour results. That's instinct.

Now consider a student who faces the difference between graduating or failing out of his program based on whether or not he cheats on a test, and knows he won't be caught. Instinct will again drive him to cheat, because cheating is in his own interest, but higher order thought might lead him to act morally, against his own interests.

To say that all morality is a result of instinct, or that all morality is a result of higher order thought, seems wrong to me. Rather, some morality is a result of instinct and some is a result of higher order thinking. Usually the morality arising from higher order thinking is more likely to be altruistic.
 
Instinct+Learning.

Darwin, Freud, Bandura, Ellis, Beck, Seligman, and everybody else who doesn't want to be laughed at and "kindly" escorted out of the room at his/her thesis defense state this clearly and convincingly.

You are never learned out of your instincts. You learn with them. Every decision you make, however influenced by your culture and experiences, you ultimately make for and with your appetites.
 
[...]

The Trolley Problem is a simple combination of values, where the moral value of not killing people is weighed against the moral value of preserving life. Is it worth killing someone if more people live as a result?[...]

No. No it's not. The Trolley Problem reveals something very odd about human morality.

In one scenario, you can throw a switch which causes the trolley to run over one guy, but avoid running over 5 people. In another scenario, you are on a bridge above the tracks, and have the option of saving the 5 people by pushing the one guy to his death.

In both cases, the evaluation should be the same: it's better to cause the death of one to prevent the deaths of five, and yet fewer people are willing to kill the one guy in the latter scenario than in the first, despite the fact that both have the same outcome. Somehow pushing the guy off the bridge seems worse to us than just throwing a switch. What is more strange is that the percentages of people who answer each way are about the same even when the respondents come from wildly different cultures (e.g. big city yuppie versus stone age tribesman).

It is strange that most of us would react that way. It is even stranger that we all react the same way in the same percentages regardless of upbringing, culture, etc., or any of the other things we assume would change ideological perspectives and higher order thoughts.

- - - Updated - - -

If I were to posit a guess about why so many of us have more trouble pushing the guy than simply throwing the switch, it would be that pushing involves more intimate interaction with our intended victim, which is a violation of the social/emotional bonding that makes our societies possible.
 
In both cases, the evaluation should be the same:
That's your moral judgement.


both have the same outcome.
That's not the case as you appear to recognise in your "updated" comment:

If I were to posit a guess about why so many of us have more trouble pushing the guy than simply throwing the switch, it would be that pushing involves more intimate interaction with our intended victim, which is a violation of the social/emotional bonding that makes our societies possible.
There'll be many reasons why people don't see the two situations as morally identical. For me, the essential moral difference lies in the status/expectation of the 'victim' in each case and how this relates to the kind of world in which I want to live.

In one situation we are asked to consider the possibility that the life of anyone at any time could be sacrificed without their consent for a 'greater good'. In the other, we're asked to consider a world in which incautious individuals who find themselves in the middle of a railroad track could possibly be killed by an unexpected trolley.

Given just these two choices I'd rather not live in the world described by the first scenario so I'd reluctantly pull the lever. That's my moral judgement.
 
No. No it's not. The Trolley Problem reveals something very odd about human morality.

In one scenario, you can throw a switch which causes the trolley to run over one guy, but avoid running over 5 people. In another scenario, you are on a bridge above the tracks, and have the option of saving the 5 people by pushing the one guy to his death.

In both cases, the evaluation should be the same: it's better to cause the death of one to prevent the deaths of five, and yet fewer people are willing to kill the one guy in the latter scenario than in the first, despite the fact that both have the same outcome. Somehow pushing the guy off the bridge seems worse to us than just throwing a switch. What is more strange is that the percentages of people who answer each way are about the same even when the respondents come from wildly different cultures (e.g. big city yuppie versus stone age tribesman).

The answer shouldn't be the same, because there are two conflicting values, and you're only acknowledging one. The first is that fewer people should die, and the second is that you, personally, should not kill people. The reason the answers differ is that in the first scenario you're redirecting a tragic accident so that it kills fewer people, and in the second you're killing someone with the intention of saving others as a result. The difference is that in the second scenario you're actively killing someone, and it shows that people are not generally utilitarians - they don't base their morality simply on the immediate outcome - but rather on a more complicated web of connecting factors.

Now you can argue that the long-term outcome is better served by not having a society in which people push each other off bridges. That while more people die in the short run, the long-term cost of legitimising murder makes the society less stable and less successful. And certainly I'd rather live in a society with a high accident rate than in one with a high murder-of-convenience rate, even if the overall death rate is slightly higher in the latter.

Which leads on to all sorts of interesting moral issues, such as whether torture is justified to save lives. The answer, at least in western society, used to be an outright no, but the 'yes' answer has recently become more popular, particularly in the US. It may be that the people trusted with making these decisions are the same people who would push a guy off a bridge to save lives.
 
The trolley problem reveals more about how we think of causality than about moral judgments per se. In both scenarios a person commits an act that is (and that he knows is) certain to kill a person. Objectively and rationally, both acts are equal in terms of causal sufficiency and necessity in leading to another's death. The difference lies only in that the "lever" scenario has more transparent (but not really more actual) mediating causal steps in the process leading to the death. The lever switches the rails, the train changes direction, and then the person gets run over, versus a simplistic "you push them in front of the train and it hits them" (which ignores countless less perceptible steps in the physics of the situation).
I think it is more about people treating mediated causality as less causal, even when the mediating steps are fully determined and non-probabilistic. We then make judgments of morality based upon perceived causality, and thus view the act with more obvious mediating processes as less immoral.

Treating the scenarios as morally different is like saying it is less immoral to kill a person by pushing the first of giant dominoes so the last one crushes them, than it is to push a single domino on top of them.

In the trolley scenario, I don't think the person willing to pull the switch but not push the person is any more moral than the person willing to kill the person in both scenarios, they are just more ignorant of causality and treat mediation as though it reduces causal dependency, even when it doesn't.
 
Objectively and rationally, both acts are equal in terms of causal sufficiency and necessity in leading to another's death.
Choosing to look at the problem as simply one of selecting the most efficient and effective way of reducing the number of deaths and ignoring the societal implications of universalising the principle that people can be randomly sacrificed for the greater good is a moral judgement in itself.

The two scenarios are not identical. Whether or not they're morally identical is a matter of moral judgement.
In the trolley scenario, I don't think the person willing to pull the switch but not push the person is any more moral than the person willing to kill the person in both scenarios, they are just more ignorant of causality and treat mediation as though it reduces causal dependency, even when it doesn't.
Some may be "ignorant" but you shouldn't dismiss the possibility that others may have seen a moral dimension to the problem that you may have missed or that, in your personal opinion, you simply don't consider to be morally significant.
 
I'm entirely happy with the idea that moving someone into the path of a train is not morally equivalent to moving the path of the train to intersect them rather than someone else. The idea that they are equivalent is based on the idea that all that matters is how many people die. I don't agree that that is the case.
 
The answer shouldn't be the same, because there are two conflicting values, and you're only acknowledging one. The first is that fewer people should die, and the second is that you, personally, should not kill people. [...]

You are causing the death of that individual in both scenarios. The main difference is that in the second scenario, your victim can look you in the eye while you kill him. Whether you kill him by throwing a switch or pushing him off a bridge, you are still the cause of his death.
 
You are causing the death of that individual in both scenarios. The main difference is that in the second scenario, your victim can look you in the eye while you kill him. Whether you kill him by throwing a switch or pushing him off a bridge, you are still the cause of his death.

Yes, I agree. What's your point? Do you believe that it is whether the death is caused by me that is the critical factor? Do you understand that that is a moral judgement you are making?
 
Choosing to look at the problem as simply one of selecting the most efficient and effective way of reducing the number of deaths and ignoring the societal implications of universalising the principle that people can be randomly sacrificed for the greater good is a moral judgement in itself.

I am doing nothing that you describe. I fully recognize that sacrificing people for the greater good is a moral issue. But that is equally true in both scenarios. Whether one kills the single person in either scenario or chooses not to act is a moral issue. I am referring to treating the two scenarios as morally different from each other. The person pulling the lever or pushing the person onto the tracks is making the same intentional choice to act to cause the certain death of someone in order to save others. Nothing within any moral system, religious or secular, provides any basis for treating these scenarios as morally distinct.

The two scenarios are not identical. Whether or not they're morally identical is a matter of moral judgement.
Some may be "ignorant" but you shouldn't dismiss the possibility that others may have seen a moral dimension to the problem that you may have missed or that, in your personal opinion, you simply don't consider to be morally significant.

Again, no moral system has ever posited any basis for making a moral distinction in two such scenarios. Moral judgments still covary with variance in perceived features of the situation. There is nothing objectively different about the situations other than the mediated (but still fully determined) nature of the causal impact of the act on the death. No one would propose that such a meaningless difference be any kind of criterion for moral standards, and to date no one has ever offered any coherent reason why there is a morally significant difference between the scenarios. There is merely a vague, unjustified feeling that one is more moral than the other. Thus, the most likely explanation is that those who feel differently about the scenarios are just having a vague, biased, emotive response to a subjective sense of reduced causality, despite there being no objective difference in causal determinism, just because there seem to be more events that unfold in between the act and the death. It isn't much different from the fact that most people feel less threatened by people with facial features more similar to their own, and thus would likely rate two identical acts as morally different, depending entirely upon how similar in facial features to themselves the people involved were.
 
Nothing within any moral system, religious or secular, provides any basis for treating these scenarios as morally distinct.
I'm not sure what you mean by "moral system" in this context. Most people just have moral opinions (they don't have formal 'moral systems') and many people are of the opinion that the two scenarios are morally distinct.

Perhaps you haven't heard of the doctrine of double effect, which states that you may take an action that has bad side effects, but deliberately intending harm (even for good causes) is wrong? It's a principle that is sometimes used as just one of the many justifications for morally differentiating the two scenarios (it's mentioned in the Trolley Problem article).

I'm not suggesting for a moment that you'd necessarily find any of these arguments compelling - I'm just pointing out to you that such arguments really do exist. You shouldn't assume that just because you haven't heard these arguments that there can be no moral justification for making the distinction.

There is nothing objectively different about the situations other than the mediated (but still fully determined) nature of the causal impact of the act on the death.
This is clearly not the case. The two scenarios indisputably describe two slightly different series of events. What I assume you meant to say is that in your opinion there is no morally significant difference between the two scenarios.
 
Yes, I agree. What's your point? Do you believe that it is whether the death is caused by me that is the critical factor? Do you understand that that is a moral judgement you are making?

Sorry, I guess I don't understand what you were saying about the Trolley Problem, then.
 
Again, no moral system has ever posited any basis for making a moral distinction in two such scenarios.

Psychologists studying this problem have come up with several. One of the more enduring is the idea of remoteness, whereby causal proximity to the death influences the judgement. In this case, redirecting an accident onto a fresh victim has less proximity to the victim than pushing the victim yourself.
 
Nothing within any moral system, religious or secular, provides any basis for treating these scenarios as morally distinct.
...
Again, no moral system has ever posited any basis for making a moral distinction in two such scenarios.

Well, there is always the Categorical Imperative, from Kant's Groundwork of the Metaphysics of Morals.

"One cannot, on Kant's account, ever suppose a right to treat another person as a mere means to an end."

When you push a guy off the bridge to stop the trolley, it's his body mass that stops the trolley, which means you're using him as a mere means to an end. When you throw the switch, you save the lives whether he's there or not. So his presence and death are incidental to your solution to the problem of saving the five lives, which means you are not using him as a mere means to an end.
 
Well, there is always the Categorical Imperative, from Kant's Groundwork of the Metaphysics of Morals.

"One cannot, on Kant's account, ever suppose a right to treat another person as a mere means to an end."

When you push a guy off the bridge to stop the trolley, it's his body mass that stops the trolley, which means you're using him as a mere means to an end. When you throw the switch, you save the lives whether he's there or not. So his presence and death are incidental to your solution to the problem of saving the five lives, which means you are not using him as a mere means to an end.

So how would that work if the chosen victim were a relative versus someone from another country? Are we now going to add categories for the imperative? What if you didn't know he was a relative?
 