According to commonsense morality and many non-utilitarian theories, there are certain moral constraints you should never violate. These constraints are expressed in moral rules like “do not lie!” and “do not kill!”, and they are intuitively very plausible. This presents a problem for utilitarianism: the theory not only specifies which outcomes are best—those with the highest overall level of well-being—but also says that it would be wrong to fail to realize these outcomes.
Sometimes, realizing the best outcome requires violating moral constraints against harming others—that is, violating their rights. There is no reason to expect commonsense moral rules to always coincide with the best ways to act according to utilitarianism; sometimes they conflict. An example of such a conflict is the Transplant thought experiment:1
Transplant: Imagine that there are five patients, each of whom will soon die unless they receive an appropriate transplanted organ—a heart, two kidneys, a liver, and lungs. A healthy patient, Chuck, comes into the hospital for a routine check-up, and the doctor finds that Chuck is a perfect match as a donor for all five patients. Should the doctor kill Chuck and use his organs to save the five others?
At first glance, it seems that utilitarianism has to answer the question with “Yes, the doctor should kill Chuck”. It is better that five people live than that one person lives. But on commonsense morality and virtually every other moral theory, the answer is “No, do not kill Chuck”. On most views, killing Chuck would be morally monstrous. Utilitarianism seems to be the rare exception that claims otherwise. This apparent implication is often taken as an argument against utilitarianism being the correct moral theory.
Proponents of utilitarianism might respond to this objection in four ways. We will go through them in turn.
Accommodating the Intuition
A first utilitarian response to the thought experiment might be to accommodate the intuition against killing Chuck by showing that utilitarianism does not actually imply that the doctor should kill him for his organs. Critics of utilitarianism assume that, in Transplant, the doctor killing Chuck will cause better consequences. But this assumption may be questioned. If the hospital authorities and the general public learned about this incident, a major scandal would result. People would be terrified to go to the doctor. As a consequence, many more people could die, or suffer serious health problems, due to not being diagnosed or treated by their doctors. Since killing Chuck does not clearly result in the best outcome, and may even result in a terrible outcome, utilitarianism does not necessarily imply that the doctor should kill him.
Even if we stipulate that the scenario is an unusual situation in which killing Chuck really would lead to the best outcome (with no further unintended consequences), it is hard to imagine how the doctor could be so certain of this. Given how incredibly bad it would be to undermine public trust in our medical institutions (not to mention the reputation harm of undermining utilitarian ethics in the broader society),2 it would seem unacceptably reckless, according to expectational utilitarianism, for the doctor to risk such population-wide harm to save just a small handful of lives. Utilitarianism can certainly condemn such recklessness, even while allowing that there are rare cases in which, by unpredictable fluke, such reckless behavior could turn out to be for the best.
This is a generalizable defense of utilitarianism against a wide range of alleged counterexamples. Such “counterexamples” invite us to imagine that a typically-disastrous class of action (such as killing an innocent person) just so happens, in this special case, to produce the best outcome. But the agent in the imagined case generally has no good basis for discounting the typical risk of disaster. So it would be unacceptably risky for them to perform the typically-disastrous act.3 We maximize expected value by avoiding such risks.4 For all practical purposes, utilitarianism recommends that we should refrain from rights-violating behaviors.
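This point about expected value can be made concrete with a toy calculation. The numbers below are purely illustrative assumptions, not figures from the text; they simply show how a small chance of a modest benefit is swamped by a large chance of a population-wide harm:

```python
# Toy expected-value sketch (all numbers are illustrative assumptions).
# Suppose the doctor judges that killing the one would save five lives,
# but such judgments are rarely correct, and a scandal that erodes
# public trust in medicine would cost many lives.

p_correct = 0.01     # hypothetical chance the doctor's judgment is right
lives_saved = 5      # benefit (in lives) if the judgment is right
scandal_cost = 1000  # hypothetical lives lost to eroded medical trust

ev_kill = p_correct * lives_saved - (1 - p_correct) * scandal_cost
ev_refrain = 0.0     # baseline: follow the commonsense rule

# The reckless act has far lower expected value than refraining.
print(ev_kill, ev_kill < ev_refrain)
```

On these (stipulated) numbers, the expected value of killing is deeply negative, which is why expectational utilitarianism condemns the act even in cases where it might, by fluke, turn out well.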
Debunking the Intuition
A second strategy to deal with the Transplant case is to debunk the intuition against killing Chuck by showing that the intuition is unreliable. A utilitarian might argue that it is almost always wrong to commit murder and that we should cultivate strong character dispositions and social norms against murder. Therefore, our intuition against killing Chuck may just result from us having embraced a general moral norm against murder. While this norm is correct in the vast majority of cases, it can fail under those very exceptional circumstances where killing someone would actually bring about the best consequences.
We may also worry that the intuition reflects an objectionable form of status quo bias. However terrible it is for Chuck to die prematurely, is it not—upon reflection—equally terrible for any one of the five potential beneficiaries to die prematurely? Why do we find it so much easier to ignore their interests in this situation, and what could possibly justify such neglect? There are practical reasons why instituting rights against being killed may typically do more good than rights to have one’s life be saved, and the utilitarian’s recommended “public code” of morality may reflect this. But when we consider a specific case, there’s no obvious reason why the one right should be more important (let alone five times more important) than the other, as a matter of principle. So attending more to the moral claims of the five who will otherwise die may serve to weaken our initial intuition that what matters most is just that Chuck not be killed.
Attacking the Alternatives
A third response to the Transplant case is to attack the available alternatives to utilitarianism to show that they have even more counterintuitive implications.
All of the standard arguments against deontic constraints become relevant at this point. For example, the hope objection flags that a benevolent observer should prefer that the five be saved, and it’s hard to see how deontic moral rules could matter more (or have greater normative authority) than what we—or any impartial benevolent observer—should hope is done.
As noted above, the charge of status quo bias seems especially pressing in this context. If you asked all six people from behind the veil of ignorance whether you should kill one of them to save the other five, they’d all agree that you should. A 5/6 chance of survival is far better than 1/6, after all. And it’s morally arbitrary that the one happens to have healthy organs while the other five do not. There’s no moral reason to privilege this antecedent state of affairs, just because it’s the status quo. Yet that’s just what it is to grant the one a right not to be killed while refusing the five any rights to be saved. It is to arbitrarily uphold the status quo distribution of health and well-being as morally privileged, no matter that we could improve upon it (as established by the impartial mechanism of the veil of ignorance).
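The veil-of-ignorance reasoning above can be written out explicitly. Assuming, as the setup stipulates, that each person is equally likely to occupy any of the six positions:

```latex
% Behind the veil of ignorance, each of the six people is equally likely
% to end up in any of the six positions (Chuck or one of the five patients).
P(\text{you survive} \mid \text{the one is killed to save the five}) = \frac{5}{6} \approx 0.83
\qquad
P(\text{you survive} \mid \text{no one is killed}) = \frac{1}{6} \approx 0.17
```

So each person, not yet knowing which role they will occupy, prefers the policy of killing the one.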
Another challenge may be presented by increasing the stakes in our thought experiment:
Revised Transplant: Suppose that scientists can grow human organs in the lab, but only by performing an invasive procedure that kills the original donor. From a single donor’s body, this procedure can create up to one million new organs. As before, our doctor can kill Chuck, but this time use his body to save one million people. Should she do this?
Consider how two non-utilitarians would react to Revised Transplant. The Moderate non-utilitarian says that, unlike in the original case, the doctor should kill Chuck because the constraint against harming others is outweighed, since enough is at stake. The Absolutist non-utilitarian, on the other hand, says that the doctor still should not kill Chuck, since no amount of benefit can outweigh the injustice of killing him.
One objection to the Moderate is that their position is incoherent. The rationale underlying the intuition that the doctor should refrain from killing Chuck in Transplant should also forbid killing him in Revised Transplant: in both cases, an innocent person is sacrificed for the greater good. Another objection to the Moderate is that their position is arbitrary. The Moderate must draw a line past which constraint violations become permissible: for example, when the benefit reaches at least one million people. But why draw the line precisely at that point, rather than higher or lower? What is so special about this particular number, 1,000,000? The same question can be asked of any specific number of lives saved. The only non-arbitrary positions are that of the Absolutist, for whom no number of lives saved can justify killing Chuck, and that of the utilitarian, who says that killing Chuck is justified whenever the benefits outweigh the costs.
The problem with Absolutism is that this position is even more counterintuitive than utilitarianism. If we continue to increase the number of lives we could save by killing Chuck—say, from one million to one billion, and so on—it soon becomes absurd to claim that doing so is impermissible. The position appears even more absurd when we consider cases involving uncertainty. For instance, the Absolutist seems committed to saying that it is impermissible to perform the medical procedure on Chuck even if it has only a very small chance of killing him and is guaranteed to save millions of lives.
Tolerating the Intuition
The final response is for the advocate of utilitarianism to “bite the bullet”, holding on to the claim that we should—in this hypothetical situation—kill Chuck, despite the intuition that killing him is wrong. It is regrettable that the only way to save the five other people involves Chuck’s death. Yet killing him may be the right action, since it allows the five others to continue living, each having meaningful experiences and enjoying their lives as much as Chuck would have enjoyed his own. Chuck’s death, while unfortunate, is (by stipulation of the thought experiment) required to bring about the world with as much well-being as possible.
Of course, it’s important to stress that real life comes with no such stipulations, so in real-life cases utilitarians overwhelmingly opt to “accommodate the intuition” and reject the assumption that killing innocent people leads to better outcomes.
Resources and Further Reading
- Katarzyna de Lazari-Radek & Peter Singer (2017). Utilitarianism: A Very Short Introduction. Oxford: Oxford University Press. Chapter 4: Objections, Section “Does utilitarianism tell us to act immorally?”.
- Krister Bykvist (2010). Utilitarianism: A Guide for the Perplexed. London: Continuum. Chapter 8: Is Utilitarianism too Permissive?
- Shelly Kagan (1998). Normative Ethics. Boulder, CO: Westview Press. Chapter 3.
- Shelly Kagan (1989). The Limits of Morality. New York: Oxford University Press.
- Eduardo Rivera-López (2012). The Moral Murderer: A (More) Effective Counterexample to Consequentialism. Ratio, 25(3): 307–325.
Adapted from Thomson, J. (1976). Killing, Letting Die, and the Trolley Problem. The Monist, 59(2): 204–217, p. 206.
This reputational harm is far from trivial. Each individual who is committed to (competently) acting on utilitarianism could be expected to save many lives. So to do things that risk deterring many others in society (at a population-wide level) from following utilitarian ethics is to risk immense harm. On the reputational costs of instrumental harm, see: Everett, J.A.C., Faber, N.S., Savulescu, J., and Crockett, M.J. (2018). The costs of being consequentialist: Social inference from instrumental harm and impartial beneficence. Journal of Experimental Social Psychology, 79: 200–216.
Rivera-López, E. (2012). The Moral Murderer: A (More) Effective Counterexample to Consequentialism. Ratio, 25(3): 307–325. For critical discussion, see R.Y. Chappell, Counterexamples to Consequentialism. Note that Chappell is a co-author of this website.
Even if we can somehow stipulate that the agent’s first-order evidence supports believing that murder is net-positive in their case, we also need to take into account the higher-order evidence that most people who make such judgments are mistaken. Given the risk of miscalculation, and the far greater harms that could result from violating widely-accepted social norms, utilitarianism may well recommend that doctors adopt a strictly anti-murder disposition, rather than being open to committing murder whenever it seems to them to be for the best.