Research ethics is commonly presented as inherently anti-utilitarian.1 Its aim is to protect individual research participants from harm, exploitation, and disrespect. That remains its goal even when so treating participants would benefit a great many people, say, by permitting scientists to develop new medical treatments. Yet, rapidly developing these widely-beneficial treatments is what utilitarianism seemingly recommends, even when that requires harming, exploiting, or disrespecting a few individuals.
Kantian ethics—which treats individuals as ends in themselves and not as mere means—is often seen as the ground of research ethics’ reluctance to sacrifice individuals for collective ends. In particular, this reluctance is thought to stem from Kantian respect for individuals and for their autonomous consent. In short, utilitarianism is considered difficult to reconcile with core research ethics, and Kantian ethics to dovetail nicely with it.
An influential early proponent of this picture was theologian Hans Jonas. On his understanding of “a social utility standard”, the fastest way to complete risky research is to prey on manipulable or captive populations as study participants, contrary to research ethics.2 For Jonas, moreover, any experimentation on human participants potentially treats a person as a “thing”, a mere body, or a number because it requires none of her traits as an agent. That, he says, can only be thwarted by conditioning study participation on highly voluntary consent3—as is indeed often said to be necessary for ethical research on human participants.
Following public outrage at multiple research ethics abuses exposed in the 1960s and 1970s, some ethicists condemned these and earlier abuses as “unashamedly utilitarian”4 for their alleged “obnoxious politics” of prioritizing collective well-being over individual participants’ health.5 This picture remains common in research ethics teaching. Canonical introductions to research ethics regularly present elements of research ethics as contrasting with imagined utilitarian recommendations.6
This article questions this common picture of research ethics’ philosophical foundations. It argues that:
(I) utilitarianism can account for many core research ethics norms,
(II) Kantian ethics may conflict with many core research ethics norms, and
(III) a more utilitarian outlook would improve contemporary research ethics in concrete ways.
I. Utilitarian support for core research ethics norms
There are strong utilitarian reasons to respect the core norms of research ethics. This section relays three of them.
1. Protecting and respecting participants to sustain social trust
According to many contemporary research ethicists, maintaining trust in researchers and the medical system is a central point of research ethics.7 This includes (i) trust that investigators are both technically competent and ethically decent, such that the fruits of their research can safely and ethically be adopted; and (ii) trust in clinicians’ and public health experts’ competence and decency, inasmuch as that trust is affected by perceptions of medical researchers. Such trust is vital for people to be willing to see the doctor, abide by medical advice, fill organ donor cards without worry that they would be left to die for their organs, accept vaccinations, and much else.
Indeed, securing such trust greatly benefits society, making it a priority for utilitarianism. It hardly maximizes well-being when people so mistrust the medical system that they do not enlist for studies or ignore the advice of doctors and public health officials.8
Kantianism, on the other hand, may find it harder to explain why, just to promote overall well-being by promoting trust, investigators should ever constrain their treatment of voluntarily-consenting study participants. Consider a person wanting to participate in an important study in which she is likely to be seriously injured. A utilitarian might, depending on the details, support excluding her from the study, reasoning that her injury may set back public trust. A Kantian could find it hard to justify overriding her will to participate in the study merely for the exogenous goal of sustaining public trust. Kantians tend to be suspicious of protecting well-being against the person’s own will—see II.1 below.
2. The clear utilitarian value of scientific validity and innovation
A socially-valuable question, a valid design, replicability, and other safeguards of scientific quality are also parts of an ethical study, because without them the study wastes social resources.9 Such safeguards also protect society’s interest in learning from good science how to increase well-being, and therefore clearly serve utilitarian goals.
3. Grounding consent rights in the utilitarian value of personal autonomy
Modern clinical ethics also emphasizes individual consent and not just the medical good for patients. The emphasis on consent and, relatedly, on patient autonomy is sometimes seen as Kantian. But the concept of autonomy relevant in research and clinical ethics has little to do with that of Kant.10 Certainly, merely being free from Kantian violations like active and intentional lying and coercion is insufficient. Medical autonomy is emphatically about informed and voluntary decision making, with sufficient comprehension, and not just the absence of lies and coercion.11 As such, medical autonomy is more in line with John Stuart Mill’s utilitarian arguments for the importance of individual liberty, presented in On Liberty.12
Mill argues that deciding for oneself adds great value to the individual’s life. Individuals tend to know best and care the most about what is good for them, and deciding for ourselves is inherently good for us.13 Bioethicists’ defenses of autonomous decision-making, either in the clinic or in research,14 essentially repeat Mill’s rationale. If Mill is right then exercising one’s personal autonomy—even against doctors’ advice—will often benefit one overall, even when it harms one’s medical interests. Admittedly, individuals do not always benefit overall from having the last say on matters closely affecting their bodies and health—in either clinical care or research. But giving them that veto power even when it fails to maximize well-being may abide by the rules that tend to maximize well-being in the long run15—an old utilitarian recipe for maximizing expected value.16
It may seem as though Kantian ethics can at least require some form of consent to any study participation, whether or not that consent is fully autonomous. But even that is unclear: when Kant mentions the consent requirement, it is a requirement that the individual could possibly consent, not that she actually consents.17
II. Kantian obstacles to core research ethics norms
There are core elements of research ethics that are hard to justify on Kantian or neo-Kantian grounds. Here I highlight two:
1. Neo-Kantian anti-paternalism tends to permit consensual harm
Many contemporary Kantians find an inherent problem in forcing choices on adults for their own benefit.18 But research ethics is precisely about that.19 It forbids certain deals between consenting adults—for example, an investigator’s invitation to a potential participant to contribute voluntarily to very risky research. The ethics of paternalism, namely, coercing or manipulating someone for her own good, is complex, and perhaps we could find a way to reconcile research ethics with Kantian deontology’s anti-paternalistic inclinations. But the bond between research ethics and Kantian deontology is fraught in that respect.
2. Agent-relative ethics does not emphasize preventing violations by others
Commonsense morality claims that Peter has a duty not to perform abusive studies on Paul. It also claims that if Peter goes ahead regardless, this would permit or even obligate Mary, an unrelated third party, to come to Paul’s aid. Paul’s right not to be abused in research is considered a fundamental human right, which can warrant or indeed demand intervention from others. That’s why activists readily protest research ethics abuses in other countries, or ones committed by independently-funded pharmaceutical companies, and why abusers of study participants were tried by international courts for violating basic human rights.
Kantian ethics can explain why Peter should not abuse Paul, but hardly why Mary, who may head a court or an institutional review board or a human rights organization in a foreign country, should try to force Peter not to mistreat Paul.
The issue is that Kantian duties are usually agent-relative. For instance, while people should not break their promises, there is no Kantian duty to minimize promise-breaking by others. Some Kantians are emphatic that duties are relational, between two or more specific individuals.20 All that is hard to reconcile with the core research ethics demand, not only to treat participants right but also to ensure that other people treat them right. Kantian ethics has no obvious basis for this recommendation when the abusive researchers are unrelated to oneself or one’s group. And even when they are related, Kantian duties may be satisfied by merely severing ties with the abusive researchers. But that is clearly insufficient by the lights of modern research ethics.
Here is another way of putting the challenge. Research ethics concerns respecting and protecting participants’ rights. Yet Kantianism is a morality of duties, especially “perfect duties”—not primarily of rights.21 A Kantian must advise researchers to focus primarily on their own perfect duties toward their study participants. More questionably, the Kantian may also advise citizens or taxpayers to protect participants in studies conducted by their own nation or with its resources. But there is no Kantian duty to maximize rights fulfillment in the world. And some neo-Kantians mock what they call “manifesto rights,” namely, rights against everyone for them to come to one’s aid.22
Admittedly, some Kantians seek to account for commonsense moral obligations to minimize human rights violations, including those perpetrated by unrelated third parties.23 But defending these accounts and reconciling them with the rest of Kantian ethics is not trivial. In this respect, Kantianism makes it harder, not easier, to shore up the core research ethics norm that what an investigator does to a study participant is often everybody’s business.
III. Utilitarian improvements to research ethics
We have seen that utilitarianism accounts for the core of commonsense research ethics no less well than, and arguably better than, some other philosophies—particularly Kantianism. Utilitarianism also recommends reforms to existing research ethics oversight. Those reforms are often plausible independently, further supporting the utilitarian governance of research ethics. Here are seven examples:
- Accounting for overall risk to study participants
- Accounting for risk to broad patient populations
- Accounting for risk to so-called study bystanders
- Accounting for risk to nonhuman animals
- Accounting for risk to patients left without medical treatments
- Maximizing (not satisficing) feasibility and scientific reliability
- What matters is human flourishing, not scientific prestige
1. Accounting for overall risk to study participants
The risk to each individual participant, which preoccupies research ethicists, is not the only measure of risk that matters ethically. Other aspects contribute to the cumulative risk from the study, as the next few sections argue.
For starters, distinguish between the risk of injury or death to the individual participant and the risk that somebody in the cohort will be injured or die. The latter surely matters as well. It captures the chance that the study will do harm. But the prospect of harm to all study participants combined is not something that Kantian ethics is well suited to heed. Some Kantians even deny that there are such things as overall good, overall bad, and overall risk, not to an individual but to a collective.24
Utilitarianism, on the other hand, readily demands minimizing harm to collectives, including the group of all study participants. It makes that demand even when the harms or risks to each individual in the group are held fixed. Compared with Kantianism, utilitarianism more readily supports even studies that involve slightly worse overall risks to each participant but that have—due to involving far fewer participants—a far smaller overall prospect of harm to the collection of participants.
To illustrate, imagine a small challenge study (namely, a study whose participants are deliberately exposed to a pathogen) with similar social value to a far larger field study (namely, a conventional study relying on many participants becoming naturally infected). If the challenge study involves only slightly elevated overall risks to each participant, there is a much lower chance that anyone would get hurt in it than in the field study. In that case, utilitarianism more readily supports the challenge study.
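The arithmetic behind this comparison can be made explicit. Assuming independent risks, the chance that at least one of n participants is harmed is 1 − (1 − p)^n, where p is the per-participant risk. The sketch below uses purely hypothetical numbers (a 1% per-participant risk across 50 challenge participants versus a 0.1% risk across 30,000 field participants; neither figure is an estimate from any actual trial) to show how a small cohort can keep the cumulative prospect of harm lower even at a tenfold higher individual risk:

```python
# Illustrative sketch with hypothetical numbers only -- the per-participant
# risks and cohort sizes below are assumptions, not estimates from any trial.

def prob_anyone_harmed(p_per_participant: float, n_participants: int) -> float:
    """Chance that at least one of n independent participants is harmed."""
    return 1 - (1 - p_per_participant) ** n_participants

# Small challenge study: higher risk per participant, very few participants.
challenge = prob_anyone_harmed(p_per_participant=0.01, n_participants=50)

# Large field study: lower risk per participant, many participants.
field = prob_anyone_harmed(p_per_participant=0.001, n_participants=30_000)

print(f"Challenge study, chance anyone is harmed: {challenge:.3f}")
print(f"Field study, chance anyone is harmed:     {field:.3f}")
```

On these hypothetical figures, the challenge study's cumulative prospect of harm (about 0.39) is far below the field study's (effectively 1), despite each challenge participant facing ten times the individual risk.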
Assessments of the ethics of actual studies tend to pay lots of attention to the cumulative prospect of harm from these studies, such as the prospect that at least one study participant will be seriously injured. In the case of COVID vaccine trials, for example, most scientists criticizing the challenge trial option as “too risky” focused on the risk of an injury, not the risk of injury to any one participant25—although many failed to see that, thanks to challenge trials’ far fewer participants, that cumulative risk was probably lower than in COVID field trials.26 These scientists seem to have missed the real numbers involved. Yet what concerned them ethically, and what intuitively matters no less in risky trials, was precisely what utilitarianism is concerned with about risky research—the chance that great harms will ensue.
2. Accounting for risk to broad patient populations
Scientific trials always assess interventions in a circumscribed subset of the population instead of releasing these interventions to the population at large, or at least to its low-risk segments, while monitoring the results. The main point of starting in a circumscribed subset of people is to minimize the number of people injured. This may reflect the utilitarian concern with minimizing cumulative harm to people. The point is clearly not to minimize the chance of harm to each study participant considered in isolation.
3. Accounting for risk to so-called study bystanders
Recent research ethics emphasizes the need to protect not only study participants but also third parties who are neither participants nor patients in need of novel therapies. Such third party groups—also called study bystanders or collaterals27—may nevertheless be at risk from research. One example is a study posing an informational risk to the wider public by providing recipes for biological attacks on them.28
When cumulative harm from a study is likely to be high, for example due to information risk, utilitarianism straightforwardly justifies stopping the study or limiting the publication of its results. Thus, utilitarianism highlights how research may wrongfully harm bystanders—something that research ethicists increasingly agree matters.
What about Kantianism? Here, the case for concern is more questionable. The potential harm to bystander populations will typically be a mere side effect of the study and not a means to the study’s ends.29 Kantians condemn the same amount of harm much less when it is a mere side effect rather than a means to one’s ends.30 Additionally, the individuals likeliest to be harmed as bystanders are seldom identified when deciding on study design and approval. Many neo-Kantians regard high cumulative risks as much more urgent when those risks are concentrated in a few determinate individuals rather than spread out as small individual risks across many people.31 Thus, Kantians have multiple reasons to play down the risks to study bystanders. The utilitarian reasons to protect study bystanders, by contrast, are preserved even when that bystander harm would be a mere side effect of research and when no individual bystander is identified in advance as likely to be harmed. In these respects, utilitarianism supports the intuitive case for protecting bystanders more readily than Kantianism does.
4. Accounting for risk to nonhuman animals
When a study is risky or has uncertain risks for human participants, most research ethicists would argue that the experiment should be conducted on nonhuman animals before moving to humans.32 That makes sense sometimes. But it would be helpful to have a more careful account of when animal experimentation is warranted to prevent suffering for human participants despite causing animal suffering and potentially delaying the overall development process. Utilitarianism, being non-speciesist, seems well-positioned to provide a workable account.
5. Accounting for risk to patients left without medical treatments
Research ethics committees are not held accountable for being overly cautious. But delays or blocks to valuable and legitimate studies can slow or prevent the development of important medical treatments. Excessive red tape may also stifle researchers’ proposals of valuable studies. We lack records on the extent of that stifling effect precisely because non-proposed studies are not being recorded publicly.
Whenever long ethics reviews delay the development of medical treatments, patients around the world suffer for longer. Absurdly, some patients are thereby exposed to greater risks than any of the study participants.33 For lethal diseases, a delay means more deaths—an “invisible graveyard”.34 Unfortunately, research oversight’s red tape is often pointless, delaying essential medical progress.
The population-wide cumulative harm from overprotective ethics oversight may well far exceed the potential cumulative harm to study participants, considering the often very large patient populations.35 It would thus be good to design oversight institutions to give greater priority to urgent population-wide needs for rapid, effective study designs.
Utilitarianism takes cumulative harm seriously. More broadly, research ethicists agree that it makes no ethical sense to ban all risky studies; a sensible balance between the net risks to participants and the social value of the proposed research is needed.36 A study to cure HIV can be ethical even if it carries more risk than benefit for individual participants when its social value is likely to be tremendous.37 In a future catastrophic pandemic, challenge studies involving significant risks for study participants could likewise be justified if they could help avert catastrophe.
Research oversight that more readily approves risky studies of tremendous social value is thus one more area where utilitarianism could improve on existing research ethics norms. This is not to say that we should make research ethics lax. For one thing, rigorous research ethics review stifles, among other things, proposals of studies with negative social value. Stifling those reduces the chance that such harmful studies would pass review and reduces the delay from having to review such studies or deal with their scandalous results. But the full implications of a utilitarian research ethics should be explored.
6. Maximizing (not satisficing) feasibility and scientific reliability
Contemporary research ethics demands that studies have “enough” statistical validity and practical feasibility to achieve their scientific aims.38 But two studies with “enough” validity and feasibility can be very different—both in the confidence they generate and their costs and impediments to safe completion. There is no principled reason to stop characterizing studies along these measures when they are found to be reliable “enough”, “sufficiently” affordable, and “amply” likely to end successfully. In principle, all improvements in statistical reliability and in feasibility should count in favor of a study portfolio, and can potentially balance issues with the study, such as risk to study participants or surrounding communities, or study questions that are important but would not otherwise count as important enough to warrant the study’s costs or risks.
Even sufficientarians, who think that improving people’s well-being is a moral duty only up to a certain level,39 would reject sheer satisficing in this context. After all, more reliable studies with high likelihood to be completed and low associated costs are more likely to promote human health and well-being—including below the sufficient level. Therefore, the value of additional scientific reliability, feasibility, and cost-saving are continuous, the more of any of these factors the better. It makes sense to consider this when deciding which studies to initiate, fund and approve—as utilitarianism recommends. A study’s expected social value increases the more important the questions it asks but also the more reliably it can answer them at the lowest cost. How exactly to balance all these pertinent factors, and whether balancing them is the job of investigators, funders, ethics oversight bodies, statistics oversight bodies, or others are questions for future investigation.
7. What matters is human flourishing, not scientific prestige
For utilitarians, the point of research is to maximize well-being. It is not to obtain knowledge or do science for its own sake. Certainly, it is not about doing the science considered most prestigious because it protects the health of the global rich and captures the attention of top medical journals dedicated to their health needs. And the ethical regulation of research is not about conforming to the technicalities of regulations and law, and clearly not about red tape and stifling research. It should focus more purely on preventing major ethical abuses that, directly or indirectly, have enough likelihood to do more harm to study participants or societies than any good coming out of the study.
Taking this idea seriously would require major changes in prioritization for research funding and oversight. The diseases and medical countermeasures we currently most invest in are often relevant only to the affluent. More resources should go to investigating scalable prevention and treatment against major contributors to the global disease burden and to global ill-being in general. We also overemphasize basic biomedical science relative to translating scientific findings into practical applications for the benefit of humanity, and to investigating which applications work.
Some studies even harm humanity more than they benefit it by generating “dual-use” insights.40 A more utilitarian or consequentialist system for research training, funding, oversight and publication would prioritize and de-prioritize based on the most important causes.
This article disputes a common view of research ethics as being fundamentally antagonistic to utilitarianism and friendly to Kantianism. I have argued that (I) utilitarianism can account well for many core research ethics norms, while (II) Kantianism conflicts with many of them, and (III) a more utilitarian outlook would improve contemporary research ethics. Thus, utilitarianism and research ethics may turn out to be complementary, certainly compared to some alternatives to utilitarianism.
About the Author
Nir Eyal is the inaugural Henry Rutgers Professor of Bioethics at Rutgers University. He founded and directs Rutgers’s Center for Population-Level Bioethics, with appointments at the School of Public Health and the Department of Philosophy. Dr. Eyal’s work falls primarily in population-level bioethics, and he co-edits Oxford University Press’ series in that area. He also contributes to research ethics and to other areas of ethics and political philosophy. Earlier, as a faculty member at Harvard, he and students together started Harvard’s effective altruism activities.
Want to learn more about utilitarianism?
- Buchanan, Allen E., and Dan W. Brock. 1990. Deciding for Others: The Ethics of Surrogate Decision Making. Cambridge: Cambridge University Press.
- Chappell, Richard Yetter, and Peter Singer. 2020. “Pandemic ethics: the case for risky research.” Research ethics, 16(3-4): 1–8.
- Darwall, Stephen. 2006. “The Value of Autonomy and Autonomy of the Will.” Ethics, 116(2): 263–284. doi: 10.1086/498461.
- Donagan, Alan. 1977. “Informed consent in therapy and experimentation.” Journal of medicine and philosophy, 2(4): 307–329.
- Dworkin, Gerald. 1988. The Theory and Practice of Autonomy. Cambridge: Cambridge University Press.
- Emanuel, Ezekiel J., and Christine Grady. 2008. “Four paradigms of clinical research and research oversight.” In The Oxford Textbook of Clinical Research Ethics, edited by Ezekiel J. Emanuel, Christine C. Grady, Robert A. Crouch, Reidar K. Lie, Franklin G. Miller and David D. Wendler, 222–210. New York: Oxford University Press, Incorporated.
- Emanuel, Ezekiel J., David Wendler, and Christine Grady. 2000. “What makes clinical research ethical?” Journal of the American Medical Association, 283: 2701–2711.
- Esvelt, K. M. 2018. “Inoculating science against potential pandemics and information hazards.” PLoS Pathog, 14(10):e1007286. doi: 10.1371/journal.ppat.1007286.
- Eyal, N., and L. Holtzman. 2020. “Symposium on risks to bystanders in clinical research: An introduction.” Bioethics, 34(9): 879–882. doi: 10.1111/bioe.12830.
- Eyal, Nir. 2015. “Informed consent to participation in interventional studies: second-order in a different sense.” Journal of Law and the Biosciences, 2(1): 1–6. doi: 10.1093/jlb/lsv001.
- Eyal, Nir. 2017. “How to keep high-risk studies ethical: classifying candidate solutions.” Journal of Medical Ethics, 43: 74–77. doi: 10.1136/medethics-2016-103428.
- Faden, Ruth R., and Tom L. Beauchamp. 1986. A History and Theory of Informed Consent. New York: Oxford University Press.
- Feinberg, Joel. 1973. Social Philosophy. Englewood Cliffs, NJ: Prentice‐Hall.
- Frankfurt, Harry. 1987. “Equality as a moral ideal.” Ethics, 98: 21–42.
- Hare, Richard M. 1981. Moral Thinking. Oxford: Oxford University Press.
- Jonas, Hans. 1969. “Philosophical Reflections on Experimenting with Human Subjects.” Daedalus, 98(2): 219–47.
- Kant, Immanuel. 1999. “Groundwork of the Metaphysics of Morals.” In Practical Philosophy, edited by Immanuel Kant and M. J. Gregor, 37–108. Cambridge: Cambridge University Press.
- Kass, N. E., J. Sugarman, R. Faden, and M. Schoch-Spana. 1996. “Trust, The fragile foundation of contemporary biomedical research.” The Hastings Center report, 26(5): 25–9.
- Kimmelman, Jonathan. 2005. “Medical research, risk, and bystanders.” IRB, 27(4): 1–6.
- Kimmelman, Jonathan. 2010. Gene Transfer and the Ethics of First-in-Human Research: Lost in Translation. Cambridge: Cambridge UP.
- Korsgaard, C. 1993. “The Reasons We Can Share.” Social philosophy and policy, 10: 24–51.
- Korsgaard, Christine M. 1988. “Two arguments against lying.” Argumentation, 2(1): 27–49. doi: 10.1007/BF00179139.
- Lewis, G., P. Millett, A. Sandberg, A. Snyder-Beattie, and G. Gronvall. 2019. “Information Hazards in Biotechnology.” Risk Analysis, 39(5): 975–981. doi: 10.1111/risa.13235.
- Mill, John Stuart. 2003. “On liberty.” In Utilitarianism and On Liberty, edited by Mary Warnock. Malden, MA: Wiley-Blackwell.
- Miller, F. G., and A. Wertheimer. 2007. “Facing up to paternalism in research ethics.” Hastings Cent Rep, 37(3): 24–34.
- O’Neill, Onora. 2003. “Autonomy: The Emperor’s New Clothes.” Aristotelian Society Supplementary Volume, 77(1): 1–21.
- O’Neill, Onora. 2016. Justice across Boundaries: Whose Obligations? Cambridge: Cambridge University Press.
- Powers, Madison, Ruth Faden, and Yashar Saghai. 2012. “Liberty, Mill and the Framework of Public Health Ethics.” Public Health Ethics, 5(1): 6–15. doi: 10.1093/phe/phs002.
- Resnik, David, and Richard R. Sharp. 2006. “Protecting third parties in human subjects research.” IRB, 28(4): 1–7.
- Ripstein, Arthur. 2009. Force and Freedom: Kant’s Legal and Political Philosophy. Cambridge, MA: Harvard UP.
- Rosenblatt, Michael. 2020. “Human challenge trials with live coronavirus aren’t the answer to a Covid-19 vaccine.” STAT News, June 23. https://www.statnews.com/2020/06/23/challenge-trials-live-coronavirus-speedy-covid-19-vaccine/.
- Rothman, David J. 1987. “Ethics and Human Experimentation.” New England Journal of Medicine, 317(19): 1195–1199. doi: 10.1056/nejm198711053171906.
- Scanlon, Thomas M. 2013. “Reply to Zofia Stemplowska.” Journal of Moral Philosophy, 10: 508–14.
- Shah, S. K., J. Kimmelman, A. D. Lyerly, H. F. Lynch, F. G. Miller, R. Palacios, C. A. Pardo, and C. Zorrilla. 2018. “Bystander risk, social value, and ethics of human research.” Science, 360(6385): 158–159. doi: 10.1126/science.aaq0917.
- Sidgwick, H. 1981. The Methods of Ethics. 7th ed. London: Macmillan, 1907; repr. Indianapolis: Hackett.
- Steuwer, Bastian, Euzebiusz Jamrozik, and Nir Eyal. 2021. “Prioritizing second-generation SARS-CoV-2 vaccines through low-dosage challenge studies.” International Journal of Infectious Disease, 105: 307–311. doi: 10.1016/j.ijid.2021.02.038.
- Tabarrok, Alex. 2021. The Invisible Graveyard is Invisible No More. Marginal Revolution (January 29). https://marginalrevolution.com/marginalrevolution/2021/01/the-invisible-graveyard-is-invisible-no-more.html. Accessed September 22, 2022.
- Tännsjö, Torbjörn. 1999. Coercive care: the ethics of choice in health and medicine. London: Routledge.
- Thomson, Judith Jarvis. 2008. Normativity. Chicago: Open Court.
- Waldron, Jeremy. 1985. “Introduction.” In Theories of Rights, edited by Jeremy Waldron. Oxford: Oxford University Press.
- Walen, Alec. 2020. “Using, Risking, and Consent: Why Risking Harm to Bystanders is Morally Different from Risking Harm to Research Subjects.” Bioethics.
- Wertheimer, Alan. 2014. “(Why) should we require consent to participation in research?” Journal of Law and the Biosciences, 1(2): 137–182. doi: 10.1093/jlb/lsu008.
- Whitney, S. N., and C. E. Schneider. 2011. “Viewpoint: a method to estimate the cost in lives of ethics board review of biomedical research.” Journal of Internal Medicine, 269(4): 396–402. doi: 10.1111/j.1365-2796.2011.02351_2.x.
For excellent comments, I am grateful to Richard Chappell, Dan Hausman, and Darius Meissner. ↩︎
Jonas (1969, 237); Rothman (1987); see also Donagan (1977, 325). ↩︎
Jonas (1969, 235) ↩︎
Rothman (1987, 1198) ↩︎
Donagan (1977, 320, 323, 326) ↩︎
See, e.g., Emanuel, Wendler, and Grady (2000, 2706); Emanuel and Grady (2008, 223-5). ↩︎
Kass et al. (1996) ↩︎
Tännsjö (1999) ↩︎
Emanuel, Wendler, and Grady (2000) ↩︎
O’Neill (2003) ↩︎
Faden and Beauchamp (1986) ↩︎
Mill (2003); Powers, Faden, and Saghai (2012); O’Neill (2003, 3f.) ↩︎
Mill (2003) ↩︎
Dworkin (1988, 21-33); Buchanan and Brock (1990, 29-36) ↩︎
Eyal (2015) ↩︎
Sidgwick (1981); Hare (1981) ↩︎
Kant (1999, §430); Wertheimer (2014, 149) ↩︎
Korsgaard (1988); Darwall (2006, 265) ↩︎
Miller and Wertheimer (2007) ↩︎
Korsgaard (1993) ↩︎
Waldron (1985) ↩︎
Feinberg (1973); O’Neill (2016) ↩︎
Ripstein (2009) ↩︎
Thomson (2008) ↩︎
Rosenblatt (2020) ↩︎
Steuwer, Jamrozik, and Eyal (2021) ↩︎
Kimmelman (2005); Resnik and Sharp (2006); Shah et al. (2018); Eyal and Holtzman (2020) ↩︎
Esvelt (2018) ↩︎
Walen (2020) ↩︎
Walen (2020) ↩︎
Scanlon (2013) ↩︎
See, e.g., Kimmelman (2010). ↩︎
Chappell and Singer (2020) ↩︎
Tabarrok (2021) ↩︎
Whitney and Schneider (2011) ↩︎
Emanuel, Wendler, and Grady (2000) ↩︎
Eyal (2017) ↩︎
Emanuel, Wendler, and Grady (2000) ↩︎
Frankfurt (1987) ↩︎
Lewis et al. (2019) ↩︎