Policy Forum
January 2009

Enrollment of Economically Disadvantaged Participants in Clinical Research

Neal Dickert, MD, PhD
Virtual Mentor. 2009;11(1):54-60. doi: 10.1001/virtualmentor.2009.11.1.pfor1-0901.

 

In 1996, the Wall Street Journal ran the following headline: "To screen new drugs for safety, Lilly pays homeless alcoholics" [1]. The article provoked reactions of outrage and disgust. How could researchers use society's most disadvantaged people as guinea pigs to develop drugs that these participants could never afford? Even worse, how could they entice them with offers of money or health care, preying on the very vulnerabilities that are so disconcerting?

These reactions are understandable, and there may indeed be something wrong with how we enroll homeless and other disadvantaged populations in clinical research. The source of the problem, however, may not be that homeless people with alcoholism are disadvantaged or that incentives are used to recruit them. Evaluating this matter responsibly reveals that these gut reactions are often misdirected.

The Risk and Importance of Research

The central ethical challenge in clinical research is that it exposes people to risks that must be justified by benefits to society or science in the form of medical knowledge. Researchers, institutional review boards (IRBs), and sponsors of research take great pains to minimize risks. And although the risk of every project differs and is difficult to estimate, approved clinical research is heavily scrutinized and appears to be safe, despite what media reports and participant consent forms may suggest.

Participating in research is clearly safer than high-risk occupations such as firefighting, law enforcement, or military service, and almost certainly safer than the majority of construction work, manufacturing jobs, or garbage collection--jobs that are important and legitimate despite known risks. Like these occupations, clinical research produces a valuable social good that justifies placing individuals at some level of risk. That said, tragedies do happen, as exemplified by the case of Nicole Wan, a 19-year-old student who suffered a fatal reaction to lidocaine administered during a research bronchoscopy [2].

If we accept the imposition of some risk for societal benefit, we must confront the question of which people researchers should expose to it. Despite a tendency to react otherwise, there is no obvious reason to believe that economically disadvantaged people ought not to be exposed to the same levels of research risk as the rest of the population. Provided they give valid informed consent to the studies in which they participate, why, for example, should we exclude homeless people [3]? One common view is that the principle of justice dictates that disadvantaged populations ought not to be exposed to risks when they are not likely to be among those who benefit from the research findings. But this view does not always stand up to scrutiny. We never argue that poor people should not collect garbage in wealthy neighborhoods or that a homeless person should not be employed building expensive houses. Similarly, if I want to participate as a healthy individual in a study designed to improve the understanding of Parkinson's disease, my own likelihood of developing the disease (and thus potentially benefitting from the study data) seems irrelevant to whether I should be allowed to enroll.

Some have claimed there is no reason to exclude disadvantaged individuals entirely but that enrolling a disproportionate number of them is a problem. Several leading scholars, for example, assert that a study in which 1 percent of participants are homeless is justified, but one in which 100 percent are homeless is not (assuming that the study is not focused on a problem unique to the homeless) [3]. "Pattern equity" is the term used to describe the distribution of research risks or burdens, and, although there is widespread concern over pattern equity, its importance is unclear [4, 5]. I suspect that enrolling very high numbers of homeless people in a study may indeed signal other potential problems with the study, but it is difficult to argue that the distribution of research burden is itself unjust if the level of risk posed by the study is indeed acceptable and participants feel they have an opportunity for financial gain or health benefit.

Examining the Study

There is an important caveat to the preceding comments: clinical research that enrolls disadvantaged people must be ethically acceptable in the first place. In other words, studies must meet basic standards of ethical acceptability regardless of whom they enroll. As elaborated by Ezekiel Emanuel and colleagues, these standards include: the potential for social value, a scientifically valid methodology, fair participant selection, a favorable risk-benefit ratio (including benefits to society), independent and thorough review, informed consent, and respect for participants [6]. Studies that do not meet these standards should not be approved. Most of the studies cited by Elliott and Abadie as exploitative of disadvantaged populations, for example, are flawed in fundamental ways--they have not been adequately reviewed, are conducted in unacceptable facilities, or use compounds that may not have been sufficiently researched [7]. These studies should not be conducted. Ensuring that these basic ethical standards are met may require special measures for certain populations. There may be special challenges regarding informed consent when enrolling homeless participants, given the high prevalence of psychiatric disease in this population. Special procedures may be necessary to guarantee that consent is adequate, and exclusion of individual participants is obligatory when this cannot be achieved [3].

It may be that only disadvantaged people without reliable ways to make money or obtain health care will find enrolling in ethically problematic studies attractive. Taking advantage of their vulnerability to circumvent ethical standards of research is fundamentally exploitative and must not be tolerated. Thus, additional safeguards may be needed to monitor and prevent such exploitation, if unethical research is indeed as endemic as Elliott and Abadie suggest [7]. The problem in many of these cases, however, lies in the research studies and the regulatory system and not in the participant population. What we owe these participants is adequate review and oversight and not exclusion or different ethical standards.

Incentives Are Rarely the Problem

Much of what I have argued thus far may not be controversial. Most people would agree that nobody, homeless or not, should be asked to participate in a study that fails to meet criteria for ethical research, and most recognize that some level of risk is inevitable and justifiable. Still, many feel that offering incentives to induce disadvantaged people to participate in otherwise approvable studies remains problematic. These worries are typically cast as concerns about coercion, undue inducement, or exploitation and are largely overstated and misunderstood [7].

To start, coercion is never acceptable in research. Fortunately, true coercion is exceedingly rare, and no offer of money or health care can coerce anyone unless there is some threat of harm for refusing [5, 8-11]. For a situation to be coercive, a potential participant must be made worse off for refusing to participate than if he or she had never been presented with the option in the first place. For example, it would be coercive to strong-arm a homeless man into participating in a study by threatening to report criminal behavior to his shelter if he does not agree to take part. It would not be coercive, however, to offer him a large amount of money to participate. The money may be very attractive to him, and he will almost certainly agree to participate. But this situation is not coercive because the man would be no worse off for refusing the offer than if he had never been asked.

The more appropriate and complicated concern is that disadvantaged participants may be unduly induced by large amounts of money or health benefits. What counts as "undue inducement" is debated, but most discussion of this issue focuses on the potential for attractive incentives to: (1) undermine participants' ability to give valid informed consent by either compromising the voluntariness of their decisions or causing them to ignore and remain uninformed about study risks, (2) cause people to exercise poor judgment, or (3) lead participants to hide pre-existing conditions, side effects, or other information that might make them ineligible for inclusion [5]. Emanuel, in a provocative and important series of papers, argues that concern for undue inducement is "nonsense on stilts" [12, 13]. There are no data to suggest that people misestimate risks because of payment. Some paid participants have said they care less about risks when the amount of payment is high; others have said they are actually more attuned to risks when payment is high [14, 15]. Even if offers did cause people to underestimate risk, the most appropriate initial solution would be to alter consent practices rather than reduce incentives. More importantly, it is far from clear why decisions made for monetary gain in research would compromise voluntariness when they do not in other contexts. How many of us would want a potential employer to reduce a salary offer in order to make sure our decision to take a job is voluntary?

As Emanuel argues, it is also not clear how enrollment in an appropriately approved study would represent poor judgment. After all, IRBs should only approve studies in which they believe it would be reasonable (and thus not poor judgment) for eligible people to enroll. If enrolling reflects poor judgment, the IRB should not have approved the study, regardless of how much payment is offered [13].

Finally, no good data exist on the extent to which potential participants lie or hide conditions in order to maintain eligibility, though it certainly has occurred. Bernadette Gilchrist, for example, was an NIH nurse who died in a paid sleep-deprivation study most likely as a result of electrolyte abnormalities secondary to bulimia, a condition she failed to disclose to researchers presumably because she thought it would disqualify her from participating [16]. Although there are potential data integrity implications when participants fail to report important information, researchers' responsibilities can only go so far in protecting people from risks that derive from their own intentional misrepresentation of their health.

Undue inducement concerns are largely overestimated, and more data are needed on the extent to which they are borne out in practice. I do, however, believe that this concern is relevant when considering incentives to severely disadvantaged people, but only at the riskier end of the spectrum of approvable risk or when research asks people to trade off values they hold important. I suggest that real undue inducement occurs when large offers induce people to make choices--choices that may be entirely informed and voluntary--to engage in activities to which they have strong objections based on their own values [5]. Examples include a homeless person with deep risk aversion who is induced to participate in a very risky (but approvable) study or a homeless Jehovah's Witness induced to participate in a trial involving a blood transfusion. Because research institutions should not be in the business of making "indecent proposals," the concern for undue inducement gives us reason to avoid dramatic escalations of payment at the risky end of approvable research and when recruiting from populations known to have significant aversion to specific studies. But values vary greatly. IRBs cannot, and should not attempt to, account for the values of all potential populations--otherwise no study would be approvable. As a result, some undue inducement is unavoidable and is not the responsibility of researchers or IRBs to prevent entirely.

A crucial reason for avoiding overreaction to the potential for undue inducement is the opposite concern--exploiting severely disadvantaged populations by not paying them enough [7]. Just as disadvantaged people are vulnerable to inducements, they are vulnerable to being taken advantage of by offers that undervalue the service they perform. Consider a phase I study of a new antibiotic in healthy people that involves a several-day inpatient stay, drug infusion, and multiple blood draws. If that study offered participants $50 a day ($150 total), who would participate? Most would expect to be better paid for such a burdensome study. Providing very low payments knowing that there are people poor enough to find this offer attractive is paradigmatic of exploiting their situation in a morally problematic way [5, 7]. Lest this concern seem illegitimate, several interview studies of paid participants--many of whom are poor--illustrate that those participants are more worried about being paid too little than about being paid too much [15-17].

The difficulty of balancing concern for undue inducement against concern for exploitation is one reason to adopt an approach that pays participants based on the nature of the unskilled but valuable work they perform [5]. A relatively consistent and standardized payment strategy that compensates participants according to the prevailing wage for similar jobs and allows adjustments for inconvenience, discomfort, and, to some extent, risk will protect them from undue inducement, largely avoid exploitation, and reward people fairly for the valuable service they perform. This strategy will result in a high number of disadvantaged people participating in research, but there is no reason to exclude them as participants, and it is not clear that the pattern-equity concern is sufficient to make this a priority. Research studies must still pass stringent ethical muster on all other grounds, and reductions in payment would simply promote exploitation and restrict income from a viable and socially beneficial work option [18].

Conclusion

While disconcerting initially, the enrollment of highly disadvantaged people in paid research studies is generally acceptable on further analysis. Many frequent participants in paid research rightly view it as a valuable and viable work opportunity; denying this opportunity seems both unnecessary and inconsistent with our views of other occupations, many of which involve greater risks and less social value. Studies that enroll disadvantaged populations must be carefully scrutinized to ensure that they meet adequate conditions for ethical research. Similarly, fair payment for participation in acceptable and approvable research is entirely appropriate, and the fact that payment increases participation among poor participants is a relatively small problem. Underpayment, however, in an attempt to protect vulnerable participants, may result in exploitation and fails to recognize that monetary gain is factored into numerous decisions every day.

It is important to recognize the elephant in the room--the driver of many of the gut reactions to enrollment of disadvantaged populations in paid research. Many of these people are victims of profound injustice and have been abandoned by much of society. The injustice that constricts their options for making ends meet is presumably one of the chief reasons why participation in paid research is attractive. This recognition should strengthen our resolve to ensure that ethical standards of research are met and, more importantly, to address the underlying injustices themselves. It does not mean that we should further restrict their options to participate in improving medical knowledge.

References

  1. Cohen LP. To screen new drugs for safety, Lilly pays homeless alcoholics. The Wall Street Journal. November 14, 1996: A1, A10.
  2. Rosenthal E. New York seeks to tighten rules on medical research. New York Times. September 27, 1996: B4.
  3. Beauchamp TL, Jennings B, Kinney ED, Levine RJ. Pharmaceutical research involving the homeless. J Med Philos. 2002;27(5):547-564.
  4. Wilkinson M, Moore A. Inducements revisited. Bioethics. 1999;13(2):114-130.
  5. Dickert N, Grady C. Incentives for participants. In: Emanuel EJ, Crouch R, Grady C, Lie R, Miller FG, Wendler D, eds. The Oxford Textbook of Clinical Research Ethics. New York: Oxford University Press; 2008.
  6. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283(20):2701-2711.
  7. Elliott C, Abadie R. Exploiting a research underclass in phase 1 clinical trials. N Engl J Med. 2008;358(22):2316-2317.
  8. Wertheimer A, Miller FG. Payment for research participation: a coercive offer? J Med Ethics. 2008;34(5):389-392.
  9. Hawkins JS, Emanuel EJ. Clarifying confusions about coercion. Hastings Cent Rep. 2005;35(5):16-19.
  10. Wertheimer A. Coercion. Princeton, NJ: Princeton University Press; 1987.
  11. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont report: ethical principles and guidelines for the protection of human subjects of research. 1979. http://ohsr.od.nih.gov/guidelines/belmont.html. Accessed November 16, 2008.
  12. Emanuel EJ. Undue inducement: nonsense on stilts? Am J Bioeth. 2005;5(5):9-13.
  13. Emanuel EJ. Ending concerns about undue inducement. J Law Med Ethics. 2004;32(1):100-105.
  14. Bentley JP, Thacker PG. The influence of risk and monetary payment on the research participation decision making process. J Med Ethics. 2004;30(3):293-298.
  15. Slomka J, McCurdy S, Ratliff E, Timson S, Williams ML. Perceptions of financial payment for research participation among African-American drug users in HIV studies. J Gen Intern Med. 2007;22(10):1403-1409.
  16. Kolata GB. The death of a research subject. Hastings Cent Rep. 1980;10(4):5-6.
  17. Kass NE, Meyers R, Fuchs EJ, Carson KA, Flexner C. Balancing justice and autonomy in clinical research with healthy volunteers. Clin Pharmacol Ther. 2007;82(2):219-227.
  18. Dickert N, Grady C. What's the price of a research subject? Approaches to payment for research participation. N Engl J Med. 1999;341(3):198-203.


The viewpoints expressed in this article are those of the author(s) and do not necessarily reflect the views and policies of the AMA.