In the late 1980s, a group of social scientists convened in France to examine the question of how democratic societies in North America and Europe had confronted the challenges posed by AIDS [1]. As each collaborator recounted the history of the political, social, and public health responses evoked by HIV in his or her country, three questions remained constant, even though the precise language reflected unique cultural differences. Did the history of responses to lethal infectious diseases provide lessons about how best to contain the spread of HIV infection? Should the policies developed to control sexually transmitted diseases or other communicable conditions be applied to AIDS? If AIDS were not to be treated like other communicable diseases, what would justify using different policies?
In some instances, these questions were explicitly addressed; on other occasions, they were simply implied in the critical policy discussions [2]. In summarizing the 2 days of discussion that ensued, one of us [RB], who was present at the meeting, was struck by the “exceptional” nature of what had been decided in virtually every one of the cases considered. The term “HIV exceptionalism” thus emerged; it first appeared in print in an article in the New England Journal of Medicine in 1991 [2]. Since then, the term has been used in numerous policy settings, domestic and global, as well as at many AIDS conferences; a MEDLINE search identifies more than 50 citations in the literature.
Public health approaches to communicable disease are rooted in the late 19th century, when the threat of infectious disease provided a warrant for such forceful interventions by health authorities as compulsory examination and screening, named reporting of those infected, and the confinement of individuals through isolation and quarantine [3, 4]. Over the arc of the 20th century, the more coercive aspects of conventional public health became increasingly rare, following the introduction of effective vaccines and therapies and the subsequent waning of infectious disease-related morbidity and mortality. But the shock and alarm generated by the AIDS epidemic left proponents of civil liberties and advocates of gay rights fearful that traditional public health responses might be imposed on newly susceptible or infected populations [5]. What developed instead was a policy in which public health authorities further reduced coercive interventions, at least in their response to AIDS.
Reacting to the prevailing climate of anxiety and fear, intensified by a sense of clinical powerlessness, AIDS activists and their allies in public health fought for policies that would protect the autonomy and privacy rights of those with, or at greatest risk of contracting, HIV infection and forestall discrimination. Their actions were critical to populations—gay men, IV drug users, their partners—who were already socially stigmatized and vulnerable. Coercive measures, many feared, would only serve to thwart efforts to reach those groups within which the epidemic was spreading.
In the United States, the argument for HIV exceptionalism, although used to limit public health surveillance and partner notification, was particularly important in defining antibody testing policy. Following the licensure of the HIV antibody test in 1985, AIDS activists warned of potential dangers, including the probability of stigmatization and discrimination and the psychological burden of knowing of one’s infection in the absence of effective therapies. But public health authorities viewed the test as central to their preventive strategies. From the ensuing tension emerged new standards requiring pretest counseling and written informed consent [5]. These requirements distinguished the HIV test from other blood tests routinely ordered by clinicians, often without explanation to the patient.
As physicians’ confidence in their competence to manage AIDS grew, they began to criticize the exacting restrictions tied to the antibody test. This was particularly true of pediatricians, who argued that infants, who would need close care if infected, had a right to be tested that superseded their mothers’ right to privacy. In the late 1990s, New York and Connecticut mandated HIV testing of newborns.
By then, the AIDS exceptionalism perspective was already under fire, especially as it affected infants. In 1994, a clinical trial reported that treating mothers with zidovudine during pregnancy, and newborns directly after birth, could reduce the rate of vertical HIV transmission by two-thirds. Two years later, the House of Delegates of the American Medical Association resolved in favor of mandatory testing of all pregnant women. In 1998, the Institute of Medicine recommended routine testing of all expectant mothers but allowed them an informed right of refusal [6]. In 1999, the same year as the New York and Connecticut statutes, the American Academy of Pediatrics and the American College of Obstetricians and Gynecologists jointly endorsed universal routine testing with an opt-out provision. In 2001, the Centers for Disease Control and Prevention (CDC) also supported universal screening, but without recommending an opt-out provision [7]. By 2004, 13 states required clinicians to offer testing to pregnant women, and four required routine testing.
If the argument for treating HIV tests like other diagnostic tools was first given voice in the face of the need to manage opportunistic infections in infants, it became an insistent demand when therapeutic prospects radically changed in the mid-1990s with the advent of effective treatment with antiretroviral drugs. Sometimes the call for change was framed in terms of “mainstreaming” HIV, but frequently critics explicitly decried HIV exceptionalism.
A striking feature of the debates that ensued was that, although the term “exceptionalism” was first employed descriptively, as a way of characterizing broadly consonant policy decisions, the term itself became a subject of controversy. Those distressed by HIV testing decisions, and those who believed that change was necessary from a clinical and public health point of view, saw exceptionalism as the problem. Those who sought to defend the new rights-protective regime were loath to use the term and, in fact, repeatedly asserted that what had emerged in the first years of the AIDS epidemic was simply good public health practice; any effort to force HIV into a preconceived or traditional mold of public health, they argued, would be counterproductive.
Kevin De Cock is pivotal to understanding the continued critique of the exceptionalism perspective because of the role he would play as a critic, both within the CDC in the United States and at the World Health Organization (WHO), where he became director of its AIDS program in 2006. Shocked by a patient whose positive serostatus had gone unrecognized despite multiple visits to his clinic in the United Kingdom, De Cock began in 1996 to demand a reevaluation of the norms governing HIV testing. What had once been justified as protecting the rights of individuals, he contended, could now be viewed as clinical negligence and an impediment to HIV prevention. To remedy this, he began to support routine testing [8, 9].
While working for the CDC in Kenya in 2002, De Cock set out to challenge the international parameters for HIV testing, which were framed in terms of the human rights of those who might be infected by the virus. In a hard-charging critique, “Shadow on the Continent,” De Cock asserted that those who defended such rights had spurned strategies that might better meet the public health crisis posed by AIDS in developing countries [10]. By treating AIDS like other communicable diseases and increasing HIV testing, he argued, public health and medicine would protect the rights of the uninfected, enhance the access of those already infected to therapies and clinical advice, and provide nations with tools to reduce the depredations of HIV/AIDS.
In September 2006, the CDC issued long-awaited recommendations for routine HIV testing in clinical settings [11]. Examined from the perspective of the standards established 2 decades earlier, the new approach was indeed a radical departure, an end to the exceptionalism that the CDC had at first embraced without ever using the word. Analyzed in the light of the CDC’s own emerging disenchantment with the strictures surrounding exceptionalism, the 2006 decision represented a culmination rather than an abrupt departure. Its new recommendations (states would still have to act through their laws and regulations) stipulated that patients be told that HIV testing was a routine part of care and be given the opportunity to opt out. Specific written consent would no longer be required because “general consent for medical care is sufficient to encompass consent for HIV testing.” What the CDC had proposed mirrored suggestions made by Dr. Thomas Frieden, now director of the CDC, then commissioner of the New York City Department of Health and Mental Hygiene [12]. The resistance that greeted Frieden’s proposals was emblematic of the opposition that would surface across the country as efforts were made to pass beyond exceptionalism. But despite such resistance, it was clear that the social, clinical, and political foundations of exceptionalism no longer held sway.
Also in 2006, the WHO, which De Cock had just joined, issued a draft document that stressed a clinician’s obligation to initiate HIV testing, with pretest information rather than counseling, and the patient’s right to opt out [9]. The result was a year-long debate. The struggle within the WHO to fashion global recommendations for HIV testing reflected the fissures between human rights advocates and the public health officials who fought the exceptionalism that had informed the WHO’s earlier recommendations. The former feared that proposals to weaken standards of explicit informed consent would render vulnerable populations even more vulnerable without, in fact, extending to them the benefits of antiretroviral therapy. While the outcome of this controversy was by no means as clear-cut as the one in the United States, a close reading of the debate underscores how the exceptionalist paradigm retained its capacity to engender conflict, even when it was no longer hegemonic.
References
1. The meeting was held under the auspices of Columbia University’s School of Public Health and the Goldman School of Public Policy of the University of California, Berkeley.
2. Bayer R. Public health policy and the AIDS epidemic. An end to HIV exceptionalism? N Engl J Med. 1991;324(21):1500-1504.
3. Brandt AM. No Magic Bullet: A Social History of Venereal Disease in the United States Since 1880. New York, NY: Oxford University Press; 1985.
4. Leavitt JW. Typhoid Mary: Captive to the Public’s Health. Boston, MA: Beacon Press; 1996.
5. Bayer R. Private Acts, Social Consequences: AIDS and the Politics of Public Health. New Brunswick, NJ: Rutgers University Press; 1991.
6. Stoto MA, Almario DA, McCormick MC, et al. Reducing the Odds: Preventing Perinatal Transmission of HIV in the United States. Washington, DC: National Academy Press; 1999.
7. Centers for Disease Control and Prevention. Revised guidelines for HIV counseling, testing, and referral. MMWR Recomm Rep. 2001;50(RR-19):1-57.
8. De Cock KM, Johnson AM. From exceptionalism to normalisation: a reappraisal of attitudes and practice around HIV testing. BMJ. 1998;316(7127):290-293.
9. Bayer R, Edington C. HIV testing, human rights, and global AIDS policy: exceptionalism and its discontents. J Health Polit Policy Law. 2009;34(3):301-323.
10. De Cock KM, Mbori-Ngacha D, Marum E. Shadow on the continent: public health and HIV/AIDS in Africa in the 21st century. Lancet. 2002;360(9326):67-72.
11. Branson BM, Handsfield HH, Lampe MA, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep. 2006;55(RR-14):1-17.
12. New York City Department of Health and Mental Hygiene. Making HIV testing a routine part of medical care. City Health Information. 2006;25(2):9-12. http://www.nyc.gov/html/doh/downloads/pdf/chi/chi25-2.pdf. Accessed November 4, 2009.