Goodbye to All That: The End of Moderate Protectionism in Human Subjects Research
Moreno, Jonathan D., The Hastings Center Report
Federal policies on human subjects research have undergone a progressive transformation. In the early decades of the twentieth century, federal policies largely relied on the discretion of investigators to decide when and how to conduct research. This approach gradually gave way to policies that augmented investigator discretion with externally imposed protections. We may now be entering an era of even more stringent external protections. Whether the new policies effectively absolve investigators of personal responsibility for conducting ethical research, and whether it is wise to do so, remains to be seen.
In May 2000 the Department of Health and Human Services announced new regulatory and legislative initiatives concerning federally sponsored research involving human subjects. In September, the Office for Protection from Research Risks was reconstituted as the Office for Human Research Protections and, with a new director, completed its transition to the Office of the Secretary at DHHS. In the months leading up to these changes, both the OPRR and the Food and Drug Administration had been increasingly active in levying sanctions against institutions whose institutional review boards were malfunctioning or that had engaged in questionable research practices, especially in human genetics trials.
Weeks after DHHS secretary Donna Shalala's announcement, a bipartisan group of congressional sponsors led by Congresswoman Diana DeGette, Democrat from Colorado, introduced the Human Research Subjects Protections Act of 2000. Among other reforms, the act would extend informed consent and prior review requirements to all human subjects research, regardless of funding source. Senator Ted Kennedy, Democrat from Massachusetts, introduced a bill that would establish steep civil penalties for investigators and institutions that broke the rules. Leading up to all this activity, since 1997 there had been several congressional hearings on human subjects research, as well as numerous reports and recommendations by public and private panels concerning the state of the regulatory system.
All these hearings, bills, and reports had one thing in common: They all found or presupposed a need to strengthen the human subjects protections system. Although some individuals representing the community of scientific investigators raised their voices in objection to increased regulation--especially the psychiatric community in response to the National Bioethics Advisory Commission's recommendations concerning research involving persons with mental disorders--theirs were largely lonely voices. Protests that new measures would block important research seemed hard to sustain in light of two decades of remarkable advances under the current system, which at the time of its introduction was itself described as being so burdensome that it would threaten medical progress. Reservations about increased bureaucracy, or even the question whether the proposals being advanced would have avoided any actual patient or subject injuries, were overwhelmed by a historical tide that presages a new era in the history of human subjects regulations, an era that I call "strong protectionism."
A New World Order
The essence of strong protectionism is the minimization of clinical researchers' discretion in governing their conduct with regard to human subjects. Among the measures implied by strong protectionism are concurrent third party monitoring of consent and study procedures, disclosure of financial arrangements or other potential conflicts of interest, required training of investigators in research ethics and research regulations, and independent review of the decisionmaking capacity of potential subjects. All these and other measures have been proposed, and many may be implemented, in spite of the additional costs in time and money they represent, and regardless of the inference an observer may draw that clinical researchers are simply not to be trusted.
In this article my purpose is neither to challenge nor defend these early stirrings of what I believe to be a new era in the history of human subjects protections. It is, rather, to note how inured we have become to this grim view of investigator discretion, and how far we have traveled to reach this pass. The current transition to strong protectionism builds on two previous stages. During the first, singularly important period, lasting roughly from 1947 to 1981, the ancient tradition of weak protectionism, which granted enormous discretion to physician experimenters, began to break down. Following that was the era that is now passing away, a compromise between physician discretion and modest external oversight that I call moderate protectionism.
Perhaps it was inevitable that moderate protectionism could last only about twenty years. It was a compromise that combined substantial researcher discretion with rules enforced by a minimal bureaucracy. An important part of this compromise was that researchers for the most part had the prerogative of identifying potential conflicts of interest themselves, without external review. Researchers' use of human subjects was approved before it took place and accounted for after the fact, but only very rarely was there third party observation of the research activities themselves.
The moderately protectionist era might have lasted longer had the research environment not changed so much, had so much money not poured into research as the result of promising new areas for investigation and investment, had the proportion of private funding not increased so drastically, and had the number and complexity of studies not grown so rapidly. Together, these elements strained the twenty-year compromise and may have caused its collapse, even though it was a period little blemished by harms to persons, at least as compared with the scandalous era that immediately preceded it.
Weak Protectionism: Virtue Has Its Day
Concerns about the involvement of human beings in research are at least a century old. In the nineteenth century, many institutionalized children in Europe and the United States were subjects in vaccine experiments, and by the 1890s antivivisectionists were calling for laws to protect children. At the turn of the century the Prussian government imposed research rules and Congress considered banning medical experiments for certain populations, such as pregnant women, in the District of Columbia. In the ensuing decades there were occasional well-publicized scandals, mostly involving child subjects, and the first attempt to test a polio vaccine was stopped after the American Public Health Association censured the program.
Prior to World War II, however, medical researchers were largely inoculated against regulation by the nearly legendary status of the self-experimentation conducted by members of the Yellow Fever Commission, led by U.S. Army physician Walter Reed. One of the commissioners, Dr. Jesse Lazear, died after subjecting himself to the bite of the mosquito that transmits the disease. Lazear thereby helped to confirm the hypothesis of the disease's spread. A less celebrated but equally notable element of the Reed story is his use in Cuba of an early written contract for the Spanish workers who were among the commission's other subjects.
For some reason, Reed himself was widely thought to have been one of the volunteer subjects, perhaps due to his untimely death only a few years later as a result of a colleague's error. This misconception added to the legend and to the model of medical researchers as having exceptional moral character, even to the point of martyrdom. The Reed myth became a singular reference point and justification for the self-regulation of medical science. During the 1960s, when researchers were coming under new scrutiny and weak protectionism was under attack, the distinguished physician-scientist Walsh McDermott referred to the Reed story to demonstrate the social importance of research and the high moral standing that went with it.
By the early 1950s, there were gestures in the direction of a protectionist attitude toward human subjects, but they were in a fairly abstract, philosophical vein rather than in a robust set of institutionalized policies and procedures. An example is the Army's failure to implement a compensation program for prisoners injured in malaria or hepatitis studies, though one was contemplated in the late 1940s. The essential feature of the weak form of protectionism was its nearly wholesale reliance on the judgment and virtue of the individual researcher. Thus when the World Medical Association began deliberations in 1953 on the first Declaration of Helsinki, informed consent was made a far less prominent feature than it had been in the Nuremberg Code, which the medical community found unacceptably legalistic. Helsinki also introduced the notion of surrogate consent, permitting research when individuals are no longer competent to provide consent themselves. These moves placed a substantial burden on the self-control of the individual researcher.
To be sure, until the middle and later 1960s, and with the significant exception of the Nazi experience, to many there did not seem to be good reason to worry about human protections. The development of penicillin, the conquest of polio, and the emergence of new medical devices and procedures all bolstered the public prestige of biomedical research. There were only inklings of a continuing, low-intensity concern about the concentrated power of medical researchers even in the 1950s, exemplified perhaps in the gradual disappearance from professional discussions of the term "human experiment" and its replacement with the more detached and reassuring "research."
On the whole, then, the world of clinical studies from the late 1940s up through the mid-1960s was one in which a weak form of protectionism prevailed, one defined by the placement of responsibility on the individual researcher. Obtaining written informed consent (through forms generally labeled "permits," "releases," or "waivers"), though apparently well-established in surgery and radiology, was not common in clinical research and cannot have provided more than a modicum of increased protection to human subjects. For example, whether a medical intervention was an "experiment" or not, and therefore whether it fell into a specific moral category that required an enhanced consent process, was a judgment largely left up to the researcher. Partly that judgment depended on whether the individual was a sick patient or a healthy volunteer. The former were likely to be considered wholly under the supervision of the treating doctor, even when the intervention was quite novel and unlikely to be of direct benefit. An individual might be asked to consent to surgery but might not be informed beyond some generalities about its experimental aspect.
There were some important exceptions. The Atomic Energy Commission established a set of conditions for the distribution of radioisotopes to be used with human subjects, including the creation of local committees to review proposals for radiation-related projects. Early institutional review boards were established in several hospitals (including Beth Israel in Boston and the City of Hope in California) in order to provide prior group review for a variety of clinical studies. The Clinical Center of the National Institutes of Health in Bethesda, Maryland, which opened in 1953, appears to have been one of a handful of hospitals that required prospective review of clinical research proposals by a group of colleagues. Yet as advanced as the Clinical Center might have been, the prior group review process it established seems, at least at first, to have been confined to research with healthy, normal volunteers. It was apparently not appreciated that at least some sick patients who would probably not be helped by study participation were morally in the same position as normal volunteers, who (with the possible exception of those in vaccine studies) stood to gain no benefit.
Prior group review is essential to the transition beyond weak protectionism and was not common before the 1970s. Yet decades earlier there was a keen awareness of the psychological vulnerability inherent in the subject role, a vulnerability that could have argued for independent review of a research project. An extensive psychological literature, founded mainly on psychoanalytic theory, propounded a skeptical view of the underlying motivations of experiment volunteers as early as 1954. That year, Louis Lasagna and John M. von Felsinger reported in Science on the results of Rorschach studies and psychological interviews of fifty-six healthy young male volunteers in drug research. The authors concluded that the subjects exhibited "an unusually high incidence of severe psychological maladjustment." "There is little question," they wrote, "that most of the subjects ... would qualify as deviant, regardless of the diagnostic label affixed to them by examining psychiatrists or clinical psychologists." The authors theorized that the group might not be representative of the population from which it was drawn (college students), and that they might have been attracted to the study for various reasons having to do with their deviance, beyond financial reward.
I describe this study not to endorse its psychology or its conclusions, nor to imply that neurotic tendencies are either typical of research volunteers or a priori disqualifying conditions for decisionmaking capacity. The point is, rather, that thought was being given as early as 1954 to the question of the recruitment of subjects who might be vulnerable despite their healthy and normal appearance. The article was published in a major scientific journal. It would have been natural to ask further questions about the vulnerability of potential research subjects who are known to be seriously ill. Yet despite this psychological theorizing, which could be viewed as quite damning to the moral basis of the human research enterprise, protectionism was at best a weak force for years to come.
An occasion for the significant revision of this picture came at the end of the Second World War, when twenty-three Nazi doctors and medical bureaucrats were tried for crimes associated with vicious medical experiments on concentration camp prisoners. The defendants were selected from about 350 candidates. Although only 1,750 victims were named in the indictment, they were a fraction of the thousands of prisoners used in a wide variety of vicious experiments, many in connection with the Nazi war effort. Some involved the treatment of battlefield injuries or prevention of the noxious effects of high altitude flight. Others, such as the sterilization experiments, were undertaken in the service of Nazi racial ideology, and still another category had to do with developing efficient methods of killing.
A strong defense mounted by the defendants' lawyers noted that the Allies had also engaged in medical experiments in the service of the war effort. As the prosecution's attempt to demonstrate that there were clear international rules governing human experimentation faltered, the judges decided to create their own set of rules, known to posterity as the Nuremberg Code, the first line of which is, "The voluntary consent of the human subject is absolutely essential." Although the court seemed to believe that protections were needed, it is not clear how intrusive they wished these protections to be in the operations of medical science. The judges declined, for example, to identify persons with mental disorders as requiring special provisions, although their medical expert urged them to do so. The very requirement of voluntary consent for all undermined the relevance of their code to experiments involving persons with diminished or limited competence, and the extreme circumstances that gave rise to the trial seemed quite distant from normal medical research.
Unlike the medical profession as a whole, some government agencies attempted to put the code to use, although with little success. In the early 1950s the Department of Defense adopted the Nuremberg Code, along with written and signed consent, as its policy for defensive research on atomic, biological, and chemical weapons. A 1975 report from the Army Inspector General pronounced that initiative a failure. In 1947 the new Atomic Energy Commission attempted to impose what it termed "informed consent" on its contractors as a condition for receiving radioisotopes for research purposes. It also established--or attempted to establish--a requirement of potential benefit for the subject. Both of these conditions were to apply to nonclassified research. This relatively protectionist attitude may not have been adopted with a great deal of appreciation of its implications. In any case, the AEC met with resistance among some of its physician contractors, but not its physician advisors. The agency's stance ultimately was not institutionalized, and the letters setting out the requirements seem to have been soon forgotten. (Indeed, the requirement of potential benefit seems incompatible with the research on trace-level radiation that the AEC sponsored shortly thereafter.)
Historians of research ethics generally date the increasing vigor of protectionist sentiment among high-level research administrators, as well as the general public, to the series of events that began with the Thalidomide tragedy and continued with scandals such as the Brooklyn Jewish Chronic Disease Hospital Case and, later, the Willowbrook hepatitis research. These cases cast doubt on the wisdom of leaving judgments about research participation to the researchers' discretion. The Jewish Chronic Disease Hospital Case, in which elderly debilitated patients were injected with cancer cells, apparently without their knowledge or consent, attracted the attention and concern of James A. Shannon, then director of the NIH. Shannon's intervention, and the resistance it met from within his own staff, were an important and revealing moment in the history of human subjects protections.
In late 1963 Shannon appointed his associate chief for program development, Robert B. Livingston, as chair of a committee to review the standards for consent and requirements of NIH-funded centers concerning their procedures. The Livingston Committee affirmed the risks to public confidence in research that would result from more cases like that of the Jewish Chronic Disease Hospital. Nonetheless, in its 1964 report to Shannon the committee declined to recommend a code of standards for acceptable research at the NIH, on the grounds that such measures would "inhibit, delay, or distort the carrying out of clinical research." Deferring to investigator discretion, the Livingston Committee concluded that the NIH was "not in a position to shape the educational foundations of medical ethics" (pp. 99-100).
Disappointed but undeterred by his committee's response, Shannon joined Surgeon General Luther Terry in proposing to the National Advisory Health Council that the NIH take responsibility for formal controls on investigators. The NAHC essentially endorsed the proposal and resolved that human subjects research should be supported by the Public Health Service only if "the judgment of the investigator is subject to prior review by his institutional associates to assure an independent determination of the protection of the rights and welfare of the individual or individuals involved, of the appropriateness of the methods used to secure informed consent, and of the risks and potential medical benefits of the investigation." The following year Surgeon General Terry issued the first federal policy statement that required PHS-grantee research institutions to establish what were subsequently called research ethics committees. The seemingly innocent endorsement of "prior review by institutional associates" was the most significant single departure from the weakly protectionist tradition toward the process that finally yielded the moderately protectionist system we have today.
The Surgeon General's policy was hardly typical of contemporary attitudes, however, and the practice it sought to implement is one we are still trying to effect. To appreciate the weakness of the form of protectionism that prevailed through the 1960s, it is useful to recall the dominant role that prison research once had in drug development in the United States. By 1974 the Pharmaceutical Manufacturers Association estimated that about 70 percent of approved drugs had been through prison research. Pharmaceutical companies literally built research hospitals on prison grounds. Although in retrospect we may think of limits on prison research as a triumph of protectionism (on the grounds that prisoners cannot give free consent), at the time it was a confluence of political and cultural forces that had little to do with actual abuses (though there certainly were some) and was resisted by prison advocates. Perhaps the most important public event that signaled the inevitable end of widespread prison research was the 1973 publication of "Experiments behind Bars" by Jessica Mitford in the Atlantic Monthly.
Within the medical profession itself, then, weak protectionism remained the presumptive moral position well into the 1970s, if not later. Neither of the most important formal statements of research ethics, the Nuremberg Code and the Helsinki Declaration, had nearly as much effect on the profession as a 1966 New England Journal of Medicine paper by Harvard anesthesiologist Henry Beecher. The importance of timing is evident in the fact that Beecher had been calling attention to research ethics abuses since at least 1959, when he published a paper entitled "Experimentation in Man," but his 1966 publication "Ethics and Clinical Research" attracted far more attention. One important distinguishing feature of the latter work was Beecher's allusion to nearly two dozen cases of studies alleged to be unethical that had appeared in the published literature. By "naming names" Beecher had dramatically raised the stakes.
It would, however, be an error to conclude that Beecher himself favored external review of clinical trials that would remove them from medical discretion. To the contrary, Beecher was one among a large number of commentators who favored (and in some instances continue to favor) reliance primarily on the virtue of the investigator. Although he strongly defended the subject's right to voluntary consent, he argued in his 1959 paper that the best protection for the human subject would be obtained by ensuring that the investigator possessed "an understanding of the various aspects of the problem" being studied, and he was quite critical of the Nuremberg Code's dictum that the subjects themselves should have sufficient knowledge of the experiment before agreeing to participate. Nor was Beecher's attitude toward the Code's provisions limited to philosophical musings. In 1961 the Army attached a new provision to its standard research contract that essentially restated the Nuremberg Code. Along with other members of Harvard Medical School's Administrative Board, Beecher protested and persuaded the Army Surgeon General to insert into Harvard's research contracts a statement that its Article 51 offered "guidelines" rather than "rigid rules."
Beecher's attitude was shared by many other distinguished commentators on research practices through the 1960s and 1970s. In 1967 Walsh McDermott expressed grave doubt that the "irreconcilable conflict" between the "individual good" and the "social good" to be derived from medical research could be resolved, and certainly not by "institutional forms" and "group effort"--apparently references to ethics codes and peer review. McDermott's comments were by way of introduction to a colloquium at the annual meetings of the American College of Physicians on "The Changing Mores of Biomedical Research." In his remarks McDermott alluded to the growing contribution of research to the control of disease, beginning with Walter Reed's yellow fever studies. Thus, he continued, "medicine has given to society the case for its rights in the continuation of clinical investigation," and "playing God" is an unavoidable responsibility, presumably one to be shouldered by clinical investigators.
Another distinguished scientist who made no secret of his skepticism toward the notion that the investigator's discretion could be supplemented by third parties was Louis Lasagna. In 1971 Lasagna wondered "how many of medicine's greatest advances might have been delayed or prevented by the rigid application of some currently proposed principles to research at large." Rather, "for the ethical, experienced investigator no laws are needed and for the unscrupulous incompetent no laws will help" (p. 109). Six years later, when the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research proposed a moratorium on prison research, Lasagna caustically editorialized that the recommendations "illustrate beautifully how well-intentioned desires to protect prisoners can lead otherwise intelligent people to destroy properly performed research that scrupulously involves informed consent and full explanation and avoids coercion to the satisfaction of all but the most tunnel-visioned doctrinaire."
It is perhaps worth noting that both Beecher and Lasagna had good reason to reflect on the problem of research ethics. In 1994 Lasagna told interviewers for the Advisory Committee on Human Radiation Experiments that between 1952 and 1954 he was a research assistant in a secret, Army-sponsored project, directed by Beecher, in which hallucinogens were administered to healthy volunteers without their full knowledge or consent. Lasagna said that he reflected "not with pride" on the episode.
Weak Protectionism: The Death Knell
Among those who developed an interest in research ethics during the 1960s was Princeton theologian Paul Ramsey. Although Ramsey is today remembered as one who took a relatively hard line on research protections, and although he significantly advanced the intellectual respectability of a protectionist stance, in retrospect his position seems quite modest. In his landmark 1970 work, The Patient as Person, Ramsey declared that "No man is good enough to experiment upon another without his consent." In order to avoid the morally untenable treatment of the person as a mere means, the human subject must be a partner in the research enterprise. However, Ramsey was prepared to accept nonconsented treatment in an emergency, including experimental treatment that might save life or limb. He also acceded to the view that children who cannot be helped by standard treatment may be experimental subjects if the research is related to their treatment and if a parent consents.
The emergence of modern bioethics at the end of the 1960s brought another nonphysician commentator onto the scene. While generally agreeing with the theologian Ramsey in advocating strict limits on professional discretion, the philosopher Hans Jonas struck a more passionate, even haunting tone: "We can never rest comfortably in the belief that the soil from which our satisfactions sprout is not watered with the blood of martyrs. But a troubled conscience compels us, the undeserving beneficiaries, to ask: Who is to be martyred? in the service of what cause? and by whose choice?" In explicitly calling forth survivor guilt in the benefiting public, Jonas also deepened the moral burden on the clinical investigator and called attention to the moral paradox of human experimentation.
By 1970 the notion that consent was ethically required was well established in principle (including surrogate consent for children and incompetents), however poorly executed in practice. Ramsey's contribution was in calling attention to the problem of nonbeneficial research participation, a decision that required at a minimum the human subject's active participation, while Jonas insisted on the inherent and unavoidable unfairness of human experimentation. As though to underline the point, only two years after Ramsey's and Jonas's writings, the Tuskegee Syphilis Study scandal broke into the open. Here was a case in which the subjects were clearly not informed participants in research and were obviously used as mere means to ends they did not share. The subsequent federal panel appointed to review the study, the Tuskegee Syphilis Study Ad Hoc Panel, concluded that penicillin therapy should have been made available to the participants by 1953. The panel also recommended that Congress create a federal panel to regulate federally sponsored research on human subjects, a move that foreshadowed and helped define the later transition from weak to moderate protectionism.
News of Tuskegee demolished the approach defended in the 1960s by McDermott and Lasagna. In the years immediately following Beecher's 1966 article it was still possible to argue that scientists should take responsibility to make what McDermott regarded as appropriately paternalistic decisions for the public good, decisions that recognize that societal interests sometimes take precedence over those of the individual. Although there are surely instances in which this general proposition is unobjectionable, following the syphilis study such an argument became much harder to endorse. In a word, the tide of history was turning against the physician commentators.
As the implications of Tuskegee became apparent, philosopher Alan Donagan published an essay on informed consent in 1977 that symbolized the altered attitude. Donagan's critique ventured well beyond those of Ramsey and Jonas. In Donagan's essay the invigorated informed consent requirement is taken as nearly a self-evident moral obligation in clinical medicine. In his discussion of informed consent in experimentation, Donagan explicitly compared the arguments of a Nazi defense attorney with those of McDermott and Lasagna, concluding that they are both versions of a familiar and (one infers) rather primitive form of utilitarianism. Donagan concluded that, by the lights of the medical profession itself, the utilitarian attitudes instanced in the Nazi experiments and the Brooklyn Jewish Chronic Disease Hospital case cannot be justified. Perhaps still more telling is the mere fact that Donagan, a highly respected moral philosopher who could not be dismissed as a "zealot," could associate the arguments of Nazis with those of some of America's most highly regarded physicians. Donagan's essay underlined a leap in the evolution of protectionism through the Tuskegee experience, especially on the question of the balance between the subject's interests and those of science and the public, and on the discretion subsequently to be granted the lone investigator.
Two Forms of Accessionism
To be sure, the story is not one of an inexorable march toward stronger protectionism. Although the tendency since the advent of the Nuremberg Code--greatly strengthened in the United States by the Belmont Report--has been to limit the scope of investigator discretion, there have been countervailing forces. One of these has been the Declaration of Helsinki, which has employed the concepts of therapeutic and nontherapeutic research, defining the former as "Medical Research Combined with Professional Care." According to the version of Helsinki drafted in 1989, "If the physician considers it essential not to obtain informed consent, the specific reasons for this proposal should be stated in the experimental protocol for transmission to the independent committee." Thus Helsinki continued to contemplate a relatively permissive attitude toward investigator discretion, as it has since its first version in 1964. Henry Beecher preferred Helsinki to Nuremberg precisely because the former is a "set of guides" while the latter "presents a set of legalistic demands."
Another force counteracting the tendency to limit investigator discretion has been a movement on behalf of greater access to clinical trials. The most pronounced expression of this effort has occurred among AIDS activists, who in the late 1980s successfully insisted on the creation of alternative pathways for making unproven anti-AIDS drugs available. In the face of a disease that resisted treatment and struck down people just entering the prime of life, the determination to find solutions was understandable. The slogan embraced by ACT-UP (AIDS Coalition to Unleash Power), that "A Drug Trial is Health Care Too," was a political expression of confidence in the power of science. The slogan also depended on some assumptions about the benefits of research participation and the self-discipline of the medical research community. Further, it relied on the very protections it sought to shortcut. This movement has, it appears, largely run its course; the activists who launched it have revised their attitude toward alternative pathways of access to nonvalidated medications and have become more critical of their earlier position.
The ACT-UP slogan reflects what might be called "therapeutic accessionism." Another and much more durable movement could be described as "scientific accessionism." In the late 1980s female political leaders noted the paucity of women in clinical trials and finally brought about significant changes in NIH and FDA policies. Similar policy reforms have recently been introduced for children. Unlike therapeutic accessionism, this view is wholly consistent with strengthened protections for subjects. In fact, it could be said to follow from the principle of justice endorsed by the National Commission in The Belmont Report, since it attempts to further extend the benefits of research.
Moderate protectionism was perhaps being dismantled as soon as it was born. In the early 1980s the National Commission's successor, the President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, made recommendations on the evaluation and monitoring of institutional review boards and endorsed the proposition that research-related injuries should be compensated. The impact of the President's Commission's efforts to sustain the pressure brought to bear by the National Commission was muted, however, by a relatively (and uncharacteristically) scandal-free period in the history of human research ethics. Instead, the pressing need for anti-AIDS medications, and the accessionist movement that went with it, dominated the discussion of human subjects research for much of the 1980s and early 1990s.
The serenity was challenged in the 1990s by the revelations of cold war radiation experiments. Though the studies took place decades earlier, the story of the human radiation experiments provided an occasion for another look at the regulatory regime and how well it was working. Among the recommendations of the Advisory Committee on Human Radiation Experiments in 1995 were several that would strengthen human subject protections. For example, the ACHRE urged that regulations be established to cover the conduct of research with institutionalized children and that guidelines be developed to cover research involving adults with questionable competence. The ACHRE also recommended steps to improve existing protections for military personnel concerning human subject research. Substantial improvements were urged in the federal oversight of research involving human subjects: that outcomes and performance be evaluated beyond audits for cause and paperwork review, that sanctions for violations of human subjects protections be reviewed for their appropriateness in light of the seriousness of failures to respect the rights and welfare of human subjects, and that human subjects protections be extended to nonfederally funded research. The ACHRE also recommended the creation of a mechanism for compensating those injured in the course of participation as subjects of federally funded research.
Within eighteen months of the ACHRE's final report, on 17 May 1997, the National Bioethics Advisory Commission unanimously adopted a resolution that "No person in the United States should be enrolled in research without the twin protections of informed consent by an authorized person and independent review of the risks and benefits of the research." That same month President Clinton stated that "[w]e must never allow our citizens to be unwitting guinea pigs in scientific experiments that put them at risk without their consent and full knowledge."
At the end of 1998 NBAC recommended increased protections for persons with mental disorders that might affect their decisionmaking capacity, reminiscent of suggestions made by the National Commission twenty years before. On the whole, two decades after its advent, moderate protectionism was on the run before a flurry of federal activity.
On the account I have presented, protectionism is the view that a duty is owed those who participate as subjects in medical research. The underlying problem is how to resolve the tension between individual interests and scientific progress, where the latter is justified in terms of benefits to future individuals. Weak protectionism is the view that this problem is best resolved through the judgment of virtuous scientists. Moderate protectionism accepts the importance of personal virtue but does not find it sufficient. Strong protectionism is disinclined to rely, to any substantial degree, on the virtue of scientific investigators for purposes of subject protection.
We are today so accustomed to moderate protectionism that we have nearly forgotten the struggle that led to its establishment. Where once it was considered radical, moderate protectionism is now embraced by the medical community. Consider, for example, the position exemplified in a recent essay on ethics in psychiatric research, in which the authors state that "the justification for research on human subjects is that society's benefit from the research sufficiently exceeds the risks to study participants." But then the authors continue, "potential risks and benefits must be effectively communicated so that potential subjects can make informed decisions about participation." The current battleground, then, is not whether the subjects should in theory be full participants, or whether prior review of experiment proposals should be required, but whether, or to what extent, subjects can take an active role in the clinical trials process. To the extent that such active participation can be achieved, the introduction of more strongly protectionist requirements may be forestalled.
Implicit in all discussions about the ethics of clinical trials has been the assumption that the investigator bears a significant degree of moral responsibility for the dignity and well-being of the human subject, a responsibility that cannot be sloughed off and assigned to someone else. In the words of the first article of the Nuremberg Code, "The duty and responsibility for ascertaining the quality of the consent rest upon each individual who initiates, directs or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity."
This sensibility has not wholly disappeared from our public discourse, even amid the onslaught of calls for higher levels of subject protection by regulatory means. Rather, the dispute turns on how much we should rely on the moral virtue of the individual investigator. While he was still a medical school professor, the person who recently became the first director of the Office for Human Research Protections wrote a passage explicitly reminiscent of Beecher's sympathy for a system based on the scientist's virtue. "In truth," wrote Greg Koski in 1999, "investigators are much better positioned during the course of their studies to protect the interests of individual research subjects than are the IRBs. Paradoxically, the person most likely to do something to harm a subject, the investigator, is also the person most capable of preventing such harm. And so, as Beecher ... concluded many years ago, the only true protection afforded research subjects comes from a well-trained, well-meaning investigator."
Koski's admiration for Beecher (another Harvard anesthesiologist) is evident, but his peroration has an air of nostalgia about it. Since his accession to the OHRP directorship, Koski has emphasized the importance of research ethics training for investigators. It remains to be seen whether education alone can slow the historic march toward strong protectionism.
A Moral Hazard
I have argued that the march of history is resolute in its rejection of investigator discretion. There is nonetheless a moral hazard in the strong protectionism that aims to supplant the scientist's virtue.
It would be understandable, though of course not admirable, if the scientist's sense of personal responsibility for his subjects were to be undermined in a much more intensely regulated environment. Paradoxically, the research scientist's sense of personal moral responsibility might weaken as the official and continuous scrutiny of scientific work is strengthened. From the investigator's standpoint, the care of human subjects could come to be seen as a concern secondary to the efficient and careful execution of the scientific mission, especially when society has assigned to others the job of protecting subjects. The clinical researcher might then feel justified in taking what Josiah Royce called a "moral holiday," focusing only on the science and leaving the task of protecting human subjects to those whose charge it is.
In this way strong protectionism might inadvertently result in undermining physician investigators' sense of personal moral responsibility in the conduct of human experiments. For all the limitations of that virtue in the protection of human subjects, it is surely not one that we would want medical scientists to be without. No less an authority than the Nuremberg Code tells us so. But in spite of the stirring appeals it might still inspire, the code was a product of a long history of weak protectionism, and we shall not see that time again.
[1.] S.E. Lederer and M.A. Grodin, "Historical Overview: Pediatric Experimentation," in Children as Research Subjects: Science, Ethics, and Law, ed. M.A. Grodin and L.H. Glantz (New York: Oxford University Press, 1994).
[2.] S.E. Lederer, Subjected to Science: Experimentation in America before the Second World War (Baltimore: Johns Hopkins University Press, 1995).
[3.] W. McDermott, "Opening Comments on the Changing Mores of Biomedical Research," Annals of Internal Medicine 67, Supp. 7 (1967): 39-42.
[4.] Advisory Committee on Human Radiation Experiments, The Human Radiation Experiments (New York: Oxford University Press, 1996), pp. 55-56.
[5.] R.R. Faden and T.L. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986).
[6.] L.M. Lasagna and J.M. Von Felsinger, "The Volunteer Subject in Research," in Experimentation with Human Beings, ed. J. Katz (New York: Russell Sage Foundation, 1972), pp. 623-24, at p. 623.
[7.] J.D. Moreno, Undue Risk: Secret State Experiments on Humans (New York: W.H. Freeman, 1999).
[8.] See ref. 4, Advisory Committee on Human Radiation Experiments, The Human Radiation Experiments, p. 63.
[9.] J.S. Reisman, Executive Secretary, NAHC, to J.A. Shannon, 6 December 1965 ("Resolution of Council").
[10.] W.J. Curran, "Governmental Regulation of the Use of Human Subjects in Medical Research: The Approach of Two Federal Agencies," in Experimentation with Human Subjects, ed. P.A. Freund (New York: George Braziller, 1970), pp. 402-54.
[11.] J. Mitford, "Experiments behind Bars: Doctors, Drug Companies, and Prisoners," Atlantic Monthly 23 (1973): 64-73.
[12.] H.K. Beecher, "Experimentation in Man," JAMA 169 (1959): 461-78; H.K. Beecher, "Ethics and Clinical Research," NEJM 274 (1966): 1354-60.
[13.] See ref. 4, Advisory Committee on Human Radiation Experiments, The Human Radiation Experiments, pp. 89-91.
[14.] See ref. 3, W. McDermott, "Opening Comments on the Changing Mores of Biomedical Research," pp. 39-42.
[15.] L. Lasagna, "Some Ethical Problems in Clinical Investigation," in Human Aspects of Biomedical Innovation, eds. E. Mendelsohn, J.P. Swazey, and I. Taviss (Cambridge, Mass.: Harvard University Press, 1971), p. 105.
[16.] L. Lasagna, "Prisoner Subjects and Drug Testing," Federation Proceedings 36, no. 10 (1977): 2349.
[17.] L. Lasagna interview by J. M. Harkness and S. White-Junod (ACHRE), transcript of audio recording, 13 December 1994 (ACHRE Research Project Series, Interview Program File, Ethics Oral History Project), 5.
[18.] P. Ramsey, The Patient as Person: Explorations in Medical Ethics (New Haven, Conn.: Yale University Press, 1970), pp. 5-7.
[19.] H. Jonas, "Philosophical Reflections on Experimenting with Human Subjects," in Experimentation with Human Beings, ed. J. Katz. (New York: Russell Sage Foundation, 1972), p. 735.
[20.] A. Donagan, "Informed Consent in Therapy and Experimentation," Journal of Medicine and Philosophy 2 (1977): 318-29.
[21.] Sir W. Refshauge, "The Place for International Standards in Conducting Research for Humans," Bulletin of the World Health Organization 55 (1977): 133-35, quoting H.K. Beecher, Research and the Individual: Human Studies (Boston: Little, Brown, 1970), p. 279.
[22.] President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Implementing Human Subject Regulations (Washington, D.C.: Government Printing Office, 1983).
[23.] President's Commission for the Study of Ethical Problems in Medicine and Biomedical and Behavioral Research, Compensating for Research Injuries: The Ethical and Legal Implications of Programs to Redress Injured Subjects, Vol. I, Report (Washington, D.C.: Government Printing Office, June 1982).
[24.] See ref. 4, Advisory Committee on Human Radiation Experiments, The Human Radiation Experiments, pp. 527-28.
[25.] National Bioethics Advisory Commission, Full Commission Meeting, Arlington, Virginia, 17 May 1997.
[26.] W.J. Clinton, Morgan State University Commencement Address, 18 May 1997.
[27.] National Bioethics Advisory Commission, Research Involving Persons With Mental Disorders That May Affect Decisionmaking Capacity (Washington, DC, 1998).
[28.] J.A. Lieberman et al., "Issues in Clinical Research Design: Principles, Practices, and Controversies," in Ethics in Psychiatric Research, eds. H.A. Pincus, J.A. Lieberman, and S. Ferris (Washington, D.C.: American Psychiatric Association, 1999), pp. 25-26.
[29.] G. Koski, "Resolving Beecher's Paradox," Accountability in Research 7 (1999).
Jonathan D. Moreno, "Goodbye to All That: The End of Moderate Protectionism in Human Subjects Research," Hastings Center Report 31, no. 3 (2001): 9-17.
Jonathan D. Moreno is Kornfield professor of biomedical ethics and director of the center for biomedical ethics at the University of Virginia, Charlottesville.