Asian and Pacific Islander Cultural Values: Considerations for Health Care Decision Making


By the end of the century, almost 30 percent of U.S. residents will be people of color, that is, non-Caucasian; by the year 2050, this proportion is expected to reach almost 50 percent (Day, 1996). This high level of diversity is surpassed in Hawaii, where 71 percent of the population is of Asian or Pacific Islander ancestry and only 27 percent is white (Yatabe, Koseki, & Braun, 1996). However, the medical standards of today's health care system are based on Western values promulgated through such organizations as the Joint Commission on Accreditation of Healthcare Organizations, the American Medical Association (AMA), and the American Hospital Association (AHA). Conflicts that reach hospital ethics committees are resolved based on a Western model of ethics that includes four principles: autonomy, beneficence, nonmaleficence, and justice (Beauchamp & Childress, 1983). This article looks primarily at the principle of autonomy and the health care conflicts that can arise when it is applied to cultures that are more collectivist than individualist (Barker, 1994). To explore these issues further, this article describes the individualist value base that supports current U.S. health care policies; outlines collectivist decision-making norms, with examples from six Asian and Pacific Islander cultures; identifies specific problems that arise in health care when culture and policy clash; and presents implications for practice and research.

HEALTH CARE DECISION MAKING IN THE UNITED STATES

It may surprise many Americans that before 1960, health care decision-making practices in the United States were often paternalistic (Novack et al., 1979). Deference to physicians' decisions was commonplace, although there are reports that patients were uncomfortable depending solely on physicians to disclose their true diagnoses (Edge & Groves, 1994). Even after informed consent became an ethical obligation in 1957 (as articulated in the Code of Ethics of the AMA and the AHA), physicians remained resistant to telling patients about serious illness, especially if the prognosis was terminal (Feifel, 1990; Novack et al., 1979). For example, Oken (1961) found that only 12 percent of physicians surveyed in 1960 said they would tell patients of a diagnosis of incurable cancer.

The increased demand for the right of autonomous health care decision making in the United States originated in the consumerism movement of the 1960s and 1970s. Advances in medical technology expanded treatment options, providing consumers with more choices (Edge & Groves, 1994). To judge whether they were receiving the best course of treatment, many people demanded more knowledge about their diagnosed condition, the treatment options, and the benefits and risks associated with each option. With increased discretionary spending power, some people also exercised this demand through their pocketbooks, preferring to buy services from professionals who provided more information and options. In other cases, people sued their physicians for withholding information or for not allowing them to choose their course of treatment (Edge & Groves, 1994). Not until 1973 was the Patient's Bill of Rights adopted; it elevated patient self-determination from an ethical concern to a legal obligation for physicians (Edge & Groves, 1994; Foster & Pearman, 1978; Hattori et al., 1991). A marked change in practice resulted, as evidenced by a 1977 follow-up to Oken's (1961) study: of the physicians surveyed in 1977, 97 percent said that they would tell patients their true diagnosis, even if the prognosis was terminal (Novack et al., 1979). The Patient Self-Determination Act (PSDA) of 1990 (P.L. 101-508) furthered the individual's right to self-determination in health care decision making by requiring that health care institutions follow patient preferences for medical treatment as outlined in advance directives. …