A few years ago, Bill Joy, a cofounder of Sun Microsystems and coauthor of the Java software specification, published a controversial article in Wired magazine in which he suggested that certain paths of scientific and technological research -- genetic engineering, robotics, and nanotechnology -- posed such great dangers to the future of human beings that we ought to think twice before proceeding down those paths. Joy believes that what distinguishes these technologies from earlier ones is their potential for self-replication, thus raising the specter of a "future [that] doesn't need us." Not all technologists share Joy's concern, however. For example, in a panel discussion of "humanoid robotics" that appeared in Discover, Marvin Minsky, one of the founders of the field of artificial intelligence, commented, "I don't see anything wrong with human life being devalued if we have something better."
Others, while not necessarily agreeing with Minsky's optimistic outlook for robots, have dismissed Joy's article as a naive statement of technological determinism. For example, in a recent review of Michael Crichton's nanorobot thriller Prey, Freeman Dyson argues that "Joy ignores the long history of effective action by the international biological community to regulate and prohibit dangerous technologies." Nonetheless, I find Joy's article worthy of notice for a number of reasons. First, a leader in the technical community speaking out on ethical issues, though not unheard of, is certainly rare. Second, Joy's focus on "macroethical" issues reflects a growing trend in engineering ethics. And third, the three problem areas cited by Joy -- robotics, nanotechnology, and genetic engineering -- indicate the growing need for collaboration between engineering ethicists and computer ethicists.
I started work as a consultant in the electric utility industry in the mid-1970s, a few years after earning my bachelor's degree in electrical engineering (and after a brief interlude studying creative writing). Though the first oil shock had just taken place, the utility industry was still barreling toward the future with plans to double generating capacity every ten years. In retrospect, I can identify many ethical issues that went unnoticed at the time. Conflicts of interest, such as the underestimation of costs in planning studies to perpetuate the need for consulting services, though not everyday occurrences, were clearly present. Construction flaws and survey errors were overlooked to maintain good relations with contractors and to avoid embarrassing other engineers. Public concerns about nuclear power were belittled. And while these events sometimes tugged at my conscience, engineering ethics was a subject that was never broached in my education or work experience. Hand calculators had replaced slide rules, but computer simulations were still uncommon. I recall being criticized by a supervisor for writing in a business-development prospectus that we would attack a particular problem using a digital computer. Computers, he scolded, are merely tools -- it was our engineering expertise that made us attractive to clients.
By the time I returned to graduate study in the early 1980s, engineering ethics was emerging as a full-fledged branch of applied ethics. Federally funded collaborations between engineers and philosophers led to significant developments in research and teaching. While moral theories, grounded in philosophy, and engineering codes of ethics, grounded in part in engineering's desire to earn respect as a "profession," competed for the attention of scholars and teachers, the case study emerged as a principal mode of pedagogy. Issues covered ranged from conflict-of-interest cases and industrial secrets to protecting public health, safety, and welfare, which all contemporary codes of engineering ethics now list as of "paramount" importance. For the most part, the behavior of individual engineers and the internal workings of the engineering profession (or what now might be called "microethics") received the most attention. …