A crucial battle in the longstanding conflict between science and religion will concern the prospects for "cybernetic immortality"--the possibility that converging technologies will offer humans extended lives within information systems, robots, or genetically engineered biological organisms.
Cyberimmortality would require redefining human personalities as dynamic patterns of information, and human life as a process of evolution from material to computational planes of existence. The convergence of cognitive science with information technology already threatens traditional beliefs that lie at the heart of religion, notably the need for a God to save souls.
In the middle of the nineteenth century, psychiatrists struggled with the apparent contradiction between the traditional notion of an immortal soul and the discovery that injury, illness, and old age can rob an individual of memory, personality, and many other functions. If much of an individual's personality can be destroyed during life, the idea that it can survive death intact becomes less plausible. Later, the psychoanalytic school of psychology, led by Sigmund Freud, explicitly criticized religion as an illusion or a shared psychosis. But most branches of psychology, like science more generally, discreetly chose to avoid the topic.
Beginning around 1980, cognitively oriented researchers in sociology, psychology, and anthropology began developing rigorous models of religious faith. By explaining it, they came perilously close to debunking it. In the 1980s, sociologist Rodney Stark and I published a deductive theory, bolstered by extensive sociological data, about how social interaction can lead to belief in supernatural sources of rewards when highly desired rewards (like eternal life) are unavailable in the real world. Wishful thinking, we suggested, is the typical mode of religious cognition.
On the basis of psychological and anthropological data, a number of writers have argued that the human brain evolved in such a way that people tend to see complex phenomena as the actions of conscious beings, thus encouraging belief in gods. In Descartes' Baby (2004), psychologist Paul Bloom argues that humans imagine they have souls because the human brain has no awareness of its own functioning. We falsely perceive ourselves to be separate from our bodies.
Cognitive scientists may not say explicitly that religious beliefs are errors of perception and interpretation, but that is what many people would infer. Cognitive science is making considerable progress in understanding how human thought and behavior arise in the structures and electrochemical processes of the human brain, and it has found no evidence of a soul.
At the same time, computer scientists have been creating technologies that effectively imitate an ever-growing range of distinct functions of human intelligence, although the goal of full artificial intelligence (AI) remains elusive. As more people gain experience with computers, information systems, robots, and AI agents, cultural assumptions are likely to change. These machine intelligences cannot yet duplicate human mental behavior, but they do provide rather compelling illustrations of how pure mechanism could give rise to complex thought.
More to the point, people are increasingly archiving their memories, experiences, and thoughts by means of computers. In the future, when combined with AI surrogates (avatars, robots, and distributed intelligence), these archives will enable technological resurrection and migration to new worlds in outer space. Although this development will be of vast historical significance, it may begin so imperceptibly within the standard customs of the society that it initially arouses no opposition from the churches or other conventional institutions.
In modern societies with market-oriented economies, the influence of religion and religious institutions has retreated with the advance of industrial and service corporations. …