THE SEARCH FOR CONSTRAINT
Artificial Morality borders on fantasy. Writing moral principles for imaginary creatures playing abstract games sounds like science fiction. Indeed, Artificial Morality is science fiction—imaginative fantasy constrained by science. All simulation is; this is not grounds for dismissal.[1] A problem remains. Without scientific constraint, fantasy can be tedious.[2] And I appear to be doing all that I can to cut myself loose from any such mooring. In particular, I seem to ignore recent advances in the scientific understanding of behaviour. Sociobiology claims to explain many kinds of behaviour, including the moral phenomenon of altruism, within a methodologically attractive framework of individualistic rationality.
Sociobiology is attractive, and I do wish to avoid fantasy. Indeed, this book began when reading Richard Dawkins's splendid The Selfish Gene convinced me that my moral intuitionism was methodologically embarrassing. So the project began with sociobiology and owes most of its methods to research in that field. None the less, one cannot build a fundamental justification of morality on the main results of sociobiology: kin and reciprocal altruism. The first provides no fundamental justification of morality; the second is not, strictly speaking, about morality at all. I shall set out these reasons in detail in the first two sections of this chapter. This leaves me free of nature; is there any source of constraint left? Yes, there remain the boundaries of what can be constructed using the most general and adaptable of means, a programmable general-purpose automatic symbol interpreter: a computer. Artificial Morality takes its source of constraint here, from the limits of what is procedurally possible.