If I have been successful so far, I have induced you to take seriously the possibility that we can learn something about rationality and morality by projecting social problems onto a radically simplified artificial world. In this chapter I reflect on the distance that we have covered. How far have we come and how close are we to our goal? I begin by stating my conclusions. Then I characterize my method as functionalism, contrast it with some criticisms that deny the possibility of artificial morality, and ask whether it is objectionable that my theory may not apply to people. Finally I consider how we might improve and more thoroughly test my sketchy conjectures, ending with some brief lessons learned from artificial morality.
THE CONTENT OF ARTIFICIAL MORALITY
The term ‘artificial morality’ could be taken to mean many things, providing a vague and moving target for critics. To remedy this I will pin my project down by assigning a proper name to the claims to which I am committed. Artificial Morality (the theory) consists of the following general methodological aim and the specified claims about the best means of satisfying this goal.
Methodological thesis. Programming artificial players to score well in tournaments of various players playing abstract, non-iterated, mixed-motive games is a good way to develop a fundamental justification of morality.
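The tournament setup the methodological thesis describes can be sketched in miniature. The following is my own toy illustration, not code from this project: transparent players meet once in a one-shot Prisoner's Dilemma, and a conditional player can read the other's disposition before moving. The payoff values and the rule that two conditional cooperators resolve their mutual test by jointly cooperating are simplifying assumptions of the sketch.

```python
from itertools import combinations

# One-shot Prisoner's Dilemma payoffs as (row score, column score).
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}

def move(me, other):
    """Move chosen by disposition `me` against an opponent whose
    disposition `other` is transparently visible."""
    if me == "UC":   # unconditional cooperator
        return "C"
    if me == "UD":   # unconditional defector
        return "D"
    if me == "CC":   # conditional cooperator: cooperate only with
        # players who would cooperate in return; the case of two
        # conditional cooperators is resolved as joint cooperation.
        return "C" if other in ("UC", "CC") else "D"
    raise ValueError(f"unknown disposition: {me}")

def tournament(players):
    """Round-robin, non-iterated tournament; each pair plays once.
    Returns the total score of each entrant."""
    scores = [0] * len(players)
    for i, j in combinations(range(len(players)), 2):
        a, b = PAYOFF[(move(players[i], players[j]),
                       move(players[j], players[i]))]
        scores[i] += a
        scores[j] += b
    return scores
```

In a population of one unconditional cooperator, one unconditional defector, and three conditional cooperators, `tournament(["UC", "UD", "CC", "CC", "CC"])` returns `[6, 6, 7, 7, 7]`: the conditional cooperators outscore the defector, which is the kind of tournament result the thesis proposes as evidence for a justification of (constrained) morality.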
Rational thesis. A player capable of responsively constraining itself to pursue outcomes mutually beneficial to itself and other