I have defended the possibility of rational moral constraint under relatively favourable information conditions. My moral agents, like Gauthier’s, employ the general strategy of publicizing their principles and responding to other players’ public principles. This strategy is information-intensive; assuming that reliable information about strategies is freely available may therefore beg questions central to Artificial Morality concerning the balance of advantage between more and less moral strategies.
In this chapter I weaken my information assumption. I begin by attending to the way straightforward maximizers use information about the other player. This alerts us to the possibility that some (flavours of) straightforward maximizers will sometimes co-operate with constrained players. Next I turn to the costs of responsive strategies: both predicting others’ behaviour and exposing one’s own principles can be risky, and I consider some ways that responsive strategies can fail. Attending to how the costs of information fall on different agents upsets Gauthier’s generalization of his results to the less-than-transparent case. More important, it allows unconditional co-operators to do surprisingly well, leading to stable populations of diverse agents, with strongly disturbing results for the relation between game and moral theory.
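The favourable information condition at issue can be made concrete. The following sketch is illustrative only: the payoff numbers and agent labels are my assumptions, not the author’s, and “free transparency” is modelled simply by handing each agent the other’s public principle at no cost before it chooses a move in a one-shot Prisoner’s Dilemma.

```python
# Hypothetical sketch of one-shot Prisoner's Dilemma play under
# full transparency. Payoffs T > R > P > S are illustrative choices.
T, R, P, S = 3, 2, 1, 0  # temptation, reward, punishment, sucker

def payoff(my_move, other_move):
    if my_move == 'C':
        return R if other_move == 'C' else S
    return T if other_move == 'C' else P

# Each agent maps the other player's public principle (a label) to a move.
def uc(other_label):  # unconditional co-operator
    return 'C'

def sm(other_label):  # straightforward maximizer: defects regardless
    return 'D'

def cc(other_label):  # conditional co-operator: co-operates only with
    return 'C' if other_label in ('UC', 'CC') else 'D'  # co-operative types

agents = {'UC': uc, 'SM': sm, 'CC': cc}

def play(a, b):
    move_a = agents[a](b)  # transparency: each agent reads the other's
    move_b = agents[b](a)  # principle reliably and at no cost
    return payoff(move_a, move_b), payoff(move_b, move_a)

print(play('CC', 'SM'))  # (1, 1): conditional co-operator avoids exploitation
print(play('CC', 'CC'))  # (2, 2): mutual co-operation
print(play('UC', 'SM'))  # (0, 3): unconditional co-operator is exploited
```

Weakening the information assumption means removing the guarantee that the label passed to each agent is accurate or free, which is what exposes the responsive strategy `cc` to the prediction and exposure risks discussed below.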
SM STRIKES BACK
We should reconsider the claim that only morally constrained agents can co-operate in the Prisoner’s Dilemma. Game theorists will insist