In the previous chapter I used the methods of Artificial Morality to defend David Gauthier’s conception of constrained maximization. I argued that conditional co-operation (CC) was a procedurally possible strategy that was more successful than straightforward maximization (SM). That argument is incomplete; several important questions remain unanswered. First, conditional co-operation does not exhaust the possibilities of constrained behaviour. We need to consider other possible moral agents and the means to test them instrumentally. (I will put off moral evaluation until the next chapter.) Conditional co-operators are not the only responsive players; perhaps other designs will prove more successful. More successful playing with whom? This raises the second question, about the population used to test the various players. Gauthier simplifies drastically by dividing the world into moral and amoral agents. I think we need to consider more complex populations. I introduce a new constrained agent, the reciprocal co-operator (RC), a responsive player that exploits unconditional co-operators. In this chapter I argue for the rational superiority of reciprocal co-operation; in the next I consider its evident moral defects.
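The comparison among these agents can be made concrete with a small computational sketch. The following is my illustration, not the author's own program: it stages a transparent one-shot Prisoner's Dilemma among four player types named in the text (SM, an unconditional co-operator UC, CC, and RC), with payoff numbers of my choosing in the standard PD ordering. Transparency is modelled crudely by letting each player inspect the opponent's identity, which sidesteps the regress that a literal "simulate the other player" check would create.

```python
# Hypothetical sketch of a transparent one-shot Prisoner's Dilemma tournament.
# Player names follow the text; the payoff values (3 > 2 > 1 > 0, standard
# PD ordering) and the identity-based "transparency" test are my assumptions.

PAYOFF = {("C", "C"): 2, ("C", "D"): 0,  # (my move, opponent's move) -> my payoff
          ("D", "C"): 3, ("D", "D"): 1}

def sm(opp):
    # Straightforward maximizer: defects against everyone.
    return "D"

def uc(opp):
    # Unconditional co-operator: co-operates with everyone.
    return "C"

def cc(opp):
    # Conditional co-operator: co-operates with anyone who would
    # co-operate in return (here, read off the opponent's type).
    return "C" if opp in (uc, cc, rc) else "D"

def rc(opp):
    # Reciprocal co-operator: like CC, except it defects against UC,
    # whose co-operation does not depend on RC's own response.
    return "C" if opp in (cc, rc) else "D"

def score(player, population):
    # Total payoff for one player against every member of the population.
    return sum(PAYOFF[(player(opp), opp(player))] for opp in population)

population = [sm, uc, cc, rc]
for p in population:
    print(p.__name__, score(p, population))
# Under these assumed payoffs: rc 8, cc 7, sm 6, uc 4.
```

In this toy population RC outscores CC precisely because it collects the exploitation payoff against UC while losing none of CC's co-operative gains, which is the rational advantage the chapter goes on to defend.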
So far I have followed Gauthier’s line of argument; now we diverge. Gauthier, convinced of the rationality of constrained maximization in the simplest case where players’ dispositions are transparent, proceeds to discuss more complex and realistic cases where transparency fails. In contrast, I see a problem with the justification of conditional co-operation as rational even in the transparent case. (I will drop the transparency assumption later in Chapter 8.) While CC does better