Agent-Relative Restrictions and Agent-Relative Value

IN THIS PAPER, I POSE A CHALLENGE for attempts to ground all reasons in considerations of value. Some believe that all reasons for action are grounded in considerations of value. Some also believe that there are agent-centered restrictions, which provide us with agent-relative reasons against bringing about the best state of affairs, on an impartial ranking of states of affairs. Some would like to hold both of these beliefs. That is, they would like to hold that such agent-centered restrictions are compatible with a view that grounds all reasons for action in considerations of value. This is what I will argue is problematic.

My argument challenges a particular project: that of showing that all ethical theories are broadly consequentialist. Proponents of this project claim that every ethical theory can be captured by the claim that what one ought to do is to perform the act that would bring about the optimal outcome. (1) Theories would differ merely in how they determine the ranking of outcomes.

The idea is to take whatever other factors the deontological theory claims are relevant, and to work those factors into the evaluative ranking of outcomes. In this way, one has consequentialized the theory. An agent-neutral theory would claim that the ranking of outcomes is the same for everyone. But a theory that accorded more closely with common sense would have agent-relative rankings; how outcomes are ranked would vary from agent to agent. The attraction of this strategy, according to proponents, is that it would preserve what is compelling about consequentialism--its teleology and maximizing (2)--while also preserving more of commonsense morality than consequentialism does. (3)

I will call theories that ground all reasons in considerations of the good "teleological theories." My claim is that agent-centered restrictions will not fit into a teleological theory, understood as one which grounds all reasons in considerations of the good. If the correct moral theory is a teleological one, then there are no agent-relative restrictions. If there are agent-relative restrictions, then teleology is false. (4)

1. Agent-relative Restrictions and Consequentializing

In this section, I show why those who seek to develop a teleological theory must accept agent-relative value if they are to capture agent-relative restrictions. Agent-relative restrictions are restrictions on what one may do to bring about the (agent-neutrally) best state of affairs. (5) Such restrictions are thought to be an important element of commonsense morality that agent-neutral theories fail to capture--the other being agent-relative options, permissions to act in ways that do not bring about the best state of affairs. (6) For instance, commonsense morality recognizes a restriction against the killing of innocent persons. It is impermissible, commonsensically, to kill an innocent individual in order to harvest his organs, even though doing so would enable us to save five other lives. Of course, it is open to a theory with a pluralistic axiology and no agent-centered restrictions to claim that such killings are particularly bad, worse than mere deaths, so that killing the innocent individual would not in fact produce the most good. (7) But such a theory, although it handles the previous case, does not handle the following one: suppose that killing one innocent individual is the only way to stop the killings of two other innocent individuals. Commonsense morality still seems to recognize a restriction here--and many ethical theorists, for instance Kantians, believe there to be a restriction here--but no agent-neutral teleological theory will recognize such a restriction, since on such a theory two killings are worse than one. (8)

The restriction against killing an innocent person says that at least sometimes one may not kill an innocent person, even if this would prevent the killings of multiple other innocent people, and there are no other morally relevant circumstances. …