Contiguity and Covariation in Human Causal Inference

Article excerpt

Nearly every theory of causal induction assumes that the existence and strength of causal relations need to be inferred from observational data in the form of covariations. The last few decades have seen much controversy over exactly how covariations license causal conjectures. One consequence of this debate is that causal induction research has taken for granted that covariation information is readily available to reasoners. This perspective is reflected in typical experimental designs, which either employ covariation information in summary format or present participants with clearly marked discrete learning trials. I argue that such experimental designs oversimplify the problem of causal induction. Real-world contexts are rarely structured so neatly; rather, the decision about whether a cause and effect co-occurred on a given occasion constitutes a key element of the inductive process. This article will review how the event-parsing aspect of causal induction has been and could be addressed in associative learning and causal power theories.

Psychological research on human causal learning has largely adopted David Hume's (1739/1888) framework, according to which causality cannot be observed directly in the environment (see also Young, 1995, for an analysis of the Humean cues to causality). Although certain physical events may appear to give rise to instant causal perception (such as one billiard ball setting in motion a stationary ball by colliding with it), there is actually nothing in the event itself that can assure us of the causal relation (the ball could have been moved, for instance, by an ingenious magnetic mechanism from below).1 However, as Hume observed, two important principles can guide us in our search for causal explanations: the contingency between two events, and their contiguity. Starting with contiguity, the obvious constraint imposed by Hume is that causes need to be followed by their effects immediately in order to be credited with causality.2 But contiguity alone is of course not sufficient for causality, as two events could have followed each other merely by chance. Hence the importance of contingency or regularity: Only if two things repeatedly and reliably follow each other do we infer that they are causally related.

As this volume testifies, the vast majority of work on human causal learning, particularly in the last decade or so, has been exclusively concerned with the causal evaluation of contingency data (see Shanks, Holyoak, & Medin, 1996, for an overview). One reason for this could be that a wide range of computational models have been proffered to explain how contingency information can be transformed into a measure of causal strength, and, as the literature shows, researchers in the field are still engaged in debates over which measure is the appropriate one (e.g., Buehner, Cheng, & Clifford, 2003; Cheng, 1997; Lober & Shanks, 2000; White, 2003). What unifies all these approaches is the assumption that reasoners transform information about the (joint) presence or absence of a cause c and an effect e into a measure of causal strength. In its simplest form, such information can be represented in a 2 × 2 contingency table, as depicted in Figure 1.
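The 2 × 2 table described above can be sketched as a simple data structure. The cell labels a–d follow the standard convention (a: cause and effect both present; d: both absent); the counts here are purely hypothetical, chosen only for illustration:

```python
# Hypothetical 2x2 contingency table for a cause c and an effect e.
# Standard cell labels: a = c & e, b = c & not-e,
#                       c = not-c & e, d = not-c & not-e.
table = {
    ("c", "e"): 27,          # a: cause present, effect present
    ("c", "not-e"): 3,       # b: cause present, effect absent
    ("not-c", "e"): 10,      # c: cause absent, effect present
    ("not-c", "not-e"): 20,  # d: cause absent, effect absent
}

# Total number of observations across all four cells.
n = sum(table.values())
print(n)  # 60
```

Each observation (a learning trial, in a typical experiment) falls into exactly one of the four cells, so the cell counts exhaust the covariation information available to the reasoner.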

A long-standing proposal (see, e.g., Allan & Jenkins, 1980; Jenkins & Ward, 1965) has been that causal strength is determined by subtracting the probability of the effect in the absence of the cause, P(e | ¬c) = c/(c + d), from the probability of e in the presence of c, P(e | c) = a/(a + b). This measure, ΔP = P(e | c) - P(e | ¬c), is commonly accepted as the mathematical formalization of contingency, and for many years has also been heralded as a normative measure of causation. At the same time, several studies have consistently reported that causal judgments often deviate systematically from ΔP (e.g., Wasserman, Elek, Chatlosh, & Baker, 1993; see Cheng, 1997, for an overview), resulting in a plethora of suggestions (ranging from weighted probability contrasts and decision rules to associative learning networks) for how exactly the four entries in the contingency table are subjectively transformed into impressions of causal strength. …
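The ΔP rule itself is a two-line computation over the four cell counts. The function below is a minimal sketch; the counts in the usage example are hypothetical, not taken from any of the studies cited above:

```python
def delta_p(a, b, c, d):
    """Compute the probabilistic contrast
    delta-P = P(e | c) - P(e | not-c)
    from 2x2 contingency-table cell counts:
    a = c & e, b = c & not-e, c = not-c & e, d = not-c & not-e.
    """
    p_e_given_c = a / (a + b)          # P(e | c)     = a / (a + b)
    p_e_given_not_c = c / (c + d)      # P(e | not-c) = c / (c + d)
    return p_e_given_c - p_e_given_not_c

# Hypothetical data: the effect follows the cause on 27 of 30
# cause-present trials, but also occurs on 10 of 30 cause-absent trials.
contrast = delta_p(27, 3, 10, 20)
print(round(contrast, 3))  # 0.9 - 0.333... ≈ 0.567
```

A positive ΔP indicates a generative relation, a negative ΔP a preventive one, and ΔP = 0 indicates no contingency; the empirical finding noted above is that human judgments track this quantity only imperfectly.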