David Danks (email@example.com)
Department of Philosophy, Carnegie Mellon University; & Institute for Human & Machine Cognition
135 Baker Hall, Pittsburgh, PA 15213 USA
Much of human cognition and activity depends on causal beliefs and reasoning. In psychological research on human causal learning and inference, we usually suppose that we have a set of binary potential causes, C1, …, Cn, and a known binary effect, E, each typically coded as the presence or absence of a property or event. The differentiation into potential causes and effect is made on the basis of external factors, such as prior knowledge or temporal information.
Given these variables, people are then asked to infer the existence and strength of causal relationships between the Ci's and E from observed data in one of several formats (serially, as a list, or in a summary). The standard measure of people's causal beliefs is a rating of some proxy for causal influence, where a zero rating indicates no causal relationship. The exact probe question varies between experiments, and has been found to significantly impact participants' ratings (e.g., Collins & Shanks, under review).
A variety of theories have been proposed to explain people's causal inferences in this type of highly limited scenario (see Danks, forthcoming, for a theoretical overview and synthesis). One general view for which there is a growing body of evidence is that people's causal beliefs and learning are well modeled as the learning of a causal Bayesian network (CBN, henceforth).
CBNs have proven to be a powerful framework for representing and learning causal structure from observational, experimental, and mixed data (e.g., Pearl, 2000; Spirtes, Glymour, & Scheines, 2000). At a general level, a CBN contains two distinct, related components: a directed acyclic graph (DAG) that represents qualitative causal relationships (X → Y means that X is a direct cause of Y); and quantitative information about the strengths of the various causal connections (e.g., a joint probability distribution, a set of linear equations, and so on). These components are connected through the Markov and Faithfulness/Stability assumptions, which constrain the ways in which causal relationships manifest themselves in observational and experimental data. These assumptions are domain-general, and themselves testable.
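To make the two components concrete, the following minimal sketch (purely illustrative; the variable names and probability values are assumptions, not taken from any cited experiment) represents a CBN over three binary variables as a DAG plus a conditional probability table (CPT) for each variable, with the Markov assumption licensing the factorization of the joint distribution into per-variable conditionals:

```python
# A minimal illustrative CBN: a DAG over binary variables plus one CPT per node.
dag = {"X": [], "Y": ["X"], "Z": ["Y"]}   # the chain X -> Y -> Z

# Each CPT gives P(node = 1 | parent values), keyed by the tuple of parent values.
# These numbers are arbitrary, for illustration only.
cpts = {
    "X": {(): 0.3},
    "Y": {(0,): 0.1, (1,): 0.8},
    "Z": {(0,): 0.2, (1,): 0.7},
}

def joint_prob(assignment):
    """Markov factorization: P(X, Y, Z) = P(X) * P(Y | X) * P(Z | Y)."""
    p = 1.0
    for node, parents in dag.items():
        p_one = cpts[node][tuple(assignment[pa] for pa in parents)]
        p *= p_one if assignment[node] == 1 else 1.0 - p_one
    return p

print(joint_prob({"X": 1, "Y": 1, "Z": 0}))   # 0.3 * 0.8 * (1 - 0.7) = 0.072
```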
There are essentially two different strategies for learning a CBN from data: (i) score-based or Bayesian approaches; or (ii) constraint-based (C-B) approaches. In the former, we search either heuristically or exhaustively for the CBN that maximizes P(CBN | observed data). We focus here on the latter approach, in which we determine the set of DAGs that could possibly have produced the observed independencies and associations (given Markov and Faithfulness). C-B algorithms thus take a set of independence and association judgments as input, and output an equivalence class of DAGs, all of which make identical predictions about the observed data. For small numbers of variables, the equivalence class will frequently not be a singleton, and so we will have a set of possibilities that cannot be distinguished given the data.
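As a toy illustration of the score-based strategy (a sketch under simplifying assumptions: the parameters of each candidate structure are fixed in advance and the structure prior is uniform, whereas real Bayesian scores integrate over parameters), we can compare P(CBN | data) for two candidate structures over a single cause C and effect E:

```python
# Toy score-based comparison: with a uniform prior over structures,
# P(CBN | data) is proportional to the likelihood of the data under each CBN.
from math import prod

# Candidate 1, "C -> E", with assumed parameters P(C=1) and P(E=1 | C).
def lik_chain(c, e):
    pc, pe_c = 0.5, {0: 0.2, 1: 0.8}
    return (pc if c else 1 - pc) * (pe_c[c] if e else 1 - pe_c[c])

# Candidate 2, "no edge": C and E independent, each a fair coin.
def lik_null(c, e):
    return 0.5 * 0.5

data = [(1, 1), (1, 1), (0, 0), (1, 1), (0, 0), (0, 1)]  # observed (C, E) pairs

scores = {
    "C -> E": prod(lik_chain(c, e) for c, e in data),
    "no edge": prod(lik_null(c, e) for c, e in data),
}
total = sum(scores.values())
for name, score in scores.items():
    print(name, score / total)   # posterior under a uniform structure prior
```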
As an example of a C-B algorithm, suppose that we have data on X, Y, and Z. There are six different independencies (conditional and unconditional) that may or may not hold among these three variables. Suppose that some process yields the following statistical judgments about the variables: the only independence (of the six possibilities) is that X and Z are unconditionally independent. If we further suppose that there are no unobserved common causes (latents) of these variables, then there is exactly one DAG that could have produced these data: X → Y ← Z. (If we drop the "no latents" assumption, then there might be unobserved common cause(s) of X and Y, or of Z and Y, in addition to or in place of the X → Y and Z → Y edges.)
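This example can be checked mechanically. The sketch below (an illustration of the constraint-based idea, not an algorithm from the literature) enumerates all DAGs over {X, Y, Z}, derives each DAG's implied independencies by d-separation (tested via moralization of the relevant ancestral subgraph), and keeps the DAGs whose pattern is exactly "X and Z unconditionally independent, nothing else"; the unique survivor is the collider X → Y ← Z.

```python
from itertools import combinations, product

NODES = ("X", "Y", "Z")

def ancestors(dag, nodes):
    """`nodes` plus all of their ancestors; a DAG is a set of (parent, child) edges."""
    result, changed = set(nodes), True
    while changed:
        changed = False
        for p, c in dag:
            if c in result and p not in result:
                result.add(p)
                changed = True
    return result

def d_separated(dag, a, b, given):
    """Test a _||_ b | given by separation in the moralized ancestral subgraph."""
    keep = ancestors(dag, {a, b} | set(given))
    edges = {frozenset(e) for e in dag if e[0] in keep and e[1] in keep}
    for child in keep:  # "marry" parents that share a child
        parents = [p for p, c in dag if c == child and p in keep]
        for p1, p2 in combinations(parents, 2):
            edges.add(frozenset((p1, p2)))
    frontier, seen = [a], {a}  # search for b, never passing through `given`
    while frontier:
        node = frontier.pop()
        for e in edges:
            if node in e:
                other = next(iter(e - {node}))
                if other not in seen and other not in set(given):
                    seen.add(other)
                    frontier.append(other)
    return b not in seen

def acyclic(edges):
    remaining = set(NODES)
    while remaining:  # repeatedly peel off nodes with no outgoing edges
        sinks = [n for n in remaining
                 if not any(p == n and c in remaining for p, c in edges)]
        if not sinks:
            return False
        remaining -= set(sinks)
    return True

def all_dags():
    """Every DAG over NODES: each unordered pair is absent, ->, or <-."""
    pairs = list(combinations(NODES, 2))
    for choice in product((None, 0, 1), repeat=len(pairs)):
        edges = {(u, v) if c == 0 else (v, u)
                 for (u, v), c in zip(pairs, choice) if c is not None}
        if acyclic(edges):
            yield edges

# The six possible independence statements over three variables:
# three unconditional, three conditional on the remaining variable.
STATEMENTS = [(a, b, cond) for a, b in combinations(NODES, 2)
              for cond in ((), tuple(set(NODES) - {a, b}))]

def implied(dag):
    return frozenset(s for s in STATEMENTS if d_separated(dag, *s))

# The judgments from the example: the ONLY independence is X _||_ Z.
target = frozenset({("X", "Z", ())})
print([sorted(dag) for dag in all_dags() if implied(dag) == target])
# -> [[('X', 'Y'), ('Z', 'Y')]], i.e., the unshielded collider X -> Y <- Z
```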
Both types of approach have been used for rational analyses of causal learning and categorization. Examples using Bayesian approaches include Griffiths, Baraff, & Tenenbaum (2004) and Tenenbaum & Griffiths (2001, 2003). Rational analyses with C-B algorithms include Gopnik, Glymour, Sobel, Schulz, Kushnir, & Danks (2004) and Kushnir, Gopnik, Schulz, & Danks (2003).
A range of evidence suggests that human causal learning is best modeled as CBN structure learning, including: learning from manipulations (as opposed to just observations); learning when the variables are not differentiated into causes and effect; and differences in predictive and diagnostic learning. All of these phenomena can be explained by both C-B and Bayesian approaches to CBN structure learning, and have been modeled elsewhere.
Another important piece of evidence for the CBN theory is that people are seemingly able to use domain-specific knowledge to draw causal conclusions based on small sample sizes, and CBN learning algorithms are the only ones currently on offer that have the flexibility to model this adaptive behavior (Griffiths, et al., 2004; Tenenbaum & Griffiths, 2003).
Griffiths et al. (2004) have drawn a further conclusion based on inferences from small samples: C-B algorithms cannot model this phenomenon, and so are incorrect, because they do not incorporate domain-specific knowledge.