Theory-based causal induction

Psychol Rev. 2009 Oct;116(4):661-716. doi: 10.1037/a0017201.

Abstract

Inducing causal relationships from observations is a classic problem in scientific inference, statistics, and machine learning. It is also a central part of human learning, and a task that people perform remarkably well given its notorious difficulties. People can learn causal structure in various settings, from diverse forms of data: observations of the co-occurrence frequencies between causes and effects, interactions between physical objects, or patterns of spatial or temporal coincidence. These different modes of learning are typically thought of as distinct psychological processes and are rarely studied together, but at heart they present the same inductive challenge: identifying the unobservable mechanisms that generate observable relations between variables, objects, or events, given only sparse and limited data. We present a computational-level analysis of this inductive problem and a framework for its solution, which allows us to model all these forms of causal learning in a common language. In this framework, causal induction is the product of domain-general statistical inference guided by domain-specific prior knowledge, in the form of an abstract causal theory. We identify 3 key aspects of abstract prior knowledge: the ontology of entities, properties, and relations that organizes a domain; the plausibility of specific causal relationships; and the functional form of those relationships. We show how these aspects provide the constraints that people need to induce useful causal models from sparse data.
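
To make the style of inference described above concrete, the sketch below shows Bayesian comparison of two candidate causal structures for simple contingency data: a graph in which a candidate cause C influences an effect E (together with an ever-present background cause) versus a graph with the background cause alone. It assumes a noisy-OR functional form and uniform priors on causal strengths, and integrates out those strengths to score each structure. The function names and the grid approximation are illustrative choices for this sketch, not the paper's implementation.

import numpy as np

def marginal_likelihood_graph0(n_e_c, n_c, n_e_noc, n_noc, grid=1000):
    """P(D | Graph 0): the effect depends only on a background cause of strength w0.
    Integrate the per-trial Bernoulli likelihood over a uniform prior on w0 (grid approximation)."""
    w0 = np.linspace(1e-6, 1 - 1e-6, grid)
    k = n_e_c + n_e_noc          # total trials where the effect occurred
    n = n_c + n_noc              # total trials
    lik = w0**k * (1 - w0)**(n - k)
    return np.trapz(lik, w0)

def marginal_likelihood_graph1(n_e_c, n_c, n_e_noc, n_noc, grid=300):
    """P(D | Graph 1): candidate cause C and the background combine via a noisy-OR:
    P(e+ | c+) = 1 - (1 - w0)(1 - w1),  P(e+ | c-) = w0.
    Integrate over uniform priors on (w0, w1)."""
    w0 = np.linspace(1e-6, 1 - 1e-6, grid)
    w1 = np.linspace(1e-6, 1 - 1e-6, grid)
    W0, W1 = np.meshgrid(w0, w1, indexing="ij")
    p_c = 1 - (1 - W0) * (1 - W1)                 # P(e+ | c+)
    lik = (p_c**n_e_c * (1 - p_c)**(n_c - n_e_c)
           * W0**n_e_noc * (1 - W0)**(n_noc - n_e_noc))
    inner = np.trapz(lik, w1, axis=1)             # integrate over w1
    return np.trapz(inner, w0)                    # then over w0

def causal_support(n_e_c, n_c, n_e_noc, n_noc):
    """Log Bayes factor favoring a C -> E link over no link."""
    return (np.log(marginal_likelihood_graph1(n_e_c, n_c, n_e_noc, n_noc))
            - np.log(marginal_likelihood_graph0(n_e_c, n_c, n_e_noc, n_noc)))

if __name__ == "__main__":
    # Hypothetical contingency data: 8 of 10 trials show the effect when C is present,
    # 2 of 10 when C is absent. A positive value favors the causal link.
    print(causal_support(n_e_c=8, n_c=10, n_e_noc=2, n_noc=10))

In terms of the abstract, the noisy-OR parameterization plays the role of the functional form of the relationship, while the choice of which structures to compare and how likely they are a priori reflects the ontology and plausibility constraints supplied by an abstract causal theory.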

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Association Learning*
  • Bayes Theorem
  • Causality*
  • Data Interpretation, Statistical*
  • Humans
  • Intuition
  • Logic
  • Models, Theoretical*
  • Observation
  • Problem Solving*
  • Psychological Theory*