Wednesday, March 22, 2017

Causal inference (4): Benefits and barriers

Over the past week, I have written a series of posts on contemporary advances in causal inference.  The series has covered a lot of ground, ranging from the foundations of causal inference (part 1), to adjustment for confounding (part 2), to causal inference for mediation designs (part 3).  The wide-ranging nature of my review reflects the enormous progress made in this area over the past few decades.

In the process of writing these posts, I have been struck by two observations.  First, formal causal inference could have enormous benefits for psychological science.  Second, despite these apparent benefits, formal causal inference is largely absent from the field.

I have detailed some of the benefits of causal inference in my first three posts.  However, I wanted to spend a final post reflecting on these benefits.  I will also describe what I perceive to be the main obstacles to adopting causal inference in psychological research.


The benefits of formal causal inference


Scientists think about causality a lot, and psychological scientists are no exception.  The main point of departure between Pearl's Structural Causal Model (SCM) and the current state of affairs is that the SCM provides formal mathematical machinery for evaluating causal claims.  Without this machinery, psychologists are left to puzzle out their causal inferences on their own.

The most obvious benefit of using formal mathematical machinery for this process is that, as long as your causal assumptions hold, the quantity you estimate from the data is provably causal.  Having to lay out the assumptions that underlie your causal model also forces you to explicitly identify the conditions under which that proof will not hold.  Once these assumptions are out in the open, they can be accepted or critiqued as appropriate; they cannot escape reader scrutiny.
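To make this concrete, here is a standard example from Pearl's framework (the variable names are placeholders, not a specific psychological model): if we are willing to assume that a measured covariate set Z blocks every back-door path from a treatment X to an outcome Y, then the effect of intervening on X is identified by the adjustment formula

$$
P(Y = y \mid do(X = x)) \;=\; \sum_{z} P(Y = y \mid X = x,\, Z = z)\, P(Z = z),
$$

and this equality holds as a theorem, not a convention.  If the back-door assumption about Z is wrong, the formula offers no protection -- which is precisely why stating the assumption explicitly matters.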

A secondary benefit is that creating a rigorous mathematical machinery for causal analysis builds a bridge to other areas of math, which in turn allows the development of new tools that would otherwise be unavailable.  Here is a sampling of possibilities:
  1. Non-parametric causal inference.  An interesting property of the causal inference tools that I've described is that they don't care about the functional form of the relationships between variables, nor about the distribution of errors.  This means that they are completely independent of the General Linear Model or any other parametric statistical model.  This in turn means that we can apply these tools whether our outcome variables are quantitative, ordered, or categorical; our causal inferences are independent of the particular form we impose on our data.  (A minimal worked sketch appears after this list.)
  2. Evaluating claims that cannot be tested experimentally.  In my post about mediation, I noted that although the definition of the mediation effect relies on impossible hypotheticals, we can estimate this effect provided certain assumptions are satisfied.  Many other causal questions share this property of impossibility, and formal causal inference allows us to address them (Pearl, 2000).  These questions include, for example, whether my headache would have been prevented had I taken an aspirin, or what fraction of adverse side effects are attributable to a candidate drug.  Despite their scientific and practical importance, these questions cannot be evaluated experimentally, but they can be evaluated using the machinery of causal inference.
  3. Evaluating the robustness of causal claims.  I often have an intuitive sense that certain causal claims are more "robust" than others.  Putting causal inference on a firm mathematical foundation allows us to develop rigorous notions of "robustness".  In particular, "robustness" can be framed as the sensitivity of causal quantities to relaxing causal assumptions (Pearl, 2004).  This can sometimes allow us to put bounds on the possible values of particular causal quantities, even when those quantities are not identifiable.  (A small illustration of such bounds also appears after this list.)
  4. Meta-synthesis.  Once a field of study is sufficiently developed, researchers face the challenge of aggregating knowledge across the many available studies in a way that provides an accurate picture of the state of the field.  One common way to accomplish this is through meta-analysis: taking weighted averages of a particular statistic across studies.  Of course, this process assumes that the contributing studies are sufficiently comparable that averaging makes sense.  Causal inference can bring additional tools to this problem.  By specifying, for each study, the causal assumptions that are and are not reasonable, researchers can make maximum use of the information each study provides, enabling not just meta-analysis, but meta-synthesis (Pearl, 2012).
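To give a flavor of point 1, here is a minimal sketch (the data-generating process and variable names are invented purely for illustration) showing that the back-door adjustment above can be computed by simple stratification on a measured confounder, with no linear model or error distribution in sight:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical binary confounder Z, treatment X, and outcome Y.
# Z raises the probability of both treatment and outcome, so the
# naive X-Y association is confounded.
z = rng.binomial(1, 0.5, size=n)
x = rng.binomial(1, np.where(z == 1, 0.7, 0.2))
y = rng.binomial(1, 0.1 + 0.3 * x + 0.4 * z)   # true effect of X is +0.30

# Naive (confounded) contrast: P(Y=1 | X=1) - P(Y=1 | X=0)
naive = y[x == 1].mean() - y[x == 0].mean()

# Back-door adjustment, computed non-parametrically by stratifying on Z:
# P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
def p_y_do_x(x_val):
    return sum(
        y[(x == x_val) & (z == z_val)].mean() * np.mean(z == z_val)
        for z_val in (0, 1)
    )

adjusted = p_y_do_x(1) - p_y_do_x(0)
print(f"naive contrast:    {naive:.3f}")     # ~0.50, inflated by confounding
print(f"adjusted contrast: {adjusted:.3f}")  # ~0.30, recovers the true effect
```

Nothing in the identification step cares that I happened to generate Y from a linear probability model; the same stratification would work if Y were ordinal or continuous, with the stratum means estimating E[Y | X, Z] instead of probabilities.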
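Point 3 can also be made concrete with the simplest possible example of bounding.  This is the classic "no-assumptions" bound usually credited to Manski and Robins, which I include only to convey the flavor of what bounding looks like; it is not the specific robustness measure developed in Pearl (2004).  For a binary outcome, write the average potential outcome under treatment as

$$
E[Y_{1}] \;=\; P(Y = 1 \mid X = 1)\,P(X = 1) \;+\; P(Y_{1} = 1 \mid X = 0)\,P(X = 0).
$$

The second conditional probability is unobservable, but it must lie between 0 and 1, so even with no assumptions at all about confounding we can say that

$$
P(Y = 1,\, X = 1) \;\le\; E[Y_{1}] \;\le\; P(Y = 1,\, X = 1) + P(X = 0).
$$

The interval is wide, but adding assumptions narrows it, which is exactly the sense in which bounds quantify how much a conclusion leans on its assumptions.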
Many of these topics are ones that already deeply interest psychological scientists.  A unified mathematical framework could arm them with powerful logical machinery for attacking these issues.


Barriers to its adoption in psychology


These benefits are, to my mind, quite significant.  However, the relatively slow uptake of formal causal modeling in psychology suggests that a few barriers must be overcome before such adoption can occur.  I can think of at least three:
  1. The necessity of assumptions.  Causal inference is inseparable from the assumptions on which it rests.  The problem is that psychological researchers do not like their conclusions to be conditional on assumptions.  The irony here is that our conclusions are always conditional on assumptions -- we just rarely acknowledge or defend them.
  2. Math-y-ness.  The machinery of causal inference is rooted in formal mathematical proof.  That should be an asset, but it works against popularizing these concepts in psychology.  The reason is simple: psychological scientists typically receive very little training in formal math.  There are no doubt many reasons for this -- for example, math-avoidant people may self-select into psychology, a problem that could be exacerbated by training that de-emphasizes the formal foundations of the mathematical concepts we do use.  But there is little doubt that formal math does not, at the moment, play a strong role in most areas of psychological research.
  3. The underdevelopment of psychological theory.  This is the big one.  Valid causal inference requires that you fully specify, ahead of time, the causal assumptions you are willing to make.  Some of these assumptions can come from the design of your study (e.g., from the knowledge that conditions were assigned at random).  However, some must come from substantive theoretical knowledge.  The problem is that, I fear, few areas of psychology have knowledge well-developed enough to allow easy a priori specification of causal assumptions.
Of the three barriers identified above, (1) and (2) are both addressable through changes in education that better emphasize math and logic.  Barrier (3) is a bit harder.

However, if we are to make progress in developing psychological theory, we need to start somewhere.  This means defining a working set of assumptions, even if those assumptions are not entirely justifiable.  If, over time, we find that the assumptions are no longer tenable given the data we have accumulated, we define a new set and re-evaluate the conclusions that rested on the old ones.

This project is already under way in some areas.  For example, Jan Smedslund (1991) developed a full set of assumptions that he called "psychologic".  These assumptions could serve as a good basis for the specification of causal assumptions.  In a more recent example, MacInnis and Page-Gould (2015) developed a set of assumptions for predicting the impact of intergroup contact, which could serve as a basis for causal inference in this area.

I believe that formal causal inference could be a huge boon for accelerating psychological science.  I hope that I have convinced you of the same.


References

MacInnis, C. C., & Page-Gould, E. (2015). How Can Intergroup Interaction Be Bad If Intergroup Contact Is Good? Exploring and Reconciling an Apparent Paradox in the Science of Intergroup Relations. Perspectives on Psychological Science, 10, 307–327.

Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge, UK: Cambridge University Press.

Pearl, J. (2004, July). Robustness of causal claims. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (pp. 446-453). AUAI Press.

Pearl, J. (2012). The do-calculus revisited. arXiv preprint arXiv:1210.4852.

Smedslund, J. (1991). The Pseudoempirical in Psychology and the Case for Psychologic. Psychological Inquiry, 2, 325–338.
