Causal inference is the study of how actions, interventions, or treatments affect an outcome. This blog post provides a high-level introduction to the two main ways of making causal claims: 1) experiments and 2) observational studies.
In a previous post, we introduced the Neymanian approach to inference, within the broader randomization-based framework. This post introduces the second dominant inferential strategy within the randomization-based framework – the Fisher Randomization Test.
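To make the idea concrete, here is a minimal sketch of a Fisher Randomization Test for the sharp null hypothesis of no treatment effect for any unit, using a difference-in-means test statistic. This is an illustrative implementation, not code from the post; the function name and the choice of test statistic are assumptions.

```python
import numpy as np

def fisher_randomization_test(y, z, n_draws=10_000, seed=0):
    """Fisher Randomization Test (illustrative sketch).

    Tests the sharp null of no effect for any unit by re-randomizing
    the treatment labels and comparing the difference-in-means statistic
    to its observed value.

    y: outcomes; z: binary treatment indicators (0/1).
    Returns the randomization p-value.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    z = np.asarray(z)
    observed = y[z == 1].mean() - y[z == 0].mean()
    count = 0
    for _ in range(n_draws):
        z_perm = rng.permutation(z)  # re-randomize treatment assignment
        stat = y[z_perm == 1].mean() - y[z_perm == 0].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_draws
```

Under the sharp null, every re-randomization is equally likely, so the p-value is simply the fraction of assignments whose statistic is at least as extreme as the one observed.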
In this post, we introduce the fundamentals of randomization-based inference for the Neymanian approach.
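As a rough sketch of what Neymanian inference computes in the simplest case, the code below pairs the difference-in-means estimate of the average treatment effect with Neyman's conservative variance estimator and a large-sample 95% confidence interval. The function name is hypothetical and this is not code from the post.

```python
import numpy as np

def neyman_estimate(y, z):
    """Difference-in-means ATE estimate with Neyman's conservative
    variance estimator (illustrative sketch).

    y: outcomes; z: binary treatment indicators (0/1).
    Returns (estimate, standard error, 95% confidence interval).
    """
    y = np.asarray(y, dtype=float)
    z = np.asarray(z)
    y1, y0 = y[z == 1], y[z == 0]
    tau_hat = y1.mean() - y0.mean()
    # Conservative variance estimate: s1^2/n1 + s0^2/n0
    var_hat = y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)
    se = np.sqrt(var_hat)
    ci = (tau_hat - 1.96 * se, tau_hat + 1.96 * se)
    return tau_hat, se, ci
```

The variance estimator is "conservative" because, over repeated randomizations, its expectation is at least the true sampling variance of the difference in means.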
Statistical inference teaches us “how” to learn from data, whereas identification analysis explains “what” we can learn from it. Although “what” logically precedes “how,” the concept of identification is less widely understood than that of estimation or inference. Since it is an important topic in causal inference, we will devote a series of posts to it. In this first installment, we give a general but somewhat abstract definition of identifiability. The next few posts in the series will focus on identification in the causal context.
People don’t always agree; that is a fact of life. Similarly, when running an experiment, not everyone has the same reaction to the intervention! It’s critical that data scientists, academics, and the general public understand that the global average may not always be the most important or meaningful measure. Instead, it is often more informative to study how the effect of an intervention varies across different population subgroups. This post explains, at a high level, what heterogeneous treatment effects are, why they are essential, and how to think about them.
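As a toy illustration of the idea, the sketch below estimates the treatment effect separately within each subgroup by taking the difference in means among treated and control units in that group. The function name and data layout are assumptions for illustration only.

```python
import numpy as np

def subgroup_effects(y, z, g):
    """Difference-in-means treatment effect within each subgroup
    (illustrative sketch of heterogeneous treatment effects).

    y: outcomes; z: binary treatment indicators (0/1);
    g: subgroup labels. Returns {group: estimated effect}.
    """
    y, z, g = np.asarray(y, dtype=float), np.asarray(z), np.asarray(g)
    effects = {}
    for group in np.unique(g):
        mask = g == group
        y1 = y[mask & (z == 1)]  # treated units in this subgroup
        y0 = y[mask & (z == 0)]  # control units in this subgroup
        effects[group] = y1.mean() - y0.mean()
    return effects
```

When the per-group estimates differ markedly, reporting only the global average can hide who benefits most (or is harmed) by the intervention.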