You’ve probably heard causal folks claim that “you can’t just run a regression on observational data.” But what does this mean? Scientists have been running regressions for a very long time; could they all have been wrong? We examine this question and discuss some examples.
Causal inference is the study of how actions, interventions, or treatments affect an outcome. This blog post provides a high-level introduction to the two main ways of making causal claims: 1) experiments and 2) observational studies.
In a previous post, we introduced the Neymanian approach to inference within the broader randomization-based framework. This post introduces the second dominant inferential strategy within that framework: the Fisher Randomization Test.
In this post, we introduce the fundamentals of randomization-based inference for the Neymanian approach.
Statistical inference teaches us “how” to learn from data, whereas identification analysis explains “what” we can learn from it. Although “what” logically precedes “how,” the concept of identification is less widely understood than that of estimation or inference. Since it is an important topic in causal inference, we will devote a series of posts to it. In this first installment, we give a general but somewhat abstract definition of identifiability. The next few posts in the series will focus on identification in the causal context.