65th ISI World Statistics Congress 2025

The Role of Statistics and Data Science in Impact Evaluation

Organiser

Elizabeth Eisenhauer

Participants

  • Dr Jean Opsomer
    (Chair)

  • Prof. Dr. Stefan Sperlich
    (Presenter/Speaker)
    On the search for data-driven impact evaluation frameworks

  • Dr Lucy D’Agostino McGowan
    (Presenter/Speaker)
    Causal inference is not just a statistics problem

  • Dr Daniela De Angelis
    (Presenter/Speaker)
    Bayesian factor analysis for policy evaluation using time-series observational data

  • Dr José R. Zubizarreta
    (Presenter/Speaker)
    Anatomy of event studies: Hypothetical experiments, exact decomposition, and robust estimation

  • Dr Elizabeth Eisenhauer
    (Presenter/Speaker)
    Addressing Recruitment Bias in RCTs for Impact Evaluation

  • Category: International Statistical Institute

    Proposal Description

    Impact evaluation involves estimating an intervention’s effect on one or more outcomes of interest. It is commonly undertaken in policy domains such as public health, social services, and education. Statistical and data science tools are broadly applicable in impact evaluation, but they must be used carefully to estimate causal relationships correctly. The speakers will discuss modern approaches to impact evaluation.
    In different disciplines, “impact evaluation” refers to different problems and approaches, leading to confusion about the many methods, definitions, and notations in use. Stefan Sperlich (Université de Genève) will present a procedure that combines these methods, guiding practitioners from the selection of indicators and causality models through to significance tests for treatment effects. He will step through a specific example, then discuss strategies for minimizing the influence of subjective judgement on results when the study is not a pre-designed experiment. Graph theory and nonparametric statistics can play important roles.
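    A minimal sketch of that kind of workflow on simulated data, assuming a simple causal graph in which a single confounder Z drives both treatment and outcome: the adjustment variable is read off the assumed graph and the treatment effect is estimated nonparametrically by stratification. The variables and data-generating process are hypothetical, not taken from the talk.

```python
# Illustrative only: pick an adjustment set from an assumed causal graph,
# then estimate the average treatment effect nonparametrically.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Assumed causal graph: Z -> T, Z -> Y, T -> Y  (Z is a confounder).
z = rng.normal(size=n)
t = (rng.uniform(size=n) < 1 / (1 + np.exp(-z))).astype(int)   # treatment depends on Z
y = 2.0 * t + 1.5 * z + rng.normal(size=n)                     # true effect of T is 2.0

# Naive contrast ignores the confounder and is biased.
naive = y[t == 1].mean() - y[t == 0].mean()

# Nonparametric adjustment: stratify on Z and average within-stratum contrasts.
bins = np.quantile(z, np.linspace(0, 1, 11))
strata = np.digitize(z, bins[1:-1])          # stratum labels 0..9
effects, weights = [], []
for s in range(10):
    m = strata == s
    if t[m].sum() > 0 and (1 - t[m]).sum() > 0:
        effects.append(y[m][t[m] == 1].mean() - y[m][t[m] == 0].mean())
        weights.append(m.sum())
adjusted = np.average(effects, weights=weights)

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}, truth: 2.00")
```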
    Daniela De Angelis (University of Cambridge) will discuss variants of the causal factor analysis (FA) model widely used to estimate the impact of an intervention from observational time-series data on multiple units, allowing for measured and unmeasured confounders. De Angelis will demonstrate FA’s utility in settings with limited data, along with an extension that models the dependence of causal effects on modifiers. Fitting these models under the Bayesian paradigm leads to straightforward uncertainty quantification for causal quantities and can ensure data-driven model parsimony by exploiting regularizing priors.
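    The sketch below illustrates the underlying factor-model idea on simulated panel data, using a simple non-Bayesian shortcut (an SVD fit on control units) to impute the treated unit’s untreated counterfactual. The Bayesian fitting, priors, and uncertainty quantification described above are not reproduced here, and all names and dimensions are illustrative.

```python
# Illustrative only: a simplified (non-Bayesian) analogue of the factor-model idea.
# Latent factors are fit to control units and used to impute the untreated
# counterfactual of a treated unit after the intervention.
import numpy as np

rng = np.random.default_rng(1)
units, periods, r = 30, 40, 2
lam = rng.normal(size=(units, r))        # unit-specific loadings
f = rng.normal(size=(r, periods))        # common time factors
y0 = lam @ f + 0.3 * rng.normal(size=(units, periods))   # untreated potential outcomes

treated_unit, t0, tau = 0, 30, 1.5
y = y0.copy()
y[treated_unit, t0:] += tau              # intervention raises the outcome by 1.5

# Fit factors on control units only, then project the treated unit's
# pre-period onto them to predict its untreated post-period outcomes.
controls = np.arange(1, units)
u, s, vt = np.linalg.svd(y[controls], full_matrices=False)
f_hat = vt[:r]                                           # estimated factors (r x T)
lam_hat, *_ = np.linalg.lstsq(f_hat[:, :t0].T, y[treated_unit, :t0], rcond=None)
y0_hat = lam_hat @ f_hat                                 # imputed counterfactual

effect = (y[treated_unit, t0:] - y0_hat[t0:]).mean()
print(f"estimated effect: {effect:.2f} (truth: {tau})")
```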
    José R. Zubizarreta (Harvard University) will propose a robust weighting approach for estimation in studies that leverage changes in policies or programs over time and across exposed and unexposed locations, known as event studies. The approach allows investigators to progressively build larger valid weighted contrasts by invoking, in a sequential manner, increasingly stronger assumptions on the potential outcomes and the assignment mechanism. It accommodates a generally defined estimand and allows for generalization. Zubizarreta will provide weighting diagnostics and visualization tools, and will illustrate these methods in a case study of the impact of divorce reforms on female suicide.
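    As a generic illustration of a weighted event-study contrast (not the sequential framework proposed in the talk), the sketch below tilts weights on unexposed locations so that an observed covariate is balanced against the exposed ones before pre/post changes are compared. The data, covariate, and weighting scheme are hypothetical.

```python
# Illustrative only: balancing weights for a simple adoption-based contrast.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
n_treated, n_control = 200, 400
tau = 1.0                                          # true policy effect

# An observed covariate x is higher in adopting locations and drives trends.
x_t = rng.normal(1.0, 1.0, n_treated)
x_c = rng.normal(0.0, 1.0, n_control)
pre_t  = x_t + rng.normal(size=n_treated)
post_t = x_t + 0.5 * x_t + tau + rng.normal(size=n_treated)
pre_c  = x_c + rng.normal(size=n_control)
post_c = x_c + 0.5 * x_c + rng.normal(size=n_control)

# Exponential tilting: choose weights on unexposed locations so their weighted
# mean of x equals the exposed mean, then take a weighted diff-in-diff.
def imbalance(theta):
    w = np.exp(theta * x_c)
    return (w / w.sum()) @ x_c - x_t.mean()

theta = brentq(imbalance, -10.0, 10.0)
w = np.exp(theta * x_c)
w /= w.sum()

naive    = (post_t - pre_t).mean() - (post_c - pre_c).mean()
weighted = (post_t - pre_t).mean() - w @ (post_c - pre_c)
print(f"unweighted DiD: {naive:.2f}, weighted DiD: {weighted:.2f}, truth: {tau:.2f}")
```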
    Lucy D’Agostino McGowan (Wake Forest University) will discuss four datasets, similar in spirit to Anscombe’s quartet, that highlight the challenges of estimating causal effects. Each dataset is generated from a distinct causal mechanism. Although their statistical summaries and visualizations are identical, the true causal effect differs across datasets, and estimating it correctly requires knowledge of the data-generating mechanism. These example datasets demonstrate the assumptions underlying causal inference methods and emphasize the importance of gathering information beyond what statistical tools alone can provide.
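    A rough analogue of this idea on simulated data, not the speaker’s actual datasets: in each of the three sets below the observed slope of the outcome on the exposure is about 1, yet the true causal effect and the correct adjustment strategy depend on whether the third variable is a confounder, a mediator, or a collider.

```python
# Illustrative only: same observed exposure-outcome slope, different causal truths.
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def slope(x, y, control=None):
    """OLS coefficient on x from regressing y on x (and an optional control)."""
    cols = [np.ones(n), x] if control is None else [np.ones(n), x, control]
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# 1. Confounder: z causes both x and y. True effect of x on y is 0.5; adjust for z.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 0.5 * x + z + rng.normal(size=n)
print(f"confounder: unadjusted {slope(x, y):.2f}, adjusted {slope(x, y, z):.2f} (truth 0.5)")

# 2. Mediator: x causes z, z causes y. True total effect is 1; do NOT adjust for z.
x = rng.normal(size=n)
z = x + rng.normal(size=n)
y = z + rng.normal(size=n)
print(f"mediator:   unadjusted {slope(x, y):.2f} (truth 1), adjusted {slope(x, y, z):.2f}")

# 3. Collider: x and y both cause z. True effect is 1; adjusting for z adds bias.
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = x + y + rng.normal(size=n)
print(f"collider:   unadjusted {slope(x, y):.2f} (truth 1), adjusted {slope(x, y, z):.2f}")
```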
    Elizabeth Eisenhauer (Westat) will discuss bias in randomized controlled trials (RCTs) for impact evaluation of social programs. Randomization makes the treatment and control groups comparable, but biases can be introduced at other stages; for example, participants recruited into the study may differ systematically from typical program participants. While a larger sample size increases statistical power, the conclusions may still not generalize. This talk will focus on selection and recruitment bias introduced before randomization and will walk through a recruitment bias analysis for an RCT impact evaluation of employment and training programs.
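    A minimal sketch of where such a recruitment bias analysis might start, comparing covariates of recruited trial participants with those of the broader program population via standardized mean differences. The population, sample, and covariates are hypothetical.

```python
# Illustrative only: flag covariates on which the recruited RCT sample
# differs from the target program population (hypothetical data).
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical target population of program participants.
pop_age = rng.normal(35, 10, 50_000)
pop_emp = rng.binomial(1, 0.40, 50_000)        # employed at baseline

# Recruited RCT sample: volunteers skew younger and more often employed.
rct_age = rng.normal(31, 9, 2_000)
rct_emp = rng.binomial(1, 0.55, 2_000)

def smd(sample, population):
    """Standardized mean difference between sample and population."""
    pooled_sd = np.sqrt((sample.var(ddof=1) + population.var(ddof=1)) / 2)
    return (sample.mean() - population.mean()) / pooled_sd

for name, s, p in [("age", rct_age, pop_age), ("employed", rct_emp, pop_emp)]:
    print(f"{name:>9}: SMD = {smd(s, p):+.2f}")

# Large SMDs flag covariates on which the trial sample is unrepresentative;
# if those covariates modify the treatment effect, the RCT estimate may not
# generalize to the program population without further adjustment.
```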