65th ISI World Statistics Congress 2025 | The Hague

Advances in the Reduction of Bias Through Adaptive Design and Nonresponse Adjustments

Organiser

Anders Holmberg

Participants

  • Prof. Natalie Shlomo
    (Chair)

  • Dr James Wagner
    (Presenter/Speaker)
  • The impact of model selection strategies on outcomes in responsive and adaptive survey designs

  • Paul Schubert
    (Presenter/Speaker)
  • Sampling designs to minimise nonresponse bias in a digital first multimode collection

  • Kees Van Berkel
    (Presenter/Speaker)
  • Accounting for measurement and selection effects in a mixed-mode adaptive survey design

  • Dr Stephanie Coffey
    (Presenter/Speaker)
  • Managing errors and controlling costs in a pseudo-longitudinal survey

  • Dr John Lamont Eltinge
    (Discussant)

  • Category: International Association of Survey Statisticians (IASS)

    Proposal Description

    The session covers two challenges in conducting modern sample surveys: first, reducing biases caused by selection and collection effects; and second, integrating auxiliary data smoothly and in a timely manner into survey procedures so that it is effective across several collection modes. In the session, four speakers from three continents will present how their organisations are using different survey strategies to modernise their capabilities and meet demands on survey quality/cost trade-offs.

    Paper 1: by Stephanie Coffey, Carl Lieberman, and Jonathan Eggleston, U.S. Census Bureau (USCB)
    The paper discusses an adaptive survey design (ASD) implemented in the American Community Survey (ACS) to control costs in the CAPI mode while minimizing impacts on variance. The design leverages predictions of balancing propensities, response propensities, and field data collection costs to identify cases for which effort can be stopped. We extend the current ASD methodology to handle the small monthly sample sizes and small geographies of the ACS in a practical way. Results summarize the impact (in simulation) on key estimates, weights, and costs with and without accounting for geography in our design. We then present similar results after the design was implemented in production.
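As an illustration of the kind of stop-effort rule the abstract describes, the sketch below flags cases with low predicted response propensity and high predicted remaining cost. All thresholds, field names, and the function itself are hypothetical, not the ACS production rules.

```python
# Hypothetical sketch of a propensity/cost-based stop rule for CAPI follow-up.
# Thresholds and field names are invented; this is not the Census Bureau's code.

def flag_stop_cases(cases, propensity_floor=0.05, cost_ceiling=150.0):
    """Flag cases where continued field effort can be stopped: low predicted
    response propensity combined with high predicted remaining cost."""
    return [c["id"] for c in cases
            if c["response_propensity"] < propensity_floor
            and c["predicted_cost"] > cost_ceiling]

sample = [
    {"id": "A1", "response_propensity": 0.02, "predicted_cost": 220.0},
    {"id": "A2", "response_propensity": 0.40, "predicted_cost": 60.0},
    {"id": "A3", "response_propensity": 0.03, "predicted_cost": 90.0},
]
print(flag_stop_cases(sample))  # → ['A1']
```

A production design would additionally check the balancing propensities mentioned in the abstract, so that stopping effort does not skew the respondent pool on key frame variables.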

    Paper 2: by Anders Holmberg, Bruce Fraser, and Paul Schubert, Australian Bureau of Statistics (ABS)
    As part of its modernisation effort, the ABS is shifting its direct data collection strategy to one where the preferred first collection mode is a digital channel, as opposed to a face-to-face interview. This paper presents how this change can affect the survey strategy for surveys about people in Australia, i.e. the combination of sampling design, collection operations, and estimation. Special focus is on the sampling design and our investigations of how auxiliary information helps mitigate the risk of selection biases arising from trade-offs between mode-dependent unit response propensities and volume targets on response. The risk exists when collection operations are tailored to the practical circumstances of a ‘digital first’ approach without the ability to monitor who in the target population is missed, and when.

    Paper 3: by Kees van Berkel and Barry Schouten, Statistics Netherlands
    Since 2018, Statistics Netherlands has been conducting the Health Survey in a mixed-mode adaptive survey design. Initially, people are asked via a letter to fill out a questionnaire online. Nonrespondents after two written reminders are eligible for a face-to-face interview. In groups with low internet response, more people are contacted in person. This reduces the probability of bias in the survey results. In 2022-2023, an experiment with re-interviews was conducted. Variations in survey outcomes resulting from online and face-to-face observation were evaluated and are described.

    Paper 4: by James Wagner (University of Michigan), Andy Peytchev (RTI), and Xinyu Zhang (UCLA)
    The paper uses a simulation study to examine the impact of different model selection strategies on survey outcomes such as response rates and the bias of estimates of key statistics. The strategies contrast models built on predictors of nonresponse drawn from paradata with models built on predictors that are highly correlated with the key survey variables. The model predictions are used to allocate data collection effort.
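The contrast the abstract draws can be sketched in a toy simulation: targeting follow-up effort with a predictor correlated with the key variable tends to reduce nonresponse bias more than targeting a paradata-only predictor. Every variable, coefficient, and mechanism below is invented for illustration and is not the authors' study design.

```python
# Toy simulation contrasting two predictor choices for allocating follow-up
# effort; all quantities are invented for illustration.
import math
import random

random.seed(1)
N = 20000

# y: key survey variable, driven by frame variable x; response propensity
# depends on x and on a paradata-like variable z that is unrelated to y.
pop = []
for _ in range(N):
    x = random.gauss(0, 1)
    z = random.gauss(0, 1)
    y = 2.0 + 1.5 * x + random.gauss(0, 1)
    p_resp = 1.0 / (1.0 + math.exp(-(-1.0 + 0.8 * x + 0.8 * z)))
    pop.append((x, z, y, p_resp))

true_mean = sum(rec[2] for rec in pop) / N

def respondent_mean(predictor):
    """Give extra follow-up (a propensity boost) to cases flagged by the
    chosen predictor, then return the unadjusted respondent mean of y."""
    total, n = 0.0, 0
    for x, z, y, p in pop:
        flagged = (x < 0) if predictor == "x" else (z < 0)
        p_eff = min(1.0, p + (0.3 if flagged else 0.0))
        if random.random() < p_eff:
            total += y
            n += 1
    return total / n

bias_x = abs(respondent_mean("x") - true_mean)  # effort targets y-correlated x
bias_z = abs(respondent_mean("z") - true_mean)  # effort targets paradata-only z
print(bias_x, bias_z)
```

Under this invented mechanism, boosting effort on low-x cases rebalances the respondent pool on the variable that actually drives y, so `bias_x` comes out smaller than `bias_z`; effort targeted on z improves z-balance but leaves the x-driven bias in place.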