Abstract

Does nonresponse in surveys matter?

Dan Hedlin

Nonresponse levels in Swedish official statistics are staggering. For example, more than 50% of women under 35 years of age did not respond to the Labour Force Survey in December 2015. In random sample surveys conducted by market research institutes, nonresponse rates may be far higher: a random web panel sample survey that I recently became acquainted with had about 95% nonresponse.

Although Groves (2006) and Groves and Peytcheva (2008) have shown empirically that the nonresponse rate is a poor predictor of nonresponse bias, a nonresponse rate higher than 50%, and certainly a rate higher than 95%, makes most survey methodologists apprehensive.

However, many survey methodologists are confident that modern calibration methods alleviate nonresponse bias, and there are theoretical explanations as to why this may indeed be the case. There are also theoretical expressions for the size of the expected nonresponse bias. I shall discuss some of these (and issue a warning about the simplistic expression for nonresponse bias found in many textbooks).
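For orientation, the simplistic textbook expression presumably alluded to treats the population as split into fixed respondent and nonrespondent strata, whereas a common stochastic alternative writes the bias in terms of response propensities (the notation below is mine, added for illustration and not taken from the talk):

\[
\operatorname{bias}(\bar{y}_r) \approx \frac{N_m}{N}\,\bigl(\bar{Y}_r - \bar{Y}_m\bigr)
\qquad \text{versus} \qquad
\operatorname{bias}(\bar{y}_r) \approx \frac{\operatorname{Cov}(\rho, y)}{\bar{\rho}},
\]

where \(\bar{Y}_r\) and \(\bar{Y}_m\) are the population means of respondents and nonrespondents, \(N_m/N\) is the nonresponse rate, and \(\rho_k\) is the response propensity of unit \(k\) with population mean \(\bar{\rho}\).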

I am also going to talk about some early simulation results on the size of nonresponse bias and on how well the theoretical expressions correspond to the actual biases.
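To make concrete what such a comparison could look like, here is a minimal simulation sketch of my own (not the speaker's actual design or code): it builds a hypothetical population in which the study variable and the response propensity share a latent driver, and compares the empirical bias of the respondent mean over repeated samples with the propensity-based approximation Cov(ρ, y)/ρ̄ shown above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population (all numbers invented for illustration):
# y and the response propensity rho share a latent driver z, so respondents
# differ systematically from nonrespondents.
N = 100_000
z = rng.normal(size=N)
y = 50 + 10 * z + rng.normal(scale=5, size=N)
rho = 1 / (1 + np.exp(-(-1.0 + 0.8 * z)))   # response propensities in (0, 1)

# Propensity-based approximation: bias(ybar_r) ~ Cov(rho, y) / mean(rho)
theory_bias = np.cov(rho, y)[0, 1] / rho.mean()

# Empirical bias over repeated simple random samples with Bernoulli nonresponse
n, reps = 1_000, 2_000
biases = []
for _ in range(reps):
    s = rng.choice(N, size=n, replace=False)        # simple random sample
    respond = rng.random(n) < rho[s]                # who actually responds
    biases.append(y[s][respond].mean() - y.mean())  # respondent mean minus truth

print(f"propensity approximation: {theory_bias:.3f}")
print(f"mean empirical bias:      {np.mean(biases):.3f}")
```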

I’ll start with a 10-minute crash course on modern calibration methods in surveys.
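For readers who want a preview of that crash course: calibration adjusts the design weights as little as possible subject to the constraint that the weighted sums of known auxiliary variables reproduce their known population totals. The sketch below implements the closed-form linear (chi-square distance) variant; the respondent data and totals in it are made up purely for illustration.

```python
import numpy as np

def linear_calibration_weights(d, X, totals):
    """Linear (chi-square distance) calibration: return weights w close to the
    design weights d such that the weighted auxiliary totals X.T @ w equal the
    known population totals."""
    T = X.T @ (d[:, None] * X)                 # sum_k d_k x_k x_k'
    lam = np.linalg.solve(T, totals - X.T @ d)
    return d * (1 + X @ lam)                   # w_k = d_k * (1 + x_k' lambda)

# Toy respondent data, all numbers invented: design weights d and two
# auxiliaries (an intercept and age) with "known" population totals.
d = np.array([10.0, 10.0, 12.0, 8.0, 9.0])
X = np.column_stack([np.ones(5), [25.0, 40.0, 33.0, 60.0, 48.0]])
totals = np.array([60.0, 2500.0])              # population size, total age

w = linear_calibration_weights(d, X, totals)
print(np.round(w, 2))      # calibrated weights, still close to d
print(X.T @ w)             # reproduces [60., 2500.]
```

The linear distance has this closed form; other distance functions used in practice (e.g. raking) require iteration but give similar adjustments near the design weights. The hope is that auxiliaries correlated with both the study variable and the response propensity soak up much of the nonresponse bias.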