Non-Response Bias and Non-Response Errors

Is it a problem when people refuse to take part in my research, and how can I avoid it?

So, you sent a survey to 100 people, and only 13 replied. Or, you wanted to compare responses from men and women, but 90% of your respondents are male. How might this affect your research findings? Here, we introduce you to the concept of non-response bias, discuss the threat it poses to your conclusions and outline how you can minimize it.

What is non-response bias?

Non-response bias is the technical term for the error a researcher makes in estimating a population characteristic when specific types of respondent are under-represented in the research sample. Because the sample no longer mirrors the population, estimates of population parameters become systematically distorted.

When the characteristics of those who take part in a research study are markedly different from the characteristics of those who do not take part (for example, in terms of age, gender or lifestyle preferences), the sample can be said to be non-representative of the population from which it was drawn.

Basing insights or conclusions on samples characterized by a high level of non-response bias can be a disaster. Luckily, there are several steps that researchers can take to minimize non-response bias at the research design phase and to correct for it if it does occur. Before discussing some of these strategies, let’s take a look at how non-response bias affects research results.

How does non-response bias affect research conclusions?

Non-response bias can lead to wrong conclusions, so it's important to be aware when it occurs in your study. Let’s consider the following example. A retailer hires a firm of marketing professionals to develop an advertising campaign, involving posters placed in certain subway stations in one large metropolitan area. After the posters have been in place for three weeks, the firm conducts a survey to evaluate how many residents have seen the posters and what they think of the campaign's core message. A researcher calls 1000 city residents randomly selected from the phone book. Calls are made between 9 am and 5 pm, Monday to Friday, over the course of one week. After trying each number at least twice, the researcher has managed to speak to 580 respondents. The results suggest that only 4% of city residents were exposed to the posters and that only 40% of those found that the message resonated with them. The posters are pulled.

Was this a well-designed study?

The researcher might say yes: a random sample was drawn, meaning every member of the population had an equal chance of being included, which in turn increases the likelihood that the sample is representative of the population.

But what are the implications of calling people at home between 9 am and 5 pm? This is a time when employed people are less likely to be home, which may systematically omit them from the sample. Had the researcher compared the mean age of those who did answer the phone with the mean age of those who did not, they might have observed systematic differences: the average age of the sample would likely be biased upwards compared to the population.

Since older people are more likely to be retired, they may also be less likely to take the subway, and therefore less likely to have seen the posters. So the calculated population-level exposure rate might be biased downwards. There might also be differences in aspects such as parental status, income, and other factors that affect resonance with the advertising campaign. If non-respondents had been included in the sample, the results of this study might have been very different, and the decision to pull the adverts prematurely might have been a very costly, and avoidable, mistake. This is why contacting the right sample groups at different times of the day helps to avoid non-response and non-participation bias.
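The arithmetic behind this example can be made concrete with a short simulation. The sketch below uses made-up numbers (population shares, exposure rates and response rates are all hypothetical, not taken from the scenario above) to show how differential response rates pull the survey's estimate away from the true population figure:

```python
# Hypothetical illustration of non-response bias in the subway-poster survey.
# All numbers below are invented for demonstration purposes.

def weighted_mean(values, weights):
    """Mean of `values` weighted by `weights`."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Two population segments: employed commuters and retirees.
population_share = {"employed": 0.60, "retired": 0.40}
true_exposure   = {"employed": 0.30, "retired": 0.04}  # share who saw the posters
response_rate   = {"employed": 0.20, "retired": 0.90}  # reachable 9 am - 5 pm

groups = list(population_share)

# True population-level exposure rate: weight each group by its population share.
true_rate = weighted_mean(
    [true_exposure[g] for g in groups],
    [population_share[g] for g in groups],
)

# What the phone survey actually measures: each group is implicitly weighted
# by how likely it is to answer the phone, not by its share of the population.
observed_rate = weighted_mean(
    [true_exposure[g] for g in groups],
    [population_share[g] * response_rate[g] for g in groups],
)

print(f"true exposure rate: {true_rate:.1%}")    # 19.6%
print(f"survey's estimate:  {observed_rate:.1%}")  # 10.5%
```

With these numbers the survey understates exposure by almost half, purely because employed commuters (who see the posters most) rarely answer daytime calls.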

Why does non-response bias occur?

As shown in the example above, non-response bias can skew your results or completely invalidate them. In order to avoid these problems, it is important to know why non-response occurs. Common reasons for non-response bias are:

  • Intentional Refusal or Abandonment

    This is where participants start but do not complete a survey during the data collection process, or otherwise refuse to supply data. For example, sample members might have privacy concerns about supplying data or might find the questions embarrassing. Sending a follow-up could be helpful.

  • Unintentional Non-Response

    This might happen for technical reasons, such as when a mailed survey goes missing, a respondent to an online survey forgets to press submit, or the email invitation lands in the spam folder.
  • Audience Characteristics

    Some people might be more inclined to respond to a survey than others. For example, if a survey is designed to gather data on customer satisfaction, customers who were dissatisfied may be more willing to respond because they want their complaints to be heard.
  • Solicitation Characteristics

    The way in which responses are solicited can increase non-response. In the example above, gathering data on weekdays during business hours increased non-response among workers. Similarly, surveys targeted at elderly people might see high non-response if the invitation is sent via email, or if it is filtered into spam folders.
  • Hard-to-reach Communities

    Research has found that certain demographic groups are hard to sample in sufficient numbers. For example, transient communities such as students often change address, drug users may be difficult to motivate to share their experiences, and business owners are often too busy to respond. These groups form a hard-to-reach target population.

How can non-response bias be minimized?

It is important to note that non-response bias can never be completely eliminated. It is very rare to achieve a response rate of 100%, and those who do not respond often share characteristics. However, there is much that you can do to minimize non-response bias and to mitigate its risks.
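Besides the design-stage strategies below, bias that has already crept into the data can be partially corrected. One standard technique (not described in this article, and shown here only as an illustrative sketch with invented figures) is post-stratification weighting: reweighting each respondent group by its known share of the population rather than its share of the sample:

```python
# Sketch of a post-stratification weighting correction.
# Assumes the population share of each group is known; all figures are hypothetical.

def poststratified_mean(sample, population_share):
    """Reweight per-group sample means by known population shares."""
    total = 0.0
    for group, share in population_share.items():
        responses = sample[group]
        group_mean = sum(responses) / len(responses)
        total += share * group_mean
    return total

# Survey responses (1 = saw the poster, 0 = did not), split by age group.
# Older respondents are over-represented in the raw sample.
sample = {
    "under_50": [1, 0, 1, 0],              # 4 respondents, 50% exposure
    "over_50":  [0, 0, 0, 1, 0, 0, 0, 0],  # 8 respondents, 12.5% exposure
}
population_share = {"under_50": 0.7, "over_50": 0.3}

# Naive estimate: pool all respondents, ignoring who actually answered.
all_responses = [r for g in sample.values() for r in g]
naive = sum(all_responses) / len(all_responses)  # 3/12 = 25%

# Corrected estimate: weight each group by its population share.
corrected = poststratified_mean(sample, population_share)  # 0.7*0.5 + 0.3*0.125

print(f"naive estimate:     {naive:.1%}")
print(f"weighted estimate:  {corrected:.1%}")
```

Weighting can reduce bias only for the characteristics you can measure and whose population shares you know; it cannot fix unmeasured differences between respondents and non-respondents, which is why the design-stage strategies below still matter.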

Administration design

The method of administration should be designed with the qualities and needs of the target audience in mind. It might make more sense to target younger people through surveys that can easily be completed on a smartphone, for instance. Consideration should also be paid to the fact that people use different types of devices (e.g. computers, tablets and smartphones) to complete surveys, so online surveys should be optimized for all screen sizes. A mobile-friendly survey will definitely increase the response rate among respondents who take the survey on their smartphones, and it also helps to avoid participation bias.

Instrument design

There are a number of principles of instrument design that can be followed to reduce the likelihood of non-response. Data collection instruments should be designed to be as unobtrusive as possible and to reduce the sense of burden perceived by the respondent. Questions should be organized in a logical way and written clearly so as not to confuse the respondent. Questions that are insensitive, breach privacy, or are unnecessary should be avoided. In addition, surveys with a large number of open-ended questions can be perceived as too time-consuming.

There is no hard and fast rule about optimum survey length, but longer surveys have been found to cause survey fatigue, and many respondents will abandon a survey part way through if they perceive it as burdensome. One strategy that market researchers can employ to boost response rates is to tell respondents up front how long the survey is expected to take; managing expectations in this way is known to reduce the incidence of survey fatigue. Finally, respondents are often reluctant to give personal contact information, so it is better not to ask for it unless the client really requires it.

Invitation and instruction design

The invitation to take part in the research should inform potential respondents about the purpose of the study. This may seem a simple strategy, but research shows that it can increase interest in the study, as well as the willingness of respondents to provide truthful and useful responses. A follow-up invitation can significantly boost response rates, but too many invitations can turn respondents off. Another simple strategy is to ensure that respondents are thanked for their time and interest in the study. In addition, it is important to provide a clear and concise set of supporting instructions that potential respondents can use when completing a research instrument. These might define certain terms or clarify language, explain how a question is to be completed, or give examples of the type of answers the researcher is looking for. It is also advisable to ask respondents to check their spam folders for the invitation, so that they do not miss the chance to share their opinion.

We hope this article has helped you understand non-response bias and what you can do to minimize it.

Do not hesitate to contact us to get your market research project up and running. You can find more information on our reach and our panels. The team at TGM will be more than happy to support you with your project.


Get better responses. Faster.

TGM Research Panel Audience gives access to millions of respondents waiting to take your survey.

TGM Academy aims to educate on the transformative potential of market research and insight technology. On this website, you'll find articles covering everything from research design and sample selection to analysing results.


© 2022 TGM Academy