Survey Biases: Lies, Damned Lies, and Statistics

As data and analytics drive more and more business decisions, getting analytics right is increasingly essential.  How do you make sure you’re using insights to drive your business without those insights falling into Mark Twain’s adage that there are “lies, damned lies, and statistics”?

The most common biases in survey work fall into three major categories: selection bias (inaccuracies due to the pool of respondents selected to take your survey), response bias (inaccuracies due to how the survey is constructed, such as leading questions and skewed response scales), and analytical bias (inaccuracies introduced by how you analyze or present findings).  Below is a quick primer on what each means and how to make sure your analytics provide true and impactful insights.

Selection Bias

Selection bias is well known, but not always easy to spot.  In general, it comes from a mismatch between who you are surveying and whose thoughts you really want to capture.  Sometimes this comes from casting too wide a net, such as asking the general population how they feel about a product targeted only at pet owners.  Conversely, many companies use too narrow a group and miss out on important opinions from people they inadvertently excluded.

Perhaps the most common example of this is relying on your current customer base for market research – while your most devoted customers are happy to tell you what they value about your product, they aren’t in the best position to tell you what is missing, or what your competitors do better than you.  Your devoted customers care less than most people about what you don’t do – that’s why they picked you!  In general, selection biases sneak into a survey process when the way you recruit survey takers doesn’t reflect the audience you care about.

As a general rule, if you aren’t sure whether a certain group should be included in your survey population, it’s better to include them and add identifying questions so you can analyze the group separately, or remove them if they turn out not to be relevant.  For example, if your buyers have historically been age 25+, including all adults might help you learn why younger buyers aren’t choosing your product.  It’s important to be pragmatic about this – if you are a rental car agency that doesn’t serve any customers under 25, excluding them is probably fine – but too many companies create blind spots by ignoring parts of the population that could be part of their growth engine.

Response Bias

Writing loaded or biased survey questions falls into the category of response bias, which can skew the results of even highly representative sample groups. It is easy to fall into the trap of wording questions in a way that elicits a positive or negative response, or makes respondents feel there is only one “correct” answer. For example, instead of, “How delicious was your meal on a scale of 1-5?” ask, “How would you rate your meal on a scale of 1-5, with 1 being very unsatisfied and 5 being very satisfied?”

While unethical researchers sometimes intentionally write leading or biased questions, it’s just as common for a writer’s own beliefs and experiences to unconsciously shape a question around the assumption that respondents share the writer’s opinion. Skewing the results in one direction takes away the chance to obtain unexpected but helpful insights. Take the seemingly innocent question “How often do you shop at our competitor, Company X?”  The survey taker might not have even thought of Company X as your competitor, because they buy different products there than they do at your store.  Now, for the rest of the survey, they are thinking only about the small set of products they buy from both stores, and you miss the most important insight – they shop at Company X to buy something you don’t even offer!

The easiest way to minimize this bias is to have peers or experts review your draft survey or interview guide to bring unconscious bias to light, reframe potentially loaded questions, and offer constructive feedback that helps ensure accurate and actionable results.

Analytical Bias

One of the hardest biases to learn how to avoid is analytical bias.  Even if you ask ideal survey respondents your questions in an unbiased way, you can still miss out on key insights if you aren’t cautious about how you analyze data.

Take, for example, a company that serves two different segments – one segment values a wide range of products first and convenience second, while the other values the high-quality store brand first and convenience second.  If you group those two segments together, you may inadvertently conclude that convenience is the most important thing your customers value.  In aggregate that’s true, but convenience wasn’t the top priority for either of your key segments, so investing heavily in convenience is probably not the most effective path to growth.
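To see how pooling can mislead, here is a minimal sketch of the scenario above with made-up importance ratings (all segment names and scores are hypothetical, chosen only to illustrate the aggregation effect):

```python
# Hypothetical average importance ratings (1-5 scale) from two equal-sized
# customer segments. Segment A values variety first; Segment B values the
# store brand first. Convenience is each segment's *second* priority.
segment_a = {"variety": 4.6, "store_brand": 2.1, "convenience": 4.0}
segment_b = {"variety": 2.2, "store_brand": 4.7, "convenience": 4.1}

# Pool the two segments (equal sizes assumed for simplicity).
pooled = {k: (segment_a[k] + segment_b[k]) / 2 for k in segment_a}

top_pooled = max(pooled, key=pooled.get)    # "convenience" wins in aggregate
top_a = max(segment_a, key=segment_a.get)   # "variety" wins in Segment A
top_b = max(segment_b, key=segment_b.get)   # "store_brand" wins in Segment B

print(top_pooled, top_a, top_b)  # convenience variety store_brand
```

Convenience tops the pooled ranking even though it was nobody's first choice – a reminder to check segment-level results before acting on an aggregate.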

How do you avoid analytical biases?  Aside from ensuring experienced analysts are looking at the data, you should review preliminary findings with stakeholders from a range of perspectives and functions.  There is also often a rush to publish preliminary findings while analysts do a deeper dive into the data – while this can help organizations move faster, it also increases the risk that misleading data gets shared without appropriate context.

Delivering Better Research

At Attadale Partners, our research team works closely with each other and our clients to recognize and eliminate potential sources of survey bias. Not sure if your organization is getting all the insights it should from your surveys?  We would be delighted to review your survey practices, examine your survey design and question formulation, and offer feedback on potential areas of improvement.  Contact us today!
