
Survey projectability creates confidence in your results. There are two important factors to consider in terms of your survey sample to ensure your results are projectable.

You’ve decided to use a survey to conduct your research. You’re clear about the purpose, know what you want to learn, and understand what data you need to support the decision. One of the critical decisions in fielding your instrument is determining your target population. These are the people who will take your survey. Knowing who you want to target is essential to how you construct and field your study. Once you know your target population, you need to decide on your survey sample.

Two Factors Impact Your Survey’s Projectability

Projectability addresses whether the results of your study mean anything beyond the people who participated in the research. Sampling is a key aspect of ensuring your study has projectable results – that is, the findings apply beyond the group of people who took the survey. Response quantity and response rate are two critical factors for determining whether your study results are projectable.

Response rate is about participation relative to size. The sample size is the number of people you invite; the response rate is the share of them who actually participate in your research. To determine the response rate you need, you must first identify the number of people you need in your sample to reflect the overall target group. The higher the response rate, the better. A 95% response rate from your sample suggests that there will be minimal differences introduced by the 5% who didn’t respond. The lower the response rate, the greater the potential for the non-responders’ perspectives to differ from those of the responders. When you have a high response rate from the correct sample size, you can have great confidence that the results are applicable to the target population.
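A minimal sketch of the response-rate arithmetic in Python, using hypothetical counts chosen to match the 95% example above:

```python
# Hypothetical counts: 400 people sampled, 380 completed the survey.
sample_size = 400
completed = 380

response_rate = completed / sample_size * 100  # 95.0
print(f"Response rate: {response_rate:.0f}%")
```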

The next factor is the confidence value, the plus or minus figure used to indicate the range within which the true answer is likely to fall. For example, if you use a confidence value of 5, and 40% of respondents say they will choose your product, then if everyone in the overall group were questioned about their product choice, between 35% (40-5) and 45% (40+5) would have picked your product. The confidence level tells you how sure you can be of that range. It is expressed as a percentage and represents how often the true percentage of the overall group would fall within the confidence value. The 95% confidence level means you can be 95% certain; the 99% confidence level means you can be 99% certain. Most researchers use the 95% confidence level. When you put the confidence level and the confidence value together, you can say that you are 95% sure that the true percentage of the overall group is between 35% and 45%. The wider the confidence value you are willing to accept, the more certain you can be that the overall group’s answers would fall within that range.
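A minimal sketch of that interval arithmetic, using the numbers from the example above:

```python
# Numbers from the example: 40% of respondents, confidence value of +/- 5 points.
point_estimate = 40.0    # percent of respondents choosing the product
confidence_value = 5.0   # plus-or-minus figure, in percentage points

lower = point_estimate - confidence_value  # 35.0
upper = point_estimate + confidence_value  # 45.0
print(f"95% confident the true share is between {lower:.0f}% and {upper:.0f}%")
```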


Sample Size and Response Rate Impact Projectability

Confidence Value and Projectability

The confidence value is important to the projectability of your study findings. There are three factors that determine the size of the confidence value for a given confidence level:

  1. sample size
  2. percentage (the share of respondents picking a particular answer), and
  3. overall group size

Generally, the larger your sample, the more sure you can be that its answers truly reflect the overall group. For a given confidence level, a larger sample size produces a smaller confidence value, and therefore more precise results. For example, if you have +/- 5 percentage points at the 95% confidence level, that means that 95% of the time the actual average for the entire population will be within the range of 5 points lower and 5 points higher than the reported statistic. Essentially, the +/- represents your margin of error. This is why sample size is essential to producing projectable results.
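To make the relationship between sample size and confidence value concrete, here is a minimal sketch using the standard normal-approximation formula for a proportion (z = 1.96 at the 95% confidence level); the sample size of 385 is purely illustrative:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate +/- margin of error (confidence value) for a proportion,
    at roughly the 95% confidence level when z = 1.96. The worst-case
    proportion of 0.5 is used unless a different percentage is supplied."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A sample of 385 responses gives roughly a 5-point margin of error.
print(round(margin_of_error(385) * 100, 1))  # ~5.0
```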

When determining the sample size needed for a given level of accuracy, you should use the worst-case percentage from the laws of probability, which is fifty percent (50%). You should also use this percentage if you want to determine a general level of accuracy for a sample you already have. There are a number of tools to help you calculate sample size and margin of error.
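The same formula can be turned around to estimate how many responses you need for a desired margin of error. A minimal sketch, again assuming the normal approximation, z = 1.96, and the worst-case 50% percentage:

```python
import math

def required_sample_size(margin_of_error, proportion=0.5, z=1.96):
    """Sample size needed for a given margin of error (expressed as a
    fraction, e.g. 0.05 for +/- 5 points) at roughly the 95% confidence
    level, using the worst-case proportion of 50% by default."""
    return math.ceil((z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2)

# About 385 responses are needed for a +/- 5-point margin of error.
print(required_sample_size(0.05))  # 385
```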

To determine the confidence value for a specific answer your sample has given, you can use the actual percentage picking that answer; the further that percentage is from 50%, the smaller the resulting confidence value. The final factor is the size of the overall group your sample represents. This may be the number of people in a vertical you are studying, the number of people who use a particular type of software, or the number of people who demonstrate a specific behavior, such as online banking.
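When that overall group is small, the required sample shrinks. A minimal sketch of the standard finite-population correction, applied to the illustrative 385-response figure above:

```python
import math

def adjusted_sample_size(base_n, population_size):
    """Finite-population correction: reduces the required sample when the
    overall group (for example, everyone in a vertical or everyone using a
    particular type of software) is small. base_n is the sample size
    computed for an effectively unlimited group."""
    return math.ceil(base_n / (1 + (base_n - 1) / population_size))

# 385 responses for a very large group drops to about 197 for a group of 400.
print(adjusted_sample_size(385, 400))  # 197
```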

No research results are 100% projectable. Error is a fact of life.  The key is to apply research principles so that the data you collect will be meaningful and significant enough to support key business decisions and investments. Of course, there is only value in your research results if you take action.

Research is only good when the results are actionable.

Survey Projectability Helps Make Your Results Actionable

A Strativity Group report, “Discovering the Real Answers,” claimed that companies are “conducting surveys to merely validate the value of their products and services”.  The value of research is using the data to drive change and make decisions.  If you are not using the results from your surveys to make changes and to help determine appropriate action, you are wasting time and money.

Here are some quick and common tips for designing your surveys so they deliver meaningful results that drive action.

  • Identify and write down the specific objectives for the survey before composing the questions. Limit your questions to those that support the research objective.
  • Carefully plan enticements or requirements for the survey participants.
  • Provide a brief overview of the objectives of the survey in the survey document. Keep your survey to between 15 and 30 questions. Enthusiasm and interest wane if the survey is too long, impacting the validity of the responses.
  • Keep questions brief, direct, and unambiguous, and make sure each covers a single topic. Questions should be written in neutral language. Avoid biased or judgmental wording (such as should, ought to, bad, wonderful, etc.).
  • Group your questions into subsets, with headings to orient the respondent. All of the questions should fit together in a logical, orderly, thematically holistic manner. Questions from “left field” are distracting.
  • Make early questions less controversial and design them to pique interest.
  • Leave demographic questions for the very end. They clearly require the least thought, which leaves respondents more energy to tackle the complex questions earlier in the survey. Plus, questions that are perceived as “personal” may make participants more defensive and believe that their anonymity is being violated, thus altering their subsequent responses.
  • The best questions for group analysis use a Likert-type intensity-based scale. A concise, easily understood question stem should be followed by a number of specific responses. Likert questions may have from 3 to 10 points arranged along a continuum of responses. We recommend at least a 7-point scale. Odd-numbered scales are designed to allow the middle point to be a true neutral response. Even-numbered scales are designed to force a choice. Larger scales permit greater discrimination but require extra time to respond.
  • Maintain consistency among scale questions, so that all worst-to-best scales run in one direction (left to right or right to left). Research suggests that a scale with the best response on the left provides a higher mean response than the same question arranged worst-to-best (called the primacy effect, or the tendency of people to favor the left side of the scale). So, a best-to-worst arrangement will tend to yield higher scores, while worst-to-best may give a lower mean with a wider standard deviation.
  • Use open-ended short answer questions to provide richer, more personalized responses. Remember these are harder to analyze, since group summaries can only be made after time-consuming content analysis. We recommend limiting the number of short-answer questions unless you are prepared to spend considerable time in assessing group consensus.
  • End with a brief description of how the data will be used. Participants are well aware that the data is most useful for groups that follow, but a sincere thank you and reassurance that someone will be actually reading and analyzing the data is helpful to encourage survey completion in the future.

 Whenever possible, survey results should be compared to other data, such as surveys from previous years, other published studies, etc. Summary results should be reviewed to look for possible bias. Common sources for bias in questionnaires include:

  • Sample bias. If there is not 100% participation, is the group completing the form representative of the entire group?
  • Leading or poorly written questions in the survey.
  • Participant confusion in answering a given question.

Analyze and interpret the data. Present your findings so they answer a question and tell a story. Always include recommendations for action and the potential investment and impact of those actions. Learn more about how we can help you conduct market and customer research.
