
“Garbage in, garbage out” is a common idiom that has been applied in a variety of situations, and it aptly applies to research. Poorly constructed research instruments won’t yield what you need. Leverage these best practices to make sure your investment of time, money, and people isn’t wasted.

Gaining insight from customers as to why they chose to do business with you (or why they didn’t), what current issues they’re trying to address, how they evaluate products and services in your industry, which companies they evaluate, and so on can be instrumental in your customer acquisition and retention efforts. Hence the value of conducting market and customer research.

Design an effective research instrument.

There’s a science and art to designing effective research instruments.

One of the core components of conducting research is your survey instrument. Creating a survey instrument whose results provide an organization with quality, accurate, and actionable information for making sound business decisions is more difficult than most people realize.

Best Practices for Creating an Effective Survey Instrument

The key to creating an effective survey is to construct it with an eye toward validity, reliability, replicability, and generalizability.

The starting point for any research is to know what you want to know. The more closely you follow these best practices when developing your survey instruments, the more likely you are to achieve those qualities:

  1. Determine the business objectives. Start with your hypothesis. Have a clear idea of what decision you need to make and what information you need to make it. Before embarking on the survey design, you need to determine the business objectives and answer questions such as:
    • What is the purpose of the survey?
    • What are we trying to measure?
    • How many questions should we include?
    • What type of rating scale should we use?
    • How will we know that the survey worked – what will make the data actionable?

2. Design the “questionnaire”. Good survey instrument design is the most important step and ensures that you are able to get the results your organization needs. Frame your questions in direct, unambiguous, simple and unbiased language. Make sure every question is “measuring” something in a dispassionate way. Every survey should:

  1. Begin with a title and a preamble that explains the overall aim of the survey. Whether it appears in the survey invitation or at the beginning of the survey itself, the preamble should inform participants upfront of details such as the length of the survey and the level of confidentiality.
  2. Balance white space to improve readability – Balance the use of white space between questions and sections to improve readability without unduly increasing the apparent size of the survey.
  3. Well-written instructions and questions – Keep instructions and questions at the eighth-grade level or lower, without being condescending.

All instructions, question directions, and response categories need to be clear, especially with written surveys. For example, if you want to know the frequency of use for a service or product, we tend to create a response category that includes extremely often, very often, not too often, and never. But what does “often” mean? “Often” may have a different meaning from one respondent to the next. A clearer approach is to create response categories such as: every day; not every day but at least once a week; 3 times per month; once a month or less.

Remember not to overlap your categories. Overlapping categories are very common when people put together dollar ranges. Instead of less than $25,000, $25,000 – $50,000, $50,000 – $75,000, $75,000 – $100,000, greater than $100,000, use less than $25,000, $25,000 – $49,999, $50,000 – $74,999, $75,000 – $99,999, $100,000 or higher. Use as few words as possible in both the question and the alternatives, and avoid polysyllabic words.
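
To make the non-overlapping categories concrete, here is a minimal sketch, assuming a hypothetical bin_income helper (the bracket labels and function are illustrative, not part of any particular survey tool). With the corrected ranges every dollar amount maps to exactly one category, whereas $50,000 would have matched two of the overlapping ones.

```python
# Illustrative sketch only: non-overlapping income brackets.
INCOME_BRACKETS = [
    (0, 24_999, "Less than $25,000"),
    (25_000, 49_999, "$25,000 - $49,999"),
    (50_000, 74_999, "$50,000 - $74,999"),
    (75_000, 99_999, "$75,000 - $99,999"),
    (100_000, None, "$100,000 or higher"),   # open-ended top bracket
]

def bin_income(income: int) -> str:
    """Map an income to exactly one bracket; no value can fall into two labels."""
    for low, high, label in INCOME_BRACKETS:
        if high is None:
            if income >= low:
                return label
        elif low <= income <= high:
            return label
    raise ValueError(f"Unexpected income value: {income}")

# Under the overlapping scheme, $50,000 belongs to both the second and third ranges;
# here it maps to a single category.
print(bin_income(50_000))   # -> $50,000 - $74,999
```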

Provide general instructions to the respondents at the beginning of each section and clearly define the specific instructions associated with the different question types in order to aid in the correct completion of each question. This includes phrases such as “Please check one box only” or “Please rank in order of 1 to 5, with 1 being most important.” It is a good idea to test your survey before you deploy it, and one of the key items you should test is the instructions.

Ask only one question at a time. It is not uncommon for folks to ask what we call double-barrel questions, such as “How satisfied are you with the cost and convenience of…?” With this question you will not know whether the response applied to cost or to convenience. We suggest that if both are important, make them two separate questions. Use the language of your respondents.

Avoid biasing questions. For example, asking a question such as “What did you dislike about…?” and then only offering response categories that support the question forces the respondent to bias their answer. A better way to approach this is to ask a qualifying question first, such as “Did you dislike anything about…? Yes or No.” Those who say yes go on to a question that asks what they disliked.

These questions are important to the design of any type of survey, whether it be a customer satisfaction survey, a product evaluation survey, or a program evaluation survey. It often helps to involve multiple departments in the process in order to gain consensus. When all business units are involved upfront, your chances of asking the right questions and not collecting unnecessary data are greatly improved.

Pre-test your survey against the intended target audience. When you do the pre-test, ask the testers to let you know if there are any words they do not regularly use or did not understand.

4. Use filter questions – Filter questions make it possible for respondents to bypass questions (or whole sections) that are not relevant to them. For example, if you ask your respondents whether they have ever used Product A and they have not, you may want to take them to an entirely different set of questions than someone who is familiar with your product (see the sketch below). Allow “don’t know” and “not applicable” selections – if a respondent is unsure about whether to answer a question, or which answer is the most appropriate, they should be provided with a “let-out” selection, such as “Don’t Know” or “Not Applicable”. This will help ensure people don’t select something just because there is no other option available to them. However, when a large number of respondents choose such options, it is time to examine whether the question is badly worded or in the wrong place in the questionnaire.
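
As a rough illustration of filter (skip) logic, the sketch below routes a respondent based on a screening answer. The question id, section names, and next_section helper are hypothetical; most survey platforms express the same branching through their own configuration rather than code.

```python
# Hypothetical skip-logic sketch; identifiers are made up for illustration.
def next_section(answers: dict) -> str:
    """Route the respondent based on a screening question."""
    used_product_a = answers.get("used_product_a")   # expected "Yes", "No", or None
    if used_product_a == "Yes":
        return "product_a_experience"   # detailed questions about Product A
    if used_product_a == "No":
        return "non_user_questions"     # awareness / evaluation questions instead
    return "screening"                  # unanswered: stay on the screening section

print(next_section({"used_product_a": "No"}))   # -> non_user_questions
```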

5. Keep It Short, Simple, and Focused. It is easy for the number of questions in a survey to creep up, mostly because companies do not survey often enough and see each survey as an opportunity to ask everything. Usually the person receiving the survey is not in a position to answer everything. Keep in mind what you want to know and who you need to know it from. Organize the survey into sections or topics, with the easier items first. Keep sentences to 8 words or less and limit the number of questions per page. Excessively long surveys lead to fatigue and high abandonment rates. The shorter the better.

6. Determine a rating scale. Scales are critical to the success of your research. Well-designed scales are easy to understand and accurately represent the respondent’s true attitude, preference, or opinion. Two- or three-point scales are traditionally not distinct enough to rate the importance of various attributes; scales with 4 – 8 points provide far more insight into the subtle distinctions and value of an attribute. Clear and well-thought-out rating scales, as well as clearly defined instructions, are key to minimizing rating errors.

7. Select the sample and define your respondents. Select the population that you want to understand and then gather a representative sample of this group. It is important to have a sufficient sample size if you want the results to be statistically meaningful. If the sample size is too small, it is very possible to draw erroneous conclusions. Factors that affect sample size include how large a group difference you wish to detect, how variable your measure is in the sample or population, and how precise you want the results to be. A sample should be chosen at random from the population so that it is representative of the population; decisions based on the characteristics of the sample can then be generalized to the entire population. Random sampling enables the researcher to draw statistical inferences based on information collected from a small group representative of the population under investigation. With a random sampling technique, each subject in the population has an equal chance of being included in the sample. To avoid sampling errors, your minimum sample size should be kept between 30 and 50. Larger populations don’t require proportionally larger samples: once the sample size is over a few hundred, precision doesn’t improve proportionately with further increases in sample size. In order to achieve an accurate sample, you must decide on the level of granularity you need to reach a conclusion. Develop a process for finding these respondents and engaging them.
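
If you want to put a number on “sufficient sample size,” one common approach is the standard formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2. The sketch below assumes a 95% confidence level, an assumed proportion of 0.5, and a +/-5% margin of error; those defaults, along with the helper names, are illustrative assumptions rather than figures from this article.

```python
# Illustrative sketch: minimum sample size for a proportion, plus a simple random draw.
import math
import random

def sample_size(margin_of_error: float = 0.05, p: float = 0.5, z: float = 1.96) -> int:
    """Sample size needed to estimate a proportion at the given precision (assumed defaults)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

def draw_sample(population: list, n: int, seed: int = 42) -> list:
    """Simple random sample: every member has an equal chance of selection."""
    random.seed(seed)
    return random.sample(population, min(n, len(population)))

print(sample_size())                                 # 385 for +/-5% at 95% confidence
customers = [f"cust_{i}" for i in range(10_000)]     # hypothetical customer list
print(len(draw_sample(customers, sample_size())))    # 385
```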

3. Survey implementation. Response rate is the single most important indicator of how much confidence can be placed in the results of a survey; a low response rate undermines the reliability of a study. Always test your survey before deployment. What may seem obvious to the survey author may be completely unclear to the typical recipient. Worse, a difficult question may be misunderstood or skipped, and a hard-to-understand survey is almost certainly destined to be thrown away. Be sure to plan for survey reminders. Traditionally, between 10 and 60 percent of those who are sent questionnaires respond without follow-up reminders by phone, email, post card, or a combination of these. These rates are too low to yield confident results, so following up with targets is imperative to the success of the survey. You may want to consider an incentive to improve the response rate.

Make sure the respondents are qualified to answer the questions; otherwise they may feel compelled to make up an answer. It is acceptable to include a “Don’t know” category; however, when you receive a lot of “don’t knows,” it may mean that respondents really do not know and are not qualified to answer, or that this was simply the easiest answer to select.

4. Analyze and report the results. Be systematic in your data collection and analysis. Report your results in neutral statements that reflect the facts. When analyzing research results, you want to be sure to address validity. The validity of the questions can be assessed by examining the number of respondents who chose each response option: no single option should capture more than 85% of the responses, and none less than 5%. The business issues can be assessed by examining responses to individual questions and to groups of questions on a single theme that are treated as separate measures. The inclusion of key demographics provides valuable opportunities for insightful subgroup analysis.
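
The 85% / 5% validity check described above is straightforward to automate. The sketch below uses made-up response data and a hypothetical flag_question helper; a real analysis would read the responses from your survey tool’s export.

```python
# Illustrative sketch of the 85% / 5% response-distribution check.
from collections import Counter

MAX_SHARE = 0.85   # no single option should exceed 85% of responses
MIN_SHARE = 0.05   # no option should fall below 5% of responses

def flag_question(responses: list) -> list:
    """Return warnings for options whose share falls outside the 5%-85% band."""
    counts = Counter(responses)
    total = len(responses)
    warnings = []
    for option, count in counts.items():
        share = count / total
        if share > MAX_SHARE:
            warnings.append(f"'{option}' chosen by {share:.0%} of respondents (over 85%)")
        elif share < MIN_SHARE:
            warnings.append(f"'{option}' chosen by {share:.0%} of respondents (under 5%)")
    # Note: options nobody chose never appear in `counts`; compare against the full
    # option list if you also want to flag unused categories.
    return warnings

print(flag_question(["Satisfied"] * 90 + ["Dissatisfied"] * 10))
# -> ["'Satisfied' chosen by 90% of respondents (over 85%)"]
```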

Research is both a science and an art. Be sure you have the science under your belt; otherwise, leverage experts. Should you decide to do your own research instead of taking advantage of experts, these books might help.

