Helping Quality Along

We at OpinionRoute spend a lot of time talking about the correlation between how respondents are recruited and the ultimate quality of the data they provide in our surveys. As online data collection continues to evolve, the opportunity to work within the survey program itself to drive better data can’t be overlooked. Today, data collection firms have a tremendous suite of technologies available to help with this need. From automatic verbatim comment parsing to data consistency algorithms, we have never had it so great.

While we have access to some of these technologies, I have found simple, cost-efficient ways to implement effective quality control mechanisms without adding technology expense. Here is a brief overview of techniques you can use to markedly improve the data you’ll be reviewing.

Attention Checks

In my 15 or so years in this industry, I have come across two general approaches to designing questions to test a respondent’s attention level.

1. Questions designed to “trap” the respondent
2. Questions designed to “remind” the respondent

While in my experience both can be equally effective, I tend to utilize the “remind” method for the following reasons:

• Time and effort were spent getting the respondent into the survey. It’s better practice to push them toward quality responses than to replace them

• It reinforces a professional relationship between the data collection company and the respondent

• It is a softer approach which makes for a better respondent experience

• It is easier to implement globally (less opportunity for cultural misunderstandings)

These questions should be designed to openly communicate to the respondent, “Hey, we are paying you for your honest and sincere feedback. Please ensure that is what you are providing.” They should not include any condescending language or attempt to trick the respondent into selecting an incorrect answer. KEEP IT SIMPLE! For example, “Please select ‘5’ from the scale below.” It is straightforward enough that those who select a value other than five either did so by honest mistake or were randomly clicking to try and speed through the survey.

We typically recommend one attention check question be added for every 5 minutes of survey length. Attention check questions should be placed at points in the survey where fatigue is expected to set in. For example:

• in between long batteries of ranking or rating exercises

• at the start of new lines of questioning

• after more engaging methodologies (e.g., conjoint, MaxDiff, Gabor-Granger)

When collecting data internationally, it is imperative to be aware of any cultural differences that impact how an attention check question would be interpreted and answered.
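
To make this concrete, here is a minimal sketch in Python of how failures on “remind”-style checks might be tallied. The question IDs, expected answers, and response layout are hypothetical placeholders, not a real survey-platform API:

```python
# A minimal sketch of scoring "remind"-style attention checks, assuming
# responses arrive as plain dicts keyed by question ID. All names here
# (EXPECTED_ANSWERS, the respondent dict) are hypothetical.

# Expected answer for each attention check, e.g. "Please select '5'".
EXPECTED_ANSWERS = {
    "AC1": "5",  # placed after a long rating battery
    "AC2": "3",  # placed at the start of a new line of questioning
}

def attention_failures(responses: dict) -> int:
    """Count how many attention checks this respondent missed."""
    return sum(
        1
        for qid, expected in EXPECTED_ANSWERS.items()
        if responses.get(qid) != expected
    )

respondent = {"AC1": "5", "AC2": "1", "Q10": "Somewhat likely"}
print(attention_failures(respondent))  # -> 1
```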

Speed Traps

Speeders, or respondents who complete surveys in less time than they plausibly could while reading all content, are relatively straightforward to control for. We typically approach this by converting each survey question into an equivalent response time. Those measures are added together to produce an estimated average and minimum respondent survey length. One essential item to keep in mind is that some questions may not be asked of all respondents, so the minimum survey length must be used for estimation purposes. We then take 50% of that minimum time as the recommended threshold for removal.

During the soft launch of a project, we will disable automatic speeder removal, compare the times of those who completed against the threshold, and adjust as necessary before full launch.

Again, when collecting data internationally, remember that different cultures take surveys at different speeds, so these projects must be able to institute speeder checks by country. Be sure to rely on the soft launch data to confirm realistic minimum time thresholds.
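
As an illustration, here is a minimal sketch of the threshold calculation described above, assuming each question has an estimated response time in seconds. The question IDs and the asked-of-everyone flag are hypothetical:

```python
# A minimal sketch of the speeder threshold: sum per-question time
# estimates for the questions every respondent sees, then take 50%
# of that minimum as the removal cutoff.

QUESTION_TIMES = [
    # (question_id, est_seconds, asked_of_everyone)
    ("Q1", 10, True),
    ("Q2", 25, True),
    ("Q3", 40, False),  # shown only to a subset; excluded from the minimum
]

# Minimum survey length: only questions every respondent will see.
min_length = sum(t for _, t, everyone in QUESTION_TIMES if everyone)

# Recommended removal threshold: 50% of the minimum length.
speeder_threshold = 0.5 * min_length

def is_speeder(completion_seconds: float) -> bool:
    return completion_seconds < speeder_threshold

print(min_length, speeder_threshold, is_speeder(15))  # 35 17.5 True
```

During soft launch, the per-question estimates (and therefore the threshold) would be adjusted against observed completion times before re-enabling automatic removal.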

Open Text Analysis

For surveys that prompt the respondent to provide lengthy text, we would implement a mechanism that counts both the words and characters of the respondent’s answer. This helps identify respondents to review. However, counts alone do not always make a proper quality control mechanism and are typically not utilized within an automatic removal algorithm. We can use this data point within an automated system only in rare circumstances where we can anticipate a minimum number of words required.
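
For illustration, here is a minimal sketch of such a length check, flagging answers for manual review rather than automatic removal. The word and character minimums are arbitrary example values, not recommendations:

```python
# A minimal sketch of an open-text length check. Thresholds are
# illustrative; in practice they depend on what the question asks for.

def needs_review(answer: str, min_words: int = 5, min_chars: int = 20) -> bool:
    """Flag an open-text answer as too short for a substantive response."""
    words = answer.split()
    return len(words) < min_words or len(answer.strip()) < min_chars

print(needs_review("good"))  # True: flag for manual review
print(needs_review("The checkout flow was confusing on mobile."))  # False
```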

Straight-Lining

Straight-lining, or the event where a respondent selects the same response for all attributes in a grid, is an area I approach with extreme caution. There are a few factors we look at when assessing whether this metric can be included in the quality control mechanism:

• how many attributes are being evaluated?

• how many answers are available to select from?

• is it possible for a respondent to legitimately select all the same answers?

There are infrequent circumstances where I feel straight-lining is a legitimate quality control measure. One such instance is when there are attributes in the same question, using the same answer list, that have opposing meanings. For example, “how likely are you to recommend…” vs. “how likely are you to not recommend….” Again, keep in mind cultural differences when implementing straight-lining as a quality control measure.
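
To illustrate, here is a minimal sketch of straight-line detection on a single grid question, assuming grid answers arrive as a list of selected scale points in attribute order. Which attribute positions oppose each other is an assumption the researcher supplies:

```python
# A minimal sketch of straight-line detection. The opposing-pair logic
# mirrors the "recommend" vs. "not recommend" example above; the
# attribute positions used are hypothetical.

def is_straight_line(grid_answers: list) -> bool:
    """True when every attribute in the grid received the same answer."""
    return len(grid_answers) > 1 and len(set(grid_answers)) == 1

def opposing_pair_conflict(grid_answers: list, i: int, j: int) -> bool:
    """True when two attributes with opposing meanings got the same answer."""
    return grid_answers[i] == grid_answers[j]

answers = ["4", "4", "4", "4"]
print(is_straight_line(answers))              # True
print(opposing_pair_conflict(answers, 0, 3))  # True: contradictory if
                                              # attributes 0 and 3 oppose
```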

Automatic Removal Algorithm

Each survey is unique, as is the sample that will be going through it. For many projects, we can implement an automatic removal algorithm based on the total number of quality control mechanisms and the number a respondent fails. For example, if we have three quality control mechanisms, I recommend automatically terminating respondents who fail two or more.

For surveys that are 10 minutes or less in length, I would not recommend implementing an automatic removal algorithm. In these cases, the survey would typically contain only zero to two quality control mechanisms. To err is human! An engaged respondent can be expected to fail a single quality control mechanism by accident; failing two is far less likely, which makes a two-failure threshold more reliable. However, it is still possible that removal based on two failures could remove good-quality respondents. For this reason, I do not recommend implementing an automatic algorithm but rather a manual data review at 10%, 50%, 90%, and 100% of fielding.

For some projects and clients, we only mark the respondents we feel should be removed with a flag in the data and allow the client to make the final cuts.
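
As a rough sketch of how these pieces might fit together, assuming the individual check results are computed elsewhere as booleans (for instance, by the sketches above), here is a hypothetical decision function with a flag-only mode for clients who prefer to make the final cuts themselves:

```python
# A minimal sketch of the fail-2-of-3 rule: remove automatically when a
# respondent fails two or more checks, or only mark them for client
# review when flag_only is set. Check results are assumed booleans.

def qc_decision(failed_checks: list, flag_only: bool = False) -> str:
    """Return 'remove', 'flag', or 'keep' based on failed QC checks."""
    failures = sum(failed_checks)
    if failures >= 2:
        return "flag" if flag_only else "remove"
    return "keep"

# A respondent who sped through and straight-lined but passed attention checks:
print(qc_decision([False, True, True]))                  # -> "remove"
print(qc_decision([False, True, True], flag_only=True))  # -> "flag"
```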

Setting the Expectation

The most important element of successfully implementing suitable quality control mechanisms is communicating to the respondent, at the beginning of the engagement, that the survey contains quality control mechanisms and that failure will result in ineligibility for the incentive.

Explaining what you are implementing, why you are implementing it, and what will happen when adherence is not met will give you a solid foundation when respondents are not compliant and are denied incentives. I recommend placing this communication on the first page of the survey.

So often, we forget how vital the respondent is to what we do in market research. Yes, there are people out there who are constantly finding new ways to cheat the system, and they unfortunately make it that much harder for the good ones to participate. So, it is vital to weed out those bad apples, but just as important not to alienate the good ones in the process. Implementing a straightforward quality control program in your survey can be a cost-efficient way to achieve both objectives.
