Industry events (virtual or otherwise!) often include one or more sessions on the topic of quality. Almost always, that presentation shines a heavy spotlight on survey design. I think we can all agree that shorter surveys, more engaging UIs and fun topics increase respondent engagement. But in our daily realities, survey design must be balanced against study objectives, budget and the risk of introducing new bias. In other words, there is not always extra budget to split a long survey into two projects or to add interactive programming elements. Does that mean data quality should just be written off?
We view data quality as requiring a macro strategy for long-term success: a three-point plan in which each piece plays a vital role in producing cleaner, more reliable data sets. To build sustainable confidence in your insights, we recommend a strategy based on device dynamics, the person taking the survey, and the project approach. Device, person, project.
Put succinctly: is the device behind every respondent click really operating cleanly? Is every click tied to one genuine respondent trying to complete your survey? Fraud has grown incredibly complex, driven by advanced technology that surpasses the standard survey tool's capacity to catch it. In fact, until recently the industry lacked a viable, current fraud prevention system designed for survey research. Since fraud methods change so quickly, it is incumbent on researchers to change just as fast. Fortunately, there is now a viable solution that understands the unique dynamics of mobile and desktop alike.
The respondent is the next focal point for data quality. In B2B and healthcare surveys, we must ensure the respondent actually IS a professional in the target field. In other words, we need to make certain that our study's IT Decision Makers are employed in IT. This sounds like a given, but in today's automated landscape we cannot take honesty for granted, given the elevated incentives offered to professional audiences. If data quality is a paramount concern, researchers need to consider how they can assure their clients that they are sourcing respondents who truly are the professionals required.
At the project level, survey design and programming checks have a valuable place. However, at this point QC removals are a lagging indicator, and they often surface larger problems when it is too late to avoid disrupting the schedule, the budget, or the client's trust in the data. If the quality checks in the survey program reveal a data problem, you will be scrambling to react. A proactive, dynamic approach prevents many of these challenges before a respondent ever clicks a survey link; project-level checks can then be used to catch the stragglers.
Ultimately, all three pillars work together as a cohesive mitigation strategy. Project checks wind up validating that the first two pillars were done well, while reducing labor and hassle. This provides a checks-and-balances model that leaves researchers feeling confident while they focus their team's time on storytelling and final deliverables.