We’ve all grown up hearing about the housing boom of the 1950s, when large subdivisions of homes started to pop up outside of major cities. These cookie-cutter houses were designed for standardization and optimization: quick to build, easy to spot, and built to a uniform standard that ensured quality and return.
That all changed in the late 1990s. Land scarcity forced builders to stick to what they were good at: acquiring land. Instead of employing specialists in certain competencies (such as plumbing, HVAC, or tiling), they began to subcontract the work. This allowed them to optimize their margins while still producing houses at a high rate. It also compromised quality, because the ability to control standards took an immediate nosedive with heavy fragmentation of contractors. Houses sold by a single vendor were part of the same community, yet owners’ experiences could vary widely.
This may be a provocative statement, but it feels like the modern market research landscape is in a similar state.
Historically, evaluating vendors and isolating the quality players was a simple thing to do. Careful selection helped optimize sample quality so that researchers could focus on what they do best: analysis. But, like the housing shift, the advent of programmatic sampling has upended the ability to control for data quality. Selecting a specific vendor or panel does not guarantee that the data collected will accomplish what is needed without a significant amount of tedious cleansing to ensure a quality data set.
To go back to the housing example, it’s like taking possession of your new home only to find scrap material all over the place, floors that aren’t installed, and knobs missing from all the doors. It falls on you to finish the last pieces yourself. If installing flooring and drilling knob holes aren’t in your skill set, it’s going to end up being a frustrating, long, expensive process.
Like building a house, creating a high-quality data set with integrity requires a strategy before surveys are in the field. It proactively leverages systems and processes that weed out issues like fraud or duplicates in advance. It validates that the individual targeted is who they claim to be and that the answers provided are coherent, accurate, and consumable.
At OpinionRoute, we call this a Device-Person-Project strategy, which we believe is the key to creating a data set that can be trusted to deliver accurate results.
Standing as individual components, each module in a Device-Person-Project strategy allows market researchers to address fraudulent behavior, validate participants, or isolate problematic verbatims. Together, Device-Person-Project allows researchers to field surveys that produce well-architected data sets, eliminating the cumbersome hours spent in DIY tools that might isolate bad response behavior but can’t get at the root issues of fraud or identity that could potentially ruin a data set.
This means that instead of spending hours walking through data row by row, trying to resolve IP addresses, looking for incoherent entries from speeders, or deploying traps to catch bots, you can leverage tools from experts in data collection strategy.
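To make the manual burden concrete, here is a minimal sketch of the kind of row-by-row check researchers end up scripting themselves: flagging responses that share an IP address (possible duplicates) or that finished implausibly fast (speeders). The field names (`id`, `ip`, `seconds`) and the two-minute threshold are illustrative assumptions, not any particular platform’s schema, and real fraud detection goes far beyond these two signals.

```python
# Hypothetical illustration of DIY survey cleaning: flag duplicate IPs
# and "speeder" completes. Field names and threshold are assumptions.
from collections import Counter

def flag_suspect_rows(rows, min_seconds=120):
    """Return the ids of rows that share an IP or completed too quickly."""
    ip_counts = Counter(r["ip"] for r in rows)
    suspects = set()
    for r in rows:
        if ip_counts[r["ip"]] > 1:      # same IP appears more than once
            suspects.add(r["id"])
        if r["seconds"] < min_seconds:  # finished implausibly fast
            suspects.add(r["id"])
    return sorted(suspects)

rows = [
    {"id": 1, "ip": "203.0.113.5", "seconds": 410},
    {"id": 2, "ip": "203.0.113.5", "seconds": 395},  # duplicate IP of id 1
    {"id": 3, "ip": "198.51.100.7", "seconds": 45},  # speeder
    {"id": 4, "ip": "192.0.2.10", "seconds": 600},
]
print(flag_suspect_rows(rows))  # [1, 2, 3]
```

Even this toy version shows the problem: scripts like this catch symptoms after the fact, while a proactive strategy keeps the bad rows out of the data set in the first place.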
If you feel like one of our clients, who told us, “I’ve spent more than a year from the last five years cleaning data,” then we have a solution for you. While you were going row by row, OpinionRoute was busy perfecting technology solutions and processes that allow market researchers to focus on insights instead of spending a fifth of their available time cleansing data.
To learn more about deploying a Device-Person-Project strategy, let’s talk. We can discuss your current approach and how to implement a proactive plan for your next project that prioritizes quality data collection, taking the tedious work off your shoulders so you can shift to completing the project and delighting your client.