This is part 4 of the “What is Data Quality?” series. Click here to read part 3: https://www.opinionroute.com/blog/topic-3-how-to-measure-data-quality/
No matter how systematized you get with a data quality strategy and great tech tools, you still need expertise and informed judgment as a precursor to “cleaning” any data set.
We are proud of the array of talent that we work with on the Market Research side. We see nuances based on experience across all of our clients, particularly when a couple of the quality measures seem to be in conflict. I’ll give two examples:
Common Case 1: Verbatim repetition
Scenario: Across simple open-end questions, a respondent gives the identical answer multiple times: “I like it.”
Evaluation: This could be an indicator of fraud, or perhaps the respondent is just mentally checking out on you. How do you judge?
If it’s an engagement issue, how do you decide whether to keep or toss them? In these cases, I highly recommend looking at other data points through a professional lens and then making an informed judgment call based on a holistic set of inputs.
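To make the idea concrete, here is a minimal sketch of a verbatim-repetition flag. This is a hypothetical illustration, not any particular platform’s check: the function name, the `min_repeats` threshold, and the sample data are all assumptions, and a real review would feed flagged IDs to a human, not auto-remove them.

```python
# Hypothetical sketch: flag respondents whose open-end answers
# repeat verbatim. Not any vendor's actual tooling.

def flag_verbatim_repeats(respondents, min_repeats=3):
    """Return IDs of respondents who give the same normalized
    open-end answer at least `min_repeats` times."""
    flagged = []
    for resp_id, answers in respondents.items():
        # Normalize: trim whitespace, lowercase, drop empty answers
        normalized = [a.strip().lower() for a in answers if a.strip()]
        counts = {}
        for a in normalized:
            counts[a] = counts.get(a, 0) + 1
        if counts and max(counts.values()) >= min_repeats:
            flagged.append(resp_id)
    return flagged

# Sample (invented) data: "r1" repeats the same answer three times
data = {
    "r1": ["I like it", "I like it", "I like it"],
    "r2": ["Good value", "Too sweet", "I like it"],
}
print(flag_verbatim_repeats(data))  # ['r1']
```

The point of keeping this as a flag rather than a filter is exactly the judgment call above: the output is a shortlist for human review alongside other data points, not a removal list.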
Common Case 2: Straight-liners
Scenario: Automated logic checks like “straight-lining.” The philosophy here is that if a respondent answers the same way for X questions in a row, that’s a good indicator of poor data quality.
Evaluation: In practice, this could look like a respondent answering “3” on a 5-point scale rating the appeal of 10 products in a row. In cases like this, it’s wise to ask whether that pattern is reasonable. Is it conceivable that a respondent simply rates 3s across the board because of a lack of passion either way about the topic?
Again, experience and judgment win out here.
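The straight-lining check described above can be sketched in a few lines. Again, this is a hypothetical illustration, not a specific survey platform’s logic: the function name and the `min_items` threshold are assumptions, and the flag should prompt review rather than automatic removal.

```python
# Hypothetical sketch of a straight-lining flag: did the respondent
# give the identical rating for every item in a grid question?

def is_straight_liner(ratings, min_items=10):
    """Flag a grid response as straight-lined when it has at least
    `min_items` answers and every answer is identical."""
    return len(ratings) >= min_items and len(set(ratings)) == 1

print(is_straight_liner([3] * 10))                        # True
print(is_straight_liner([3, 4, 3, 2, 5, 3, 3, 4, 2, 3]))  # False
```

Note what the flag cannot tell you: whether ten straight 3s mean a disengaged respondent or a genuinely lukewarm one. That distinction is where the experience and judgment come in.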
Now that we’ve gone through all 4 topics, click here for a full summary.
Please click here for Part 5 of 5.