Blog | December 11, 2025

Three Small Survey Tweaks That Flush Out Fraud
By: Rich Ratclif, CTrO
If you’ve followed this series, you know the premise: I listen to how fraudsters beat surveys, then turn those tricks into defenses.
This time I’m going inside the survey, focusing on small tweaks, rather than big redesigns. The goal is to quietly make life harder for fraudsters and easier for real people.
Fraudsters are successfully exploiting two common survey practices: fixed screeners and calculating Length of Interview (LOI) across the whole interview. The corresponding exploits are "path cloning" and "partial speeding." We'll define both below, offer simple fixes, and close with a third tweak: a one-question comprehension check at the end of the survey. Together they lock down your data a little tighter.
Fraud rings and solo cheaters love fixed screeners. To them, screener questions asked in the exact same order every time are a hackable feature. Once they "succeed" in getting through one, they share answer keys or "path clone" the survey in bad-guy communities. Then, naturally, they build automations or AI agents to follow that same route.
A lot of non-English fraud works the same way: they’re not reading text, they’re following a “cheat sheet” button pattern.
To jam this up, whenever methodology allows it, rotate the screener question order and the answer-option order within questions.
You’re not trying to confuse genuine respondents; you’re killing the value of a static “cheat sheet.” Automation and non-English fraud break down quickly when the path isn’t the same every time. Just don’t rotate where order clearly matters (concept monads, exposure sequences, brand lists tied to shelf position).
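If it helps to picture it, here's a minimal sketch of per-respondent rotation. The question IDs, option lists, and flags are made up for illustration; in practice your survey platform's own randomization settings are usually the right place to do this.

```python
import random

# Minimal sketch of per-respondent rotation for a fixed screener. The question
# IDs, option lists, and flags below are hypothetical; map them to however your
# survey platform defines questions.
SCREENER = [
    {"id": "S1", "options": ["Retail", "Healthcare", "Software", "None of these"],
     "shuffle_options": True, "pin_last": True},    # keep "None of these" anchored last
    {"id": "S2", "options": ["Brand X", "Brand Y", "Brand Z", "None of these"],
     "shuffle_options": True, "pin_last": True},
    {"id": "S3", "options": ["Weekly", "Monthly", "A few times a year", "Never"],
     "shuffle_options": False, "pin_last": False},  # ordered scale: don't rotate
]

def rotated_screener(respondent_id: str) -> list[dict]:
    # Seed on the respondent ID: the order stays stable for one respondent
    # across a page reload, but differs across respondents, so a shared
    # answer key or click path stops working.
    rng = random.Random(respondent_id)
    questions = [dict(q) for q in SCREENER]
    rng.shuffle(questions)                           # rotate question order
    for q in questions:
        if q["shuffle_options"]:
            opts = q["options"][:]
            tail = [opts.pop()] if q["pin_last"] else []
            rng.shuffle(opts)
            q["options"] = opts + tail               # rotate answer order
    return questions
```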
Most people calculate LOI from start to finish. Fraudsters know that, and they work around it. They speed through the core questions, then park on the final pages, often the demographics, to pad the total time back to something normal-looking.
If you only look at the overall LOI, they can land in the middle of the distribution and look “fine.”
A better approach is to look at LOI excluding demographics—either the time from the end of the screener to the last non-demo question, or (if you have per-question timestamps) timing across core sections while ignoring demo pages.
You’re not punishing fast, thoughtful readers. You’re targeting people who treat your survey like a race, then idle at the end to disguise it.
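For the analysts in the room, here's a rough sketch of that calculation, assuming you capture per-section timestamps. The field names and the 40%-of-median cutoff are illustrative, not a standard; tune the threshold to your own data before acting on it.

```python
from statistics import median

# Minimal sketch, assuming per-section timestamps (epoch seconds) per complete.
# Timestamp keys and the flagging threshold are placeholders for illustration.
def core_loi_seconds(timestamps: dict) -> float:
    """Time from the end of the screener to the last non-demographic question."""
    return timestamps["last_core_question_end"] - timestamps["screener_end"]

def flag_partial_speeders(completes: list[dict], floor_ratio: float = 0.4) -> list[str]:
    """Flag respondents whose core-section time is far below the sample median,
    even if their overall LOI looks normal."""
    core_times = {c["respondent_id"]: core_loi_seconds(c["timestamps"])
                  for c in completes}
    med = median(core_times.values())
    return [rid for rid, t in core_times.items() if t < floor_ratio * med]
```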
This one is simple, powerful, and gives you a built-in feedback loop.
At the end of the survey, ask respondents what the survey was about, ideally as a short, open-ended question answered in their own words.
Many fraudsters can’t comfortably read English and know real-time translation or obvious machine text can get them flagged, so they rely on answer patterns and generic, copy-paste open-ends. By the end, they may have no real sense of the topic—especially in technical or niche B2B work. Asking them to name the subject forces a basic comprehension check their pattern-based approach can’t fake.
You’ll see authentic respondents giving short but clear answers like “pricing for cloud software” or “a new snack brand,” and fraudsters throwing out generic or off-topic noise: “It was about my opinion,” “the survey is good,” or something obviously pasted from somewhere else. Your real respondents also hand you UX insights—where the survey was confusing, repetitive, too long, or unclear.
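If you want a first pass before a human reads every open-end, a simple keyword triage can sort the obvious cases. The topic keywords and generic phrases below are placeholders you'd tune per study; treat hits as flags for review, not automatic terms.

```python
import re

# Minimal sketch of a first-pass triage on the end-of-survey "what was this
# about?" open-end. TOPIC_KEYWORDS and GENERIC_PHRASES are per-study placeholders.
TOPIC_KEYWORDS = {"cloud", "software", "pricing", "subscription"}
GENERIC_PHRASES = {"it was about my opinion", "the survey is good",
                   "good survey", "nice survey", "about products"}

def review_topic_answer(answer: str) -> str:
    text = answer.strip().lower()
    words = set(re.findall(r"[a-z]+", text))
    if not text or text in GENERIC_PHRASES:
        return "flag: generic / copy-paste"
    if words & TOPIC_KEYWORDS:
        return "pass: on topic"
    return "review: no topic keywords found"
```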
None of these are magic bullets, and none of them replace device-level checks, behavioral monitoring and scoring, or post-survey QC.
But use them where they fit your methodology. Done thoughtfully, these micro-moves keep you in the researcher lane while turning your survey hostile to fraudsters' SOPs and protecting the authentic respondent experience.
One note on behalf of your sample partners: if you know a respondent is fraudulent, term them in real time. It saves your partners from having to claw back incentives.
Tired of chasing bad respondents? Contact OpinionRoute to see how we can help you secure your surveys.