
Your Survey Engine Has No Idea What’s Happening Inside the Survey Other Than the Answers
By: Rich Ratcliff, Chief Trust Officer
I was speaking with a member of our Whitehat team, and I asked them a simple question: “Do you know when you’re being watched inside a survey?”
Without hesitation: “Yes.”
Then they told me how they know: some surveys make the monitoring obvious, and the rest give it away with a quick look at the page source. Either way, they rarely have to worry about it.
How easy is it? “I open the survey, switch back and forth between tabs, let Google or AI help me understand the subject, pull the answer, and move on. The survey has no idea. It just sees answers.”
That’s not a sophisticated attack. That’s a Tuesday. And it’s happening across studies right now: undetected, unremarkable, and indistinguishable from a legitimate respondent by every metric the survey was designed to capture, because those metrics record what they answered, not how. That is what needs to change.
The Fraud Playbook
A pragmatic fraudster doesn’t need specialized tools. They need a survey environment that only watches what respondents say and ignores everything about what respondents do. In that environment, switching tabs to consult Google/AI, pasting generated responses for an open-end field, or navigating the entire survey without ever moving a mouse – none of it trips anything. The survey collects the response and moves on.
The signals that would expose this behavior exist. They’re the same tech e-commerce platforms use to catch bots and that banks use to flag account takeovers. The problem isn’t that they’re out of reach. The problem is that market research never built a room with windows.
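To make the point concrete: one of those signals, tab visibility, is exposed by the standard Page Visibility API that ships in every modern browser. A minimal sketch follows; the tally logic is kept pure so it can run anywhere, and the event wiring (shown in comments) is browser-only. This is an illustration of how reachable the raw signal is, not a production implementation.

```javascript
// Minimal sketch: counting tab-away events via the standard
// Page Visibility API. The tally logic is pure JavaScript so it
// can be exercised outside a browser; only the listener wiring
// at the bottom requires a real page.

function makeVisibilityTracker() {
  const events = []; // { state: "hidden" | "visible", at: ms timestamp }
  return {
    record(state, at) {
      events.push({ state, at });
    },
    // How many times the respondent left the survey tab.
    tabAwayCount() {
      return events.filter((e) => e.state === "hidden").length;
    },
    // Total milliseconds spent away, pairing each "hidden"
    // event with the next "visible" event.
    msHidden() {
      let total = 0;
      let hiddenAt = null;
      for (const e of events) {
        if (e.state === "hidden") {
          hiddenAt = e.at;
        } else if (e.state === "visible" && hiddenAt !== null) {
          total += e.at - hiddenAt;
          hiddenAt = null;
        }
      }
      return total;
    },
  };
}

// In a real survey page this would be wired up like so:
//
//   const tracker = makeVisibilityTracker();
//   document.addEventListener("visibilitychange", () => {
//     tracker.record(document.visibilityState, Date.now());
//   });
```

That is the whole raw feed for one signal. Turning it into a trustworthy risk score is where the hard work starts.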
The Recommendation
Acknowledge the skills gap and partner up – this is not a DIY moment.
The previous recommendations in this series have been relatively simple pivots: tighten a threshold here, add a logic branch there, check this at the end. This one is different in an important way. It’s not that the fix is complicated; the final step is just as simple. The difference is that this is the one area where you cannot do it yourself and call it done.
Think about it this way: when your engine is making a horrible noise, you don’t start taking it apart to find the issue. You take it to someone who has the tools, the training, and the diagnostic systems to tell you exactly what’s wrong. Survey behavioral monitoring is the same conversation.
There are approximately 150 measurable behavioral signals available inside a browser session — keystroke cadence, response time, characters per second, mouse movement patterns, scroll behavior, tab visibility, window focus events, and more. Each one requires precise instrumentation, calibrated thresholds, and an understanding of how signals interact with device type, connection quality, and survey context. Market research agencies are not cybersecurity firms. Survey programmers are not web behavioral analysts. Attempting to self-instrument even a fraction of these signals without that expertise will produce misread data, misfired logic, and QC decisions made against the wrong inputs — which is arguably worse than not measuring at all.
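To see why calibration is the hard part, consider just one signal from that list: characters per second on an open-end field. The computation itself is trivial; the names and thresholds below are illustrative only, not any vendor's method.

```javascript
// Sketch of a single behavioral signal: characters per second on
// an open-end field, approximated from keystroke timestamps.
// Illustrative only; thresholds and naming are hypothetical.

function charsPerSecond(keystrokeTimesMs) {
  if (keystrokeTimesMs.length < 2) return 0;
  const elapsed =
    keystrokeTimesMs[keystrokeTimesMs.length - 1] - keystrokeTimesMs[0];
  // Many characters with zero elapsed time is a paste, not typing.
  if (elapsed <= 0) return Infinity;
  return (keystrokeTimesMs.length / elapsed) * 1000;
}
```

Even here the judgment calls pile up fast: around eight characters per second is fast but human, a two-hundred-character burst in a single event is a paste, and mobile keyboards, autocorrect, and slow connections all distort the raw numbers. Multiply that calibration problem across roughly 150 interacting signals and the case for a specialized partner makes itself.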
The right path is not to build it. It’s to plug in technology from partners who already have. Embed their tech, receive a clean signal or risk flag in return, and let the survey programmer do what they’re actually good at: if risk_flag = TRUE, terminate. One logic branch. That’s the whole task; don’t overcomplicate it.
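The survey side of that integration really is one branch. The shape of the partner's response below (a `risk_flag` field) is hypothetical; any real vendor will document its own fields.

```javascript
// The entire survey-programmer task, sketched. The partner's
// embedded tech does the behavioral analysis; the survey only
// consumes the flag. "risk_flag" is a hypothetical field name.

function nextStep(partnerSignal) {
  // One logic branch: terminate flagged sessions, continue the rest.
  if (partnerSignal.risk_flag === true) {
    return "TERMINATE";
  }
  return "CONTINUE";
}
```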
Five Questions Every MR Agency Should Be Asking
- Does our current survey environment capture any behavioral telemetry beyond the ‘what’ into the ‘how’?
- Can that third-party behavioral tech be embedded into our surveys without rebuilding our existing platform or feeling like we’re going through a dev sprint?
- Are behavioral risk signals deliverable as a real-time flag that our survey programmer can act on during fielding — not just in post-processing?
- What is our current protocol if a study closes and behavioral fraud is identified after the fact? Are we depending solely on the ‘what’?
- If a client asked us today to demonstrate how we monitor respondent behavior inside the survey, what would we show them?
Closing Thought
This isn’t a pitch for our platform — although we would love to speak with you about data quality. It’s a pitch for something, anything, that is professionally built. Behavioral monitoring inside the survey is not optional anymore; it’s a responsibility to your clients. And if the cost of a partner solution feels hard to justify, try pricing it against the alternative: handing a client insights full of fraud because you attempted to forensically identify it after the survey closed. That’s not a data quality conversation. That’s a client you don’t keep.
Take action today: talk to our team about embedding proven behavioral monitoring into your surveys and start protecting your data — and your clients — now.
Click here for part 1 of the Recommendations from a Fraudster series.
