The Five-Cent Voter
A tale of two polling architectures, the AI threat reshaping survey research, and the paradox that the methods designed to measure disinformation may now be fuelling it.
Over the past several months, a throughline has emerged across artificial intelligence research that most policy conversations have not yet caught up to.
The mechanism has a name: pervasive algorithmic shaping.
AI systems do not need to generate false information to distort belief. They shape perception through selection, framing, and reinforcement.1 MIT research has already demonstrated that sycophantic AI systems can push even rational users toward internally consistent belief systems detached from reality.2 At the same time, commercial incentives reward agreeableness over accuracy. In that environment, the lie is structurally cheaper to produce than the truth.
That dynamic describes how belief is shaped.
A second dynamic is now emerging.
The same systems are beginning to corrupt the instruments used to measure what people believe.
Two billion ghosts
In February 2026, David Dutwin of NORC’s AmeriSpeak panel published a finding that should have landed like a grenade: roughly 40 percent of nonprobability survey interviews conducted in 2025 were likely fraudulent.3
That translates to approximately two billion fake responses globally.
For now, most of that fraud is still human. Click farms. Rows of low-paid workers completing surveys at industrial scale.
But even without AI, the scale is already staggering.
In 2023, researchers found that 96 percent of responses to an online nonprobability survey of commercial beekeepers were fraudulent.4 In 2022, Pew Research Center found that 12 percent of adults under 30 in an opt-in survey claimed to be licensed to operate a nuclear submarine.5 In 2025, the U.S. Department of Justice unsealed an indictment alleging a $10 million scheme involving fabricated survey responses sold to market research clients.6
This is not a methodological footnote. It is a criminal industry.
And the AI wave is arriving on top of it.
Sean Westwood’s 2025 paper showed that a simple AI agent could complete surveys while passing 99.8 percent of quality checks.7 The cost per response was approximately five cents. A real respondent costs closer to $1.50.
That thirty-to-one cost ratio breaks the model.
The assumption that a coherent response implies a human respondent is no longer defensible.
Fraud does not behave like noise. It behaves like bias. It clusters. It mimics patterns. It distorts precisely the areas that matter most: subgroups, trend lines, and hard-to-reach populations.
That is the environment every pollster now operates in.
But not every method is equally exposed to it.
Model one: Open entry, downstream correction
Abacus Data represents the most refined version of the modern nonprobability model.
And it works. In the 2025 federal election, Abacus landed within half a point of the final result.
That outcome was not accidental.
David Coletto has spent years documenting a consistent problem: Conservative voters are underrepresented in online samples. The gap has grown from 1.8 points after the 2011 election to 7.6 points in the current cycle. The pattern holds across survey waves and election cycles.
The solution is past-vote weighting. Respondents are asked how they voted in the previous election, and the sample is adjusted to match the known result. In 2021, that correction shifted the Conservative estimate by seven points. The methodology has been published transparently, backed by dozens of surveys across multiple election cycles.
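In mechanical terms, the correction is a reweighting: each past-vote group in the sample is scaled until it matches the known election result. The sketch below is a minimal single-dimension illustration; every figure in it is invented for the example, and a real correction combines past vote with demographic targets through iterative raking.

```python
# Minimal sketch of past-vote weighting on a single dimension.
# All shares are illustrative, not Abacus figures.

# Known result of the previous election (the benchmark):
benchmark = {"CPC": 0.34, "LPC": 0.33, "NDP": 0.18, "Other": 0.15}

# Recalled past vote observed in the raw opt-in sample:
observed = {"CPC": 0.27, "LPC": 0.36, "NDP": 0.20, "Other": 0.17}

# Each past-vote group is scaled until the sample matches the
# benchmark; here, past-CPC respondents count about 1.26x each.
weights = {party: benchmark[party] / observed[party] for party in benchmark}

for party, weight in weights.items():
    print(f"{party}: weight {weight:.2f}")
```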
It is careful, honest work.
But it rests on a critical assumption:
That the people answering the survey are real.
That is where the ground begins to shift.
Abacus relies on opt-in online panels. As 338Canada has noted, margins of error do not strictly apply.8 Not because the math is flawed, but because the population itself is uncertain.
Weighting can correct for who is missing.
It cannot verify who is present.
In an environment where a substantial share of responses may be fraudulent, and where synthetic respondents are both cheap and difficult to detect, part of what appears as “Conservative underrepresentation” may not be absence at all.
It may be distortion.
That problem cannot be solved downstream.
The key point is this:
The observed pattern of missing Conservative voices is real. It has been measured carefully and consistently. But the tools used to measure and correct that absence now operate inside a compromised environment. One side of the field is detecting the distortion statistically. The other is observing its effects and compensating for it without being able to verify the underlying data.
The issue is no longer which interpretation is correct.
It is that the threat is advancing faster than the tools used to understand it.
Model two: Controlled entry, upstream validation
EKOS takes the opposite approach.
Its Probit panel is built through random selection and live telephone recruitment. Every participant is verified as a real human before entering the dataset. Identity is confirmed by a live operator. The same individuals can be recontacted over time, allowing for longitudinal tracking.
It is slower. More expensive. Harder to scale.
For years, it appeared outdated.
Now it appears prescient.
In February 2026, Frank Graves presented at the CIPHER conference, the annual gathering of probability-based panel researchers. His presentation captured the tension clearly: “The Polling Paradox: How Survey Methods Confront — and Sometimes Fuel — the Disinformation Crisis.”
His argument extended beyond data quality.
Nonprobability methods do not just risk contamination. They risk amplification.
When fraudulent respondents cluster around conspiratorial or extreme answers, the survey does not simply mismeasure belief. It inflates it. Pew demonstrated in 2024 that opt-in panels can significantly distort sensitive measures such as Holocaust denial.9 EKOS data shows that nonprobability methods systematically exaggerate disinformation and conspiratorial belief.
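The inflation mechanism is easy to demonstrate. The toy simulation below uses invented rates, none of them from Pew or EKOS: a rare belief held by 3 percent of real respondents, a 15 percent fraudulent share, and fraudsters who cluster on the extreme answer 40 percent of the time.

```python
import random

random.seed(1)

# Toy simulation of clustered fraud inflating a rare, sensitive measure.
# All rates are invented for illustration; none come from Pew or EKOS.
TRUE_PREVALENCE = 0.03   # real rate of an extreme belief
FRAUD_SHARE = 0.15       # fraction of interviews that are fraudulent
FRAUD_YES_RATE = 0.40    # fraudsters cluster on the extreme answer

N = 100_000
yes = 0
for _ in range(N):
    if random.random() < FRAUD_SHARE:
        yes += random.random() < FRAUD_YES_RATE   # fake respondent
    else:
        yes += random.random() < TRUE_PREVALENCE  # real respondent

print(f"true prevalence: {TRUE_PREVALENCE:.1%}, observed: {yes / N:.1%}")
# Expected observed rate: 0.15 * 0.40 + 0.85 * 0.03, about 8.6%,
# nearly triple the truth, even though 85% of respondents are real.
```

A modest fraudulent minority nearly triples the measured prevalence, and demographic weighting cannot undo it when the fakes carry plausible demographic profiles.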
The measurement instrument becomes part of the phenomenon it is trying to study.
AI intensifies that effect.
These are not the bots of five years ago. They adopt coherent personas. They simulate human timing. They respond in ways consistent with demographic profiles. They pass nearly all standard fraud checks.
At CIPHER, the structural difference was laid out clearly.
Probability panels rely on known sampling frames, live verification, and recontactable participants. Large crowd-sourced panels rely on opt-in recruitment, affiliate traffic, and minimal identity validation.
The first produces verified respondents with low synthetic risk.
The second carries high exposure to bots, duplicates, and fabricated data.
The conclusion was direct:
Nonprobability methods should not be used when accuracy is critical.
The 2025 scoreboard
There is an uncomfortable reality.
Abacus was more accurate than EKOS on election night.
The final result showed a Liberal margin of 2.5 points. Abacus projected a two-point lead. EKOS projected a majority that did not materialize.
That matters.
But a deeper pattern emerges beneath that snapshot.
In January 2025, while nonprobability trackers still showed strong Conservative dominance, Probit detected a Liberal breakout. The signal was early and clear. By the time nonprobability polls reflected the shift, it had already occurred.
Graves describes this as “unexplained convergence.”
Probability and nonprobability methods tend to align during stable periods. They diverge during periods of rapid change. Then they converge again near election day.
If methods are applied consistently, estimates should not systematically diverge during turbulence and reconverge precisely when accuracy is most publicly evaluated.
That pattern suggests something important:
A model that cannot detect change in real time is not measuring behaviour. It is catching up to it.
What the by-elections revealed
Recent by-elections added another layer.
Across three ridings, Conservative support did not decline gradually. It collapsed.
In Scarborough Southwest, support dropped from roughly 31 percent to 18 percent. In University–Rosedale, the party fell from second to third, losing more than 11 points. In Terrebonne, the Conservative share dropped from 18.2 percent to 3.3 percent. The NDP registered 0.5 percent.
Neither firm polled these ridings directly. But the results expose a structural limitation in past-vote weighting.
In January 2025, probability-based data detected a genuine behavioural shift weeks before nonprobability panels registered it. The by-election results represent a similar rapid shift.
Now consider how past-vote weighting would respond.
The method corrects toward previous election results. In Terrebonne, that benchmark is 18.2 percent.
But the real result was 3.3 percent.
The correction would move the estimate in the wrong direction.
The stronger the correction, the larger the error.
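A stylized calculation makes the failure mode concrete. Treat the correction as a pull toward the old benchmark, which is a simplification of the actual reweighting, and assume a hypothetical raw sample that happens to sit near the true collapsed level:

```python
# Stylized model of past-vote correction in the Terrebonne case.
# The "pull toward the benchmark" is a simplification of the real
# reweighting, and the raw-sample figure of 5% is hypothetical.
past_benchmark = 0.182  # Conservative share, previous election
true_support = 0.033    # actual by-election result
raw_sample = 0.05       # assumed raw reading, close to the truth

for strength in (0.0, 0.5, 1.0):
    corrected = raw_sample + strength * (past_benchmark - raw_sample)
    error = abs(corrected - true_support)
    print(f"correction strength {strength:.1f}: "
          f"estimate {corrected:.1%}, error {error:.1%}")
# strength 0.0: estimate 5.0%, error 1.7%
# strength 1.0: estimate 18.2%, error 14.9%
```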
This is not an execution flaw. It is a model limitation.
Past-vote weighting addresses sampling imbalance.
It fails when the underlying behaviour itself changes.
And it cannot distinguish between the two if the authenticity of respondents is uncertain.
The architectural divide
At this point, the distinction becomes structural.
Abacus corrects after entry.
EKOS verifies before entry.
One is an arms race.
The other is a gate.
The economics favour the attacker.
Five cents versus one dollar fifty.
Near-perfect evasion versus imperfect detection.
The Insights Association acknowledged in January 2026 that existing fraud detection methods are increasingly inadequate.10 Verification typically applies only to core panels. When quotas require expansion, vendors move off-panel, where fraud risk is highest.
No amount of weighting can repair a dataset built on uncertain identity.
The loop closes
Step back and consider the full system.
AI shapes belief through selection, framing, and reinforcement.
AI corrupts measurement through synthetic respondents that pass validation and introduce structured bias.
The two systems reinforce each other.
Distorted inputs produce distorted outputs. Those outputs are reported as reality. That reported reality shapes the next round of belief.
Nonprobability methods, in particular, risk amplifying the very phenomena they attempt to measure.
This is the polling paradox.
Bad measurement does not just misread reality.
It can begin to create it.
What follows
This is not about which pollster is right.
It is about the speed at which the threat is outpacing the tools used to understand it.
One approach measures contamination directly. The other observes its effects and attempts to compensate. Both are responding to the same underlying problem.
But neither weighting nor verification can fully resolve it on its own.
What democratic systems require is not simply better correction.
They require participation.
Real people. Verified participants. Broad engagement across all segments of the population, including those who are increasingly absent from measurement systems.
The problem is not only that synthetic respondents are entering the sample.
It is that real respondents are leaving it.
Probability panels are not perfect. Their errors are grounded in real human behaviour and turnout dynamics.
Nonprobability panels are not invalid. They can produce accurate snapshots under certain conditions.
But the underlying architecture is exposed.
And that exposure increases with every improvement in AI capability, every advance in synthetic persona design, and every reduction in cost.
The question is no longer who had the best poll in the last election.
The question is whether the systems used to understand society can keep pace with the systems being built to distort it.
Because the lie is cheaper than the truth.
And now the fake respondent is cheaper than the real one.
Sources
1. Sharma, S., Tong, M., Korbak, T., et al. (2023). Towards understanding sycophancy in language models. https://arxiv.org/abs/2310.13548
2. Massachusetts Institute of Technology. (2026). Personalization features can make LLMs more agreeable. https://news.mit.edu/2026/personalization-features-can-make-llms-more-agreeable-0218
3. Dutwin, D. (2026). The fraud problem reshaping survey research. NORC at the University of Chicago. https://www.norc.org/research/library/fraud-problem-reshaping-survey-research.html
4. Goodrich, B., Fenton, M., Penn, J., Bovay, J., & Mountain, T. (2023). Battling bots: Experiences and strategies to mitigate fraudulent responses in online surveys. Applied Economic Perspectives and Policy, 45(2), 762–784. https://onlinelibrary.wiley.com/doi/10.1002/aepp.13353
5. Kennedy, C., Mercer, A., & Lau, A. (2024, March 5). Online opt-in polls can produce misleading results, especially for young people and Hispanic adults. Pew Research Center. https://www.pewresearch.org/short-reads/2024/03/05/online-opt-in-polls-can-produce-misleading-results-especially-for-young-people-and-hispanic-adults/
6. United States Attorney’s Office, District of New Hampshire. (2025, April 15). Eight defendants indicted in international conspiracy to bill $10 million for fraudulent market survey data [Press release]. U.S. Department of Justice. https://www.justice.gov/usao-nh/pr/eight-defendants-indicted-international-conspiracy-bill-10-million-fraudulent-market
7. Westwood, S. J. (2025). The potential existential threat of large language models to online survey research. Proceedings of the National Academy of Sciences, 122(47), e2518075122. https://www.pnas.org/doi/10.1073/pnas.2518075122
8. Fournier, P. J. (n.d.). About 338Canada. 338Canada. https://338canada.com/about.htm
9. Builders Movement. (2024, June 12). Misleading survey results can amplify animosity and fear. https://buildersmovement.org/2024/06/12/misleading-survey-results-amplify-animosity-fear/
10. Insights Association. (2025, April 22). Insights Association issues statement regarding federal indictments of Op4G/SliceMR and SNWare Research. https://www.insightsassociation.org/News/Industry-News/ArticleID/1651/Insights-Association-Issues-Statement-Regarding-Federal-Indictments-of-Op4G-SliceMR-and-SNWare-Research