The Lie Is Winning Because It’s Cheaper
Truth isn’t losing on merit. It’s losing on cost, speed, and scale.
Justin Ling is right.
Not “right” in the way we say it to be polite before disagreeing. Right in the way that forces you to stop pretending.
We lost the battle against misinformation. Maybe we lost it years ago. Ling’s case is not that the people are stupid, or that the public is uniquely gullible, or that we just failed to explain things well enough. His case is that the war is being fought on terrain where truth is structurally disadvantaged—where the cheapest thing to produce is an emotionally resonant lie, and the most expensive thing to do is verify a mundane fact in real time.
The key evidence he uses is the kind that collapses the entire “just educate them” industry. In the Princeton/Northwestern experiment he cites, people could identify which headlines were fake—then shared the fake ones anyway, especially when the headlines made them angry. Outrage works as social currency: “outrageous if true” lets you signal allegiance without paying an epistemic price.
This is the part the institutions still refuse to metabolize: misinformation isn’t only a failure of belief. It’s a behaviour shaped by incentives. And the incentives are now optimized—by platforms, by politics, by the attention economy—for engagement, not accuracy.
Views, Rage, Repeat: How the Conservative Party Became a Media Powerhouse
Have you ever found yourself arguing with a Conservative online and felt like they were living in an alternate universe? Like no matter how many facts you offer, it’s as if you’re speaking entirely different languages? That’s not a coincidence. It’s by design.
So yes: the debunking treadmill is dead on arrival.
Even in the best case, corrections don’t reliably erase misinformation’s influence on reasoning. There’s a continued influence effect; the ghost of the false claim lingers. And while “backfire effects” are more contested than the internet thinks, the overall story doesn’t change: the system’s velocity eats the correction capacity.
Ling’s prescription—informational retreat, consume less, stop fighting on the feed—is therefore not cowardice. It’s an adaptive strategy for individuals in a toxic environment.
If Justin is right about the battle, that doesn’t mean the only rational response is retreat. It means we’ve been fighting the wrong war.
The false baseline
The comforting myth behind most “misinformation” discourse is that there used to be a world in which truth won, and social media broke it.
Canada’s own governing architecture tells a different story. If truth was naturally accessible, we wouldn’t need an Access to Information Act. We wouldn’t need open government. We wouldn’t need proactive disclosure. These policy movements exist because the default condition of public information is not “available”—it is locked behind process, format, and institutional convenience.
The Access to Information Act promises a right of access and a 30-day response baseline. And yet official review material acknowledges persistent declines in meeting legislated timelines, along with the structural reasons those delays compound: extensions, consultations, unclear definitions of “reasonable,” and more.
Worse: Canada still lacks foundational “truth scaffolding” that other democracies treat as normal, like a robust duty-to-document regime. The ATI review material is explicit: ATIA confers a right of access to records, but it doesn’t create a general legal duty to create them, and record-keeping compliance measurement is patchy.
This is not abstract governance nerd trivia. This is the substrate misinformation exploits: the lie wins by default when the truth requires an afternoon and a spreadsheet.
The war we can still fight
Justin’s argument is that you can’t fix the problem by fact-checking harder. I agree. The cost curve is wrong: slop is cheap; verification is expensive.
So the only plausible structural response is: make verification cheap.
Not “easy” as an aspiration; cheap in an engineering sense.
That means building truth infrastructure: systems that make the verified civic record frictionless to access and tamper-evident to manipulate.
It’s not a promise that society will become honest. It’s a promise that lying about verifiable civic facts becomes detectable.
Pervasive Algorithmic Shaping: The AI Problem Nobody Has Named Yet
There are three layers.
Layer one: structured disclosure. Canada already has the skeleton of this. The Open Government portal runs on CKAN and supports machine-to-machine access via an API. Proactive disclosure datasets like contracts are published with explicit JSON schemas and API-backed table views.
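To make that skeleton concrete, here is a minimal sketch of what machine-to-machine retrieval looks like against a CKAN portal. The base URL, query terms, and field access are assumptions for illustration; the portal's own API documentation is the authority on exact endpoints and dataset identifiers.

```python
# A minimal sketch of machine-to-machine access to a CKAN-backed open data portal.
# Assumption: the CKAN action API is exposed at this base URL; verify against the
# portal's documentation before relying on it.
import requests

CKAN_BASE = "https://open.canada.ca/data/en/api/3/action"  # assumed endpoint

def search_datasets(query: str, rows: int = 5) -> list[dict]:
    """Search the dataset catalogue via CKAN's standard package_search action."""
    resp = requests.get(
        f"{CKAN_BASE}/package_search",
        params={"q": query, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    if not payload.get("success"):
        raise RuntimeError("CKAN API reported failure")
    return payload["result"]["results"]

if __name__ == "__main__":
    for dataset in search_datasets("proactive disclosure contracts"):
        # Each CKAN package carries metadata plus a list of downloadable resources.
        print(dataset["title"])
        for res in dataset.get("resources", []):
            print("  ", res.get("format"), res.get("url"))
```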
But “some structured data exists” is not the same as “the civic record is reliably machine-legible.”
What we need is a mandatory, legislated structured disclosure standard for core democratic data: procurement, grants, votes, committee proceedings, lobbying, regulatory decisions—published as versioned, machine-readable records with consistent identifiers and timestamps, where the human-readable pages are rendered from the structured record rather than treated as the canonical artifact.
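As a sketch only, here is one hypothetical shape such a canonical record could take, with the human-readable page rendered from the structured object rather than maintained separately. None of these field names are an existing government standard; they simply illustrate the "identifiers, versions, timestamps" requirement.

```python
# A hypothetical shape for a canonical structured civic record.
# Field names and identifiers are illustrative, not an existing standard.
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class CivicRecord:
    record_id: str          # stable identifier, e.g. "proc-2024-000123" (hypothetical)
    record_type: str        # "procurement", "grant", "vote", "lobbying", ...
    version: int            # increments on every amendment; prior versions are retained
    published_at: str       # ISO 8601 timestamp, UTC
    issuing_body: str       # department, committee, or agency identifier
    body: dict = field(default_factory=dict)  # the structured content itself

def render_html(record: CivicRecord) -> str:
    """The human-readable page is derived from the structured record,
    never the other way around."""
    return (
        f"<article data-record-id='{record.record_id}' data-version='{record.version}'>"
        f"<h1>{record.record_type}: {record.record_id}</h1>"
        f"<pre>{json.dumps(record.body, indent=2)}</pre>"
        f"</article>"
    )

record = CivicRecord(
    record_id="proc-2024-000123",
    record_type="procurement",
    version=1,
    published_at=datetime.now(timezone.utc).isoformat(),
    issuing_body="PSPC",
    body={"vendor": "Example Corp", "value_cad": 250000, "description": "IT services"},
)
print(json.dumps(asdict(record), indent=2))
print(render_html(record))
```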
This is not a moonshot. It’s an extension of patterns the government already uses in pockets.
Layer two: provenance. A structured record can still be quietly edited or removed. Without a tamper-evident publication trail, the “public record” is not actually a record. It’s a current snapshot.
The internet has already solved adjacent problems at scale. Certificate Transparency uses append-only public logs, built on Merkle trees, to make certificate issuance auditable and mis-issuance detectable. Trusted timestamping protocols let you publish a verifiable timestamp of a hashed record without revealing the record itself.
The civic analogue is straightforward: when a government record is published, generate a hash and anchor it in an append-only log (or an equivalent public timestamping mechanism). Publication becomes a cryptographically verifiable event. Updates become additive and attributable, not silent rewrites.
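A toy illustration of that property, assuming nothing more than a hash function and an append-only list: this is a simple hash chain rather than Certificate Transparency's Merkle-tree design, but it shows how an edit to an already-published record becomes detectable instead of silent.

```python
# A minimal sketch of tamper-evident publication: each published record is hashed,
# and the hash is appended to a chained log. Any later edit to a record, or any
# silent rewrite of the log, breaks verification.
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def canonical_bytes(record: dict) -> bytes:
    # Canonical serialization so the same record always hashes the same way.
    return json.dumps(record, sort_keys=True, separators=(",", ":")).encode()

def append_entry(log: list[dict], record: dict) -> dict:
    """Publication as an additive, attributable event in an append-only log."""
    prev = log[-1]["entry_hash"] if log else "0" * 64
    record_hash = sha256_hex(canonical_bytes(record))
    entry = {
        "index": len(log),
        "record_hash": record_hash,
        "prev_hash": prev,
        "entry_hash": sha256_hex((prev + record_hash).encode()),
    }
    log.append(entry)
    return entry

def verify_log(log: list[dict]) -> bool:
    """Recompute the chain; any rewrite of an earlier entry breaks it."""
    prev = "0" * 64
    for entry in log:
        expected = sha256_hex((prev + entry["record_hash"]).encode())
        if entry["entry_hash"] != expected or entry["prev_hash"] != prev:
            return False
        prev = entry["entry_hash"]
    return True

log: list[dict] = []
append_entry(log, {"record_id": "proc-2024-000123", "version": 1, "value_cad": 250000})
append_entry(log, {"record_id": "proc-2024-000123", "version": 2, "value_cad": 310000})
assert verify_log(log)

# A "silent rewrite" of version 1 is now detectable:
log[0]["record_hash"] = sha256_hex(b"something else")
assert not verify_log(log)
```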
This is the same conceptual move that content provenance systems like C2PA are trying to make for media: provenance doesn’t guarantee truth, but it makes origin and modification legible.
Layer three: consequence. Here Justin's pessimism is the most important thing to keep in view. Even if the record is perfect, people can still share lies for social reasons.
But consequence is not only about moral suasion. It’s also about architecture.
When claims are attached to identity and context—sometimes real name, sometimes institutionally verified role, sometimes privacy-preserving credentials—the cost of civic assertion changes. This is exactly what verifiable credential models are designed to support.
This is not a demand that everyone post under their legal name. It’s a demand that high-impact civic assertions (ads, official communications, campaign materials, institutional statements, and eventually AI-generated civic summaries) carry provenance and accountability metadata by default.
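As a rough sketch, the accountability metadata attached to such an assertion might look something like the following. The structure is loosely modelled on the W3C Verifiable Credentials data model, and every identifier in it is a hypothetical placeholder, not a real issuer or record.

```python
# A sketch of the accountability metadata a high-impact civic assertion could carry
# by default. Field names are illustrative; identifiers are hypothetical placeholders.
import json

assertion = {
    "claim": "Program X was funded at $250,000 in fiscal 2024.",
    "asserted_by": {
        "role": "Member of Parliament",                 # institutionally verified role
        "credential_type": "VerifiableCredential",      # per the W3C VC data model
        "issuer": "did:example:elections-authority",    # hypothetical issuer identifier
    },
    "evidence": [
        {
            "record_id": "proc-2024-000123",  # should resolve to a structured civic record
            "log_entry_hash": "0" * 64,       # anchor in the append-only log (placeholder)
        }
    ],
    "published_at": "2025-01-15T14:00:00Z",
}
print(json.dumps(assertion, indent=2))
```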
Where AI fits (and where it doesn’t)
Here too, Justin is right: AI cannot become the new gatekeeper. A chatbot trained on the open internet is not a truth machine. It’s a remix engine—often useful, often impressive, and structurally prone to laundering uncertainty into plausibility if you ask it questions that exceed its evidence.
But “AI shouldn’t be the oracle” does not mean AI has no role. It means the role is interface, not authority.
AI is how we make the verified record navigable at human speed—if and only if the underlying record is structured and verifiable. Otherwise, the AI becomes an accelerant for the same slop problem Justin is describing.
The correct design principle is not “train AI not to lie.” Alignment drifts. Incentives shift. Models change.
The better principle is: design systems where the absence of attributable fact is visible. Not AI that tells you the truth—AI that transparently signals when a claim has no traceable civic record behind it. The architecture doesn’t verify; it exposes the gap. An assertion either resolves to a structured, tamper-evident source, or it doesn’t—and that failure to resolve is itself meaningful information.
That’s a fundamentally different semantic contract than asking AI not to lie. A system that makes epistemic provenance legible doesn’t depend on the AI being honest—it depends on the record being structured enough that the AI’s citation failure is as informative as its citation success.
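A minimal sketch of that contract, with a hypothetical in-memory index standing in for the tamper-evident log described above: the function never rules on truth, it only reports whether a cited record resolves.

```python
# "Expose the gap": the system does not judge whether a claim is true, it reports
# whether the claim resolves to a tamper-evident record. The identifiers and the
# in-memory index below are hypothetical stand-ins for the real infrastructure.
from typing import Optional

# Pretend this is the published, append-only civic record index.
CIVIC_INDEX = {
    "proc-2024-000123": {"entry_hash": "ab12...", "version": 2},
}

def resolve_claim(claim_text: str, cited_record_id: Optional[str]) -> dict:
    """Return a provenance report, never a verdict on truth."""
    if cited_record_id is None:
        return {"claim": claim_text, "status": "unresolved",
                "detail": "No civic record cited."}
    entry = CIVIC_INDEX.get(cited_record_id)
    if entry is None:
        return {"claim": claim_text, "status": "unresolved",
                "detail": f"Cited record {cited_record_id!r} not found in the log."}
    return {"claim": claim_text, "status": "resolved",
            "record_id": cited_record_id, "entry_hash": entry["entry_hash"]}

print(resolve_claim("Program X was funded at $250,000.", "proc-2024-000123"))
print(resolve_claim("Program X was cancelled last week.", None))
```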
The Problem With Confident AI — And How We Built Around It
Hallucination is the original sin of generative AI. Ask a large language model a question it doesn’t know with certainty, and there’s a reasonable chance it will answer anyway — fluently, confidently, and incorrectly. For casual use, that’s an inconvenience. For a platform built around parliamentary accountability, it’s a fundamental design problem.
If that sounds abstract, note that the Canadian state already has a partial precedent: the Directive on Automated Decision-Making ecosystem treats transparency artifacts (Algorithmic Impact Assessments) as publishable governance objects tied to open government processes. We can apply the same “publish structured accountability artifacts” logic to the civic record itself.
So yes, Justin is right—now what?
Justin’s “consume less” advice is still good advice. The feed is not a civic environment; it’s a behavioural modification machine.
But the existence of an individual off-ramp doesn’t remove the public obligation: we still need democratic institutions that can be audited, understood, and held accountable. That requires infrastructure.
The lie wins by default when the truth requires an afternoon and a specialist. The only way to change that is to build a public record that is:
Structured enough for machines to use,
Open enough for citizens to access,
Tamper-evident enough for history to trust,
Legible enough for consequence to return.
We don’t need to “win” the misinformation war by out-posting the bad actors.
We need to make the verified civic record so cheap to retrieve—and so hard to quietly rewrite—that a large class of misinformation stops being profitable.
Justin is right about the surrender.
My argument is that surrender can be the beginning of a different fight.