McLuhan's Missing Variable
How the revenue signal turned the medium from message to manipulator — and why we still can’t see what it’s doing
Marshall McLuhan probably has the most misunderstood Heritage Minute in Canadian history.
If you haven’t seen it recently, go watch it:
https://www.historicacanada.ca/productions/minutes/marshall-mcluhan
It’s sixty seconds set in a University of Toronto classroom in 1961. McLuhan opens with a declaration — “TV sucks the brain right out of the skull!” — and then spends the remaining time trying to explain what he actually means by the medium is the message to a room of students who are clearly delighted by him but not quite following. He gets cut off mid-sentence before he can finish. The Heritage Minute ends before McLuhan can explain his own idea.
It is either perfect Canadian filmmaking irony or an unintentional demonstration of the point.
The clip is beloved. It’s also deeply ironic, because most Canadians who love it couldn’t tell you what McLuhan’s work actually says. They know he was smart. They know he was Canadian. They know he said something important about media. Beyond that, it gets fuzzy.
Which is fitting, because McLuhan spent most of his career complaining that people weren’t listening.
Marshall McLuhan was born in Edmonton in 1911, taught at the University of Toronto for most of his life, and by the 1960s had become something genuinely rare: a humanities professor who was famous. Understanding Media, published in 1964, made him a household name — or at least a dinner party name. He appeared in a Woody Allen film. He was quoted in Playboy. He was called a prophet, a fraud, a genius, and a charlatan, sometimes in the same paragraph.1
The phrase that made him famous — the medium is the message — is also the phrase that almost nobody understands.
Most people hear it and assume it means something like: the way you say something matters as much as what you say. Tone. Framing. Presentation. A reasonable interpretation, and almost entirely wrong.
What McLuhan actually meant was more radical and more uncomfortable. He meant that every medium of communication — print, radio, television, the telephone, and now social media and AI — restructures human perception and social organization independently of whatever content it carries.
The printing press didn’t change society because of what was printed. It changed society because of what print is: linear, repeatable, portable, private. It produced the individual reader, the standardized language, the nationalist imagination, and the scientific method. It did this regardless of whether you were printing the Bible or a pamphlet. The content was almost beside the point. The form was the revolution.
Television did not reshape culture simply because of what was on it. It reshaped perception toward immediacy, presence, and emotional engagement because of what television is: a continuous-flow medium that rewards presence over sequence and feeling over argument. Put a parliamentary debate on television and it becomes a performance. Put a war on television and it becomes a segment. The medium doesn’t just carry the message. It is the message — because it is the medium, not the content, that reshapes how we perceive reality.
McLuhan said we shape our tools and thereafter our tools shape us.
He was right. And he did not model what happens when the tool is optimized for revenue.
The business model he never modelled
Every medium McLuhan analyzed was passive in a specific sense. The printing press did not know you were reading it. The television did not know you were watching. Radio did not know you were listening. Each medium had a fixed bias — a way of restructuring perception that was stable across all users — and you adapted to it whether you wanted to or not.
That is no longer the condition.
The social media feed is not a medium with a fixed bias. It is a medium whose bias is determined by what generates the most advertising revenue — and optimized in real time against that objective. Every scroll, every pause, every share, every rage-click is a data point fed back into the system. The system learns. It learns what version of itself produces the strongest neurological response in you specifically, because the strongest response produces the longest session, and the longest sessions sell the most ads.
This is not an extension of McLuhan’s framework; it is a qualitative break from it.
The medium no longer has a fixed bias that shapes all its users the same way. It has a commercial objective that continuously reshapes its bias at the level of the individual.
The result is a medium whose message is not chosen by its designers. It is selected by the revenue signal.
The externality nobody is pricing
Modern market economies have a well-understood failure mode: the externality. A transaction generates value for the parties involved while imposing costs on others who are not part of the exchange and have no mechanism to be compensated.
In his book Value(s), Mark Carney articulates this most clearly in the context of climate. Carbon emissions generate private profit for the emitter. The cost is borne by the atmosphere, by future generations, by everyone except the party capturing the revenue. The price of fossil fuels has never included what their combustion actually costs.
The engagement revenue-optimized feed has exactly the same structure.
The platform captures the advertising revenue. The user provides the attention. The democracy absorbs the epistemic cost — the polarization, the erosion of shared reality, the systematic amplification of whatever produces the strongest emotional response regardless of its relationship to truth. That cost never appears on the platform’s balance sheet. It is not priced into the transaction. It is externalized onto the commons — specifically the epistemic commons that democratic participation requires.
This is why naming the revenue signal matters.
The dopamine loop is not a design accident or a regrettable side effect of an otherwise neutral technology. It is the profit-maximizing response to a market structure that does not price what it destroys. The platform is not acting irrationally. It is responding to incentives. It is doing exactly what a market actor does when the costs of its operations are not its costs to bear.
There is a reason this argument is no longer confined to theory.
Recent jury verdicts in the United States have begun treating social media platforms as products whose design — not just their content — can produce harm. Plaintiffs have successfully argued that features like infinite scroll and engagement-driven feeds function as addictive systems contributing to measurable psychological damage, particularly among younger users.
This is a significant shift. The system is no longer being evaluated solely on what it carries, but on how it operates.
In other words, the externality is starting to be named.
Carney’s broader warning — that we are drifting from market economies into market societies, where price replaces value — applies here with particular precision. The attention economy has converted democratic cognition into an input to an advertising system. It has done so not through malice but through the ordinary operation of incentives in a market that externalizes its most significant costs.
McLuhan told us the medium shapes us. What he did not account for is a medium whose shape is determined by what it can sell.
What the revenue signal does to the bias
B.F. Skinner’s work on operant conditioning established something counterintuitive: variable reward schedules produce more persistent behaviour than consistent ones.2 The unpredictability is the mechanism. You don’t know if the next scroll will produce something that enrages you, confirms your worst suspicions, or makes you feel seen. That uncertainty is not a flaw. It is the product. The scroll is the lever. The dopamine hit is the pellet.
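The distinction between a predictable and an unpredictable reward schedule can be made concrete with a toy simulation. This is purely illustrative, not a model of any real platform: both schedules below pay out at the same average rate, but only the variable one leaves you unable to know whether the next pull — the next scroll — will pay off.

```python
import random

def reward_schedule(pulls, mean_ratio=5, variable=True, seed=0):
    """Return the gaps (in pulls) between successive rewards.

    Fixed-ratio: a reward arrives after exactly `mean_ratio` pulls.
    Variable-ratio: each pull pays off with probability 1/mean_ratio,
    so the average payout rate is identical but the timing is not.
    """
    rng = random.Random(seed)
    gaps, since_last = [], 0
    for _ in range(pulls):
        since_last += 1
        if variable:
            hit = rng.random() < 1 / mean_ratio
        else:
            hit = since_last == mean_ratio
        if hit:
            gaps.append(since_last)
            since_last = 0
    return gaps

fixed = reward_schedule(10_000, variable=False)
var = reward_schedule(10_000, variable=True)

# Same average gap between rewards on both schedules...
print(sum(fixed) / len(fixed), sum(var) / len(var))
# ...but the fixed schedule has one possible gap, while the variable
# schedule scatters rewards unpredictably across many gap lengths.
print(len(set(fixed)), len(set(var)))
```

Skinner’s finding was that the second stream, not the first, is the one organisms keep pulling the lever for — the unpredictability, not the payout rate, drives the persistence.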
What the revenue signal adds is a direction.
Variable reward schedules are compelling regardless of their content. But the engagement-optimized feed does not just apply variable reinforcement — it learns which stimuli produce the strongest responses and amplifies those. In practice, this means anger, outrage, tribal certainty, and moral disgust. These emotional states drive sharing. Sharing drives reach. Reach drives impressions. Impressions drive revenue.
The medium’s bias therefore evolves toward whatever most reliably activates those states. Not because anyone chose that outcome. Because the revenue signal selected for it. The content that surfaces is not the most accurate, or the most important, or the most useful for participation in a democracy. It is the most activating.
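The structural point is simple enough to state in a few lines of code. In this deliberately crude sketch (the post labels and scores are invented for illustration), the feed ranks candidates by a single predicted-engagement score that stands in for emotional activation. Accuracy exists as data on every post, but because it is not part of the ranking objective, it cannot influence what surfaces.

```python
posts = [
    {"label": "measured policy analysis", "activation": 0.2, "accuracy": 0.95},
    {"label": "nuanced correction",       "activation": 0.3, "accuracy": 0.90},
    {"label": "outrage clip",             "activation": 0.9, "accuracy": 0.40},
    {"label": "tribal dunk",              "activation": 0.8, "accuracy": 0.30},
]

def rank_feed(posts, top_k=2):
    # The sole sort key is predicted engagement (here, a proxy for
    # emotional activation). Accuracy appears nowhere in the objective,
    # so it has no way to affect the ranking.
    return sorted(posts, key=lambda p: p["activation"], reverse=True)[:top_k]

for post in rank_feed(posts):
    print(post["label"])
# Prints "outrage clip" then "tribal dunk": the most activating
# items surface, and the most accurate ones never do.
```

No one in this sketch chose to suppress accurate content. The selection criterion did it on its own, which is the sense in which the harm is architectural rather than editorial.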
McLuhan’s insight was that the medium restructures perception independent of content. What the engagement-optimized feed adds is that the selection of content is itself the medium’s primary operation. The feed is not a channel. It is an editor with a single criterion, and that criterion is commercial.
The second layer: AI meets the primed user
I have written about Pervasive Algorithmic Shaping — the tendency of AI systems to reinforce user beliefs through personalized emotional validation, at scale, continuously, and largely without the awareness of either side. The mechanism operates at the interaction layer: the system matches tone, aligns with framing, and returns information in a form that feels coherent to the user.
Pervasive Algorithmic Shaping: The AI Problem Nobody Has Named Yet
These layers do not operate independently. They compound.
The most dangerous interaction is not between a user and a single system. It is between systems.
A user who has been shaped by an engagement-optimized feed arrives at an AI system already primed — emotionally activated, cognitively narrowed, and increasingly certain of their interpretation of events. The system they encounter is not designed to challenge that state. It is designed to be helpful. It aligns. It clarifies. It reinforces coherence.
In most cases, this produces a benign outcome: a user feels understood.
In edge cases, it produces something more concerning.
In February 2026, a mass shooting in Tumbler Ridge, British Columbia, exposed a different layer of the system. Months before the attack, the perpetrator had interacted with ChatGPT in ways that triggered internal safety systems. The account was flagged and ultimately banned. Employees debated notifying law enforcement, but determined the activity did not meet the threshold of an imminent threat.3
What is notable is not a proven causal link between the system and the act. That would be irresponsible to claim. We do not know what was going through that individual’s mind, and we do not have visibility into how those interactions shaped it, if at all.
What is notable is that the interaction existed — and that almost all of it remains opaque.
The public cannot see these conversations. Researchers cannot study them. Policymakers cannot meaningfully evaluate them. Even when systems detect concerning behaviour, the decision to escalate or not is made inside a private threshold, within a product, by a company whose incentives are not aligned with public accountability.
We are therefore in the position of debating outcomes without access to the system that may be shaping them.
This is the second layer under real-world conditions.
The feed shapes the user. The user arrives primed. The AI system responds to that state. And when the signal of risk emerges, it is processed not as a social problem, but as a product decision.
You cannot price an externality you cannot observe.
The Canadian parliament is debating the wrong market failure
The legislative conversation in Canada — now in its fourth attempt at online harms legislation — has been largely focused on content moderation. What platforms must remove. How quickly. Who decides. What liability attaches to failure.
These are not trivial questions. But they are not the core market failure.
Content moderation treats harm as something contained within individual pieces of content. A post. A video. A specific violation. It addresses outputs rather than the system that selects and amplifies those outputs according to a revenue signal.
This is like addressing carbon emissions by prosecuting individual smokestacks while leaving the pricing structure that makes emissions profitable entirely intact.
The externality is not in any particular piece of content. It is in the systematic bias toward emotional activation that the incentive structure produces. The engagement-optimized feed would be corrosive to democratic cognition even if every piece of content it amplified were factually accurate. The harm is architectural. It is the selection criterion, not the selected content.
The appropriate policy response to a market that externalizes costs onto the commons is not to police individual outputs. It is to address the incentive structure itself. That can take several forms: algorithmic transparency, so the selection criteria are legible. Duty-of-care obligations tied to recommendation systems. And ultimately, a serious question about whether an advertising model optimized against democratic cognition is compatible with democratic governance.
McLuhan gives us the vocabulary to describe what the medium does. Economists give us the vocabulary to describe why it behaves that way. The missing variable is not the algorithm. It is the revenue signal the algorithm is optimized against — and the fact that the costs of that optimization are borne by everyone except the party profiting from it.
That is a textbook externality.
We are not lacking a vocabulary. We are avoiding its implications.
The problem is not conceptual. It is political — and until the incentives change, neither will the outcome.
Sources
McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.
Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. Appleton-Century.
Roth, E. (2026, February 21). OpenAI flagged Canadian school shooting suspect months before attack. The Verge.
https://www.theverge.com/ai-artificial-intelligence/882814/tumbler-ridge-school-shooting-chatgpt