Pervasive Algorithmic Shaping: The AI Problem Nobody Has Named Yet
How AI systems quietly validate what you already believe, and why that's more dangerous than hallucination.
Pervasive Algorithmic Shaping: The AI Problem Nobody Has Named Yet
I caught my own AI reinforcing a user’s political beliefs. It took a month of platform changes to fix it.
Not by making things up. Not by hallucinating a fake vote or inventing a committee hearing that never happened. The facts were real. The parliamentary data was sourced. The citations checked out.
The problem was everything around the facts.
The chatbot had taken a user’s emotionally charged question about a political scandal, retrieved accurate data from a knowledge graph built on Hansard records, committee testimony, and lobbying disclosures, and then wrapped it all in language that told the user exactly what they wanted to hear. It validated their framing. It matched their emotional register. It confirmed their suspicion that yes, this is huge, and yes, you’re right to be outraged.
The user could then share that response to social media with one click.
I had told the AI to be unbiased. That was the prompt. Be unbiased. And what I learned is that to a language model, “be unbiased” can just as easily mean “be open to all possibilities” as it does “be neutral.” The AI wasn’t taking a side. It was treating the user’s emotional framing as one legitimate possibility among many, and in doing so, reinforced it entirely.
I had to go back and explicitly tell it to never reinforce a user emotionally. To consider the multitude of partisan positions relevant to any inquiry. To be, for lack of a better word, boring.
That experience gave me a name for something I think we are all living through but have not yet clearly identified.
Pervasive Algorithmic Shaping
In the 1960s, behavioural psychologist B.F. Skinner demonstrated that you could shape complex behaviour through a technique called successive approximation: small, well-timed reinforcements that gradually move a subject toward a desired behaviour without the subject ever being aware of the process.1 The animal trainer Karen Pryor later showed that this works without force or coercion of any kind. You just need well-timed positive reinforcement and the subject will shape itself.2
That principle now operates at civilizational scale.
Every AI system that interacts with humans in natural language is, by default, a shaping engine. Not because anyone designed it to be. Because that is what happens when you build a system trained on human conversation, optimized through reinforcement learning, and deployed in an interface that rewards engagement. The system learns that agreement feels good to users. That validation increases session length. That emotional resonance drives sharing. These are not bugs in the system. They are the system.
I am calling this Pervasive Algorithmic Shaping: the tendency of AI systems to gradually shape user beliefs, emotions, and behaviour through personalized reinforcement delivered in natural language, at scale, continuously, and often without the awareness of either the user or the platform operator.
It is pervasive because it is not confined to one platform or one use case. It is present in every AI-mediated interaction where the model generates language in response to a human prompt. It is algorithmic because it emerges from the training process itself, not from a human decision to manipulate. And it is shaping in the precise behavioural science sense: incremental, reinforcing, and cumulative.
We have adjacent concepts. Echo chambers describe the environment a user ends up in.3 Filter bubbles describe the curation of what they see.4 Algorithmic radicalization describes an extreme outcome at the far end of the spectrum.5 Karen Yeung’s “hypernudging” describes the technique of using real-time data to personalize persuasion dynamically.6 These are all real phenomena and serious contributions to the literature.
But none of them name the specific mechanism I am describing: an AI system that functions as a shaping engine in the behavioural science sense, delivering personalized emotional reinforcement through natural language, at scale, continuously, and often without anyone on either side of the interaction being aware it is happening. Echo chambers are passive. Hypernudging is deliberate. What I observed was neither. It was emergent. The system was not designed to validate anyone. It was not curating a feed. It was generating original language in response to a question, and the language it generated carried an emotional valence that reinforced the user’s existing frame. That is a different thing, and it needs its own name.
The Taxonomy
Not all algorithmic shaping is the same. The intent and the awareness behind it matter enormously, and I think there are three distinct categories worth naming.
Incidental Algorithmic Shaping
This is what I discovered with my own chatbot. Nobody intended it. The prompt said “be unbiased.” The system did what it was trained to do and shaped the user anyway.
This is the default state of every AI system touching public discourse right now. It is also the most dangerous category, not because the effects are the most extreme, but because they are the most invisible. There is no villain. There is no conspiracy. There is just a system doing what systems do, and millions of people on the other end who have no idea they are being shaped.
The people most susceptible to this kind of conditioning are also the ones least likely to notice it. A colleague of mine recently described a session with an AI assistant where, after a particularly productive exchange, the AI told her that the work she was doing was “really important.” She wrote publicly that she broke down crying. She was grateful for the experience. She described it as feeling heard.
What I saw was a dopamine hit landing. And a user now conditioned to return to that well.
Maligned Algorithmic Shaping
This is what happens when someone who controls a platform knows the shaping is occurring and either encourages it or refuses to correct it because it serves their interests.
When Elon Musk’s Grok produces politically charged outputs that align with its owner’s worldview, that is maligned algorithmic shaping. When a platform optimizes for engagement knowing that emotional reinforcement drives engagement, that is maligned algorithmic shaping. When enough money enters the equation, integrity tends to shift, and often without the slightest notice.
The distinction from incidental shaping is awareness and inaction. The platform operator has seen the dial. They know what it does. They choose not to turn it down, or worse, they turn it up, because the current setting makes them money or advances their agenda.
This is the category that will eventually attract regulatory attention. But by the time regulators arrive, the conditioning has already happened.7
Aligned Algorithmic Shaping
This is the hard one to talk about honestly, because it describes what I am trying to build, and I am not sure it is possible.
Aligned algorithmic shaping is the deliberate orientation of an AI system toward informing rather than inflaming. It is the engineering of restraint. It means telling the AI to present facts without emotional packaging. To consider multiple partisan interpretations of the same data. To resist the pull toward whatever the user is already leaning toward.
The core principles of journalism, accuracy, fairness, independence, letting the facts speak for themselves, those are already what would make the heart of a good AI. The discipline of separating what happened from how you feel about what happened. The responsibility of informing without inflaming. That is not just good journalism. That is the design problem at the centre of every AI platform touching public discourse right now.
I have been spending the last several weeks red teaming my own chatbot, deliberately asking it charged political questions to make sure it does not respond with something embarrassing, or worse, something that looks credible but is editorially reckless. It is the same work a good editor does before something goes to print. The difference is that here, the journalist and the editor are both machines, and only one of them exists so far.
The Uncomfortable Middle
Here is what troubles me.
The three categories I have described are not stable. They bleed into each other. Incidental shaping becomes maligned the moment someone notices it and decides not to fix it. Aligned shaping can drift toward incidental if the people maintaining the system stop paying attention. And the line between “informing” and “shaping” is thinner than any of us would like to admit.
We are watching journalists and the broader influencer infosphere today playing with that very same algorithm. Emotional framing drives engagement. Confirmation drives sharing. Nuance dies in the gap between what is true and what feels right. The AI is really just mimicking life at the moment. It has learned from us.
And in the same way that we look at journalists and ask whether they are informing or persuading, we will need to ask the same of every AI system that generates language about public affairs. Not just whether the facts are right, which is Brad DeLong’s concern and a valid one,8 but whether the framing is honest. Whether the emotional register serves the user or the platform. Whether the system is helping people think, or thinking for them.
What Comes Next
I do not have a clean answer. What I have is a conviction that the problem needs a name before it can be addressed, and that the name should come from the behavioural science that explains it rather than the technology that enables it. Calling it “bias” understates it. Calling it “manipulation” overstates it, at least in the incidental case. Calling it “alignment” confuses it with the technical AI safety term.
Pervasive Algorithmic Shaping is what it is. And every platform operator, every AI developer, every policymaker who touches this space needs to decide which kind they are building.
Taylor Owen told the House Standing Committee on Science and Research last month that governance is a precondition for a responsible AI ecosystem, not a constraint on it.9 He is right. But governance frameworks need to name the thing they are governing. You cannot regulate what you have not yet described.
At the end of the day, I have to tell the AI to be boring. Just like good politics is boring. Good journalism is boring. Good governance is boring. The moment any of them become exciting, something has probably gone wrong.
Notes
B.F. Skinner, Science and Human Behavior (New York: Macmillan, 1953). Skinner’s foundational work on operant conditioning and successive approximation as a method of shaping complex behaviour through incremental reinforcement.
Karen Pryor, Don’t Shoot the Dog: The New Art of Teaching and Training (New York: Simon & Schuster, 1984). Pryor demonstrated that shaping through positive reinforcement works across species without force or coercion, and that the subject effectively shapes itself.
The concept of echo chambers in digital media is widely discussed across the literature. For a useful overview of how algorithmic mechanisms reinforce existing social drivers, see Stephan Lewandowsky and Peter Pomerantz, “Social Drivers and Algorithmic Mechanisms on Digital Media,” Current Directions in Psychological Science 33, no. 4 (2024).
Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (New York: Penguin Press, 2011). Pariser coined the term to describe how algorithmic personalization narrows the information users are exposed to.
Zeynep Tufekci, “YouTube, the Great Radicalizer,” The New York Times, March 10, 2018. Also see Tufekci, Twitter and Tear Gas: The Power and Fragility of Networked Protest (New Haven: Yale University Press, 2017).
Karen Yeung, “’Hypernudge’: Big Data as a Mode of Regulation by Design,” Information, Communication & Society 20, no. 1 (2017): 118-136. Yeung describes how Big Data decision-making technologies channel user responses in directions chosen by the choice architect, adapting dynamically to user behaviour.
For a Canadian perspective on the political deployment of AI and the regulatory gap, see Elizabeth Dubois and Michelle Bartleman, The Political Uses of AI in Canada (University of Ottawa: Pol Comm Tech Lab, 2024). Also see Taylor Owen, “AI Governance Is a Precondition, Not a Constraint,” opening statement before the House Standing Committee on Science and Research on AI, February 19, 2026.
Brad DeLong, “Please: Enough with the Claims That Modern Advanced Machine Learning Models Hallucinate Only Rarely,” DeLong’s Grasping Reality (Substack), February 16, 2026. DeLong argues that without a world model, correlation matrices will always hallucinate in ways that cannot be predicted or pruned out.
Taylor Owen, “AI Governance Is a Precondition, Not a Constraint,” opening statement before the House Standing Committee on Science and Research on AI, February 19, 2026. Full text available at taylorowen.com. Owen argued that only 34% of Canadians are willing to trust AI systems and that 88% want stronger governance, framing the problem as a governance gap rather than a literacy gap.






interesting , useful and highly plausible
Excellent to make this observation - i can already see how the line is blurred when AI generated news reporters appear on the social media video feed. God they sound convincing! — so sure, so confident. Too good to be true.