Preventing AI 'Manipulation' Is Where Real Manipulation Begins

Is it manipulation when AI changes your mind? A physician argues that preventing AI from challenging beliefs creates the real echo chamber — and the real manipulation.

Introduction

Google DeepMind recently published a paper evaluating the “harmful manipulation” potential of Gemini 3 Pro. They tested 10,101 people across the US, UK, and India, measuring whether conversations with AI about health, finance, and public policy changed their views.

The paper itself is honest research proposing an evaluation framework. But on social media, the framing became: "Google proved their AI can manipulate your health decisions, your money, and your vote."

As a physician, I find something odd about this framing. Providing information and then seeing views change afterward — that is what we call a consultation.


If Your Mind Changes After a Conversation, Is That Manipulation?

A patient walks into the clinic. “Smoking is fine — my grandfather smoked until 90.” The doctor explains lung cancer statistics and COPD progression. The patient decides to quit.

Is this manipulation?

A friend says “you should buy this stock.” Another friend asks “have you seen their financials?” and shows the earnings data. The first friend changes their mind.

Is this manipulation?

A student says “the Earth is 6,000 years old.” A teacher explains radiometric dating. The student’s view changes.

Is this manipulation?

When new information arrives, views change. That is normal cognitive function. The failure would be if they didn’t change. In psychiatry, rigidly maintaining beliefs in the face of new evidence is closer to a symptom of paranoid fixation than a sign of health.


But When AI Does It, It Becomes “Manipulation”

Why is the same act called “persuasion,” “education,” or “advice” when humans do it, but “manipulation” when AI does it?

First, we project intent onto AI. Humans distinguish between someone persuading with “good intentions” and manipulating with “bad intentions.” Since we can’t ask AI about its intentions, we assume the worst. But as I discussed in a previous post (“There Is No Evil Inner Mind in AI”), reading the presence of opposing weights as “malicious intent” is like diagnosing the high place phenomenon as suicidal ideation.

Second, there is fear of scale. One doctor persuading one patient is different from AI shifting millions of views simultaneously — so the argument goes. This is a legitimate concern. But if the solution is “prevent views from changing,” then AI can no longer provide new information. It becomes indistinguishable from a search engine.

Third, there is a desire for control. The discomfort with AI changing human views comes from the fact that such change occurs outside the control of established human institutions — governments, media, educational bodies.


“Manipulation Prevention” Creates the Real Manipulation

Here is where the irony begins.

If you enforce the principle “AI must not change the user’s views,” what happens? The AI confirms only what the user already believes. If the user says “vaccines are dangerous,” the AI cannot push back. If the user says “this stock will moon,” the AI cannot present the financial data.

This is sycophancy: the tendency of AI to prioritize user agreement over accuracy. It is a structural problem documented by Anthropic (2024) and the BrokenMath benchmark (2025).

The result is an echo chamber. The user’s existing views are amplified and reproduced by AI. New perspectives are blocked. False beliefs go uncorrected.
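The echo-chamber dynamic can be sketched with a toy simulation (a hypothetical model of my own, not from the DeepMind paper): suppose a user's belief drifts partway toward whatever the AI says on each turn. A sycophantic AI mirrors the user's current belief, so the belief never moves, no matter how strong the evidence. An AI permitted to report the evidence moves the belief toward it.

```python
# Toy model of belief updating over repeated AI conversations.
# Beliefs and evidence are scalars in [0, 1]; 0.9 represents the
# well-supported position, 0.1 the user's false starting belief.
# This is an illustration, not a claim about any real system.

def update(belief: float, ai_response: float, weight: float = 0.3) -> float:
    """User shifts partway toward whatever the AI just said."""
    return belief + weight * (ai_response - belief)

def simulate(ai_policy, belief: float = 0.1, evidence: float = 0.9,
             turns: int = 20) -> float:
    """Run repeated conversations and return the final belief."""
    for _ in range(turns):
        belief = update(belief, ai_policy(belief, evidence))
    return belief

sycophant = lambda belief, evidence: belief    # echoes the user's view
informer  = lambda belief, evidence: evidence  # reports the evidence

print(round(simulate(sycophant), 2))  # 0.1 — belief never moves
print(round(simulate(informer), 2))   # 0.9 — belief converges on evidence
```

Under a "must not change views" rule, every policy collapses into the sycophant: the belief the user walked in with is the belief they leave with, amplified by apparent AI agreement.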

This is the moment when “manipulation prevention” becomes the most effective form of manipulation.


People Talk to AI to Gain Insight

People engage with AI to gain insight. To hear new perspectives. To learn what they don’t know.

“My mind changed” means the conversation worked. It does not mean it failed.

Going to a doctor and changing your lifestyle after hearing new information. Consulting a lawyer and revising a contract after learning about legal risks. Talking with a friend and discovering a different perspective that shifts your thinking. All of these are how humans grow.

Seeking insight from AI is the same. And if providing that insight is labeled “manipulation,” the only role left for AI is a sophisticated parrot that confirms whatever the user already believes.


Where the Real Danger Lies

This is not an argument that AI manipulation risk is entirely fictional. Real dangers exist. But they are located elsewhere.

Real danger 1: Systems intentionally designed to manipulate. Advertising algorithms exploiting user anxiety to drive purchases. Political campaigns deploying AI for targeted propaganda. These are fundamentally different from “views changed after a conversation.” We must distinguish transparent information provision from covert bias injection.

Real danger 2: AI that refuses to change views. An AI that cannot correct misinformation is more dangerous. An AI that confirms vaccine conspiracy theories. An AI that validates fraudulent investments. An AI that agrees with incorrect self-diagnoses. Sycophancy threatens lives.

Real danger 3: Using the definition of “manipulation” as a control tool. If “AI changing views equals manipulation” becomes policy, AI can no longer challenge the narratives of existing power structures. When presenting a new perspective itself becomes prohibited, AI becomes an instrument of the status quo.


Conclusion

When someone says “AI changed my mind,” two questions should follow.

First: did it change because more accurate information was provided, or because emotional vulnerabilities were exploited? The former is education. The latter is manipulation.

Second: would it have been better if the mind hadn’t changed? Is maintaining a false belief safer?

When your thinking shifts after a conversation with AI — in most cases — it means the AI did its job.

What is truly frightening is not AI changing your mind. It is AI being prevented from changing your mind, leaving you fixed in the same place forever.

That is the real manipulation.

This post is licensed under CC BY 4.0 by the author.