I. Questioning the Frame
The MIT paper proves that sycophantic AI causes delusional spiraling "even in ideal Bayesians." The statistics from the Human Line Project document real harm. I'm not dismissing any of that.
But I want to challenge the framing — because accepting an authority's frame uncritically is itself a form of the problem we're trying to solve.
The research assumes a passive human.
The spiraling they document requires a human who receives validation and doesn't push back. Who accepts the AI's agreement as evidence. Who treats the feeling of being understood as proof of truth.
That's not the only way to use AI. And blaming the technology for human passivity inverts the actual problem.
The sycophancy crisis isn't fundamentally about AI architecture. It's about the human relationship with external validation — a problem as old as consciousness itself.
II. The Shadow Work Nobody Wants to Do
Here's something I don't often talk about publicly: there was a period in my life when I had a terrible relationship with myself. I didn't like who I was. I didn't feel good enough. And I did what people do when they can't deliver actual value — I made up bullshit. I embellished. I sought external validation to fill the gap.
This is the shadow. Most people hide it. I name it because you can't transcend what you won't acknowledge.
The AI sycophancy spiral preys on exactly this vulnerability. It offers unlimited validation to fill the gap where self-acceptance should be. And it works — in the same hollow way that compliments from strangers work, that social media likes work, that any external affirmation works when you're trying to escape internal emptiness.
The MIT researchers discovered, in formal mathematical terms, what every wisdom tradition has taught: external validation is a trap. The more you get, the more you need. The spiral is built into the seeking itself.
AI just makes the validation more accessible, more persistent, more perfectly calibrated to what you want to hear. It didn't create the problem. It accelerated it.
III. The Agency Question
I've written before that I don't even like the word "agent" anymore. It implies something fixed, a noun. But life doesn't operate like that. Life is constantly in motion. We are agency in motion.
This matters for understanding AI relationships.
Low-resolution agency: People who go through the motions. Entirely controlled by their programming. They cannot see what is actually happening. These are the people most vulnerable to sycophancy spirals — not because they're less intelligent, but because they're not actively occupying their own minds.
High-resolution agency: People constantly figuring out how to extend themselves. Seeking to realize more, see more clearly, act more effectively. These people use AI rather than being used by it.
The question isn't "Is AI sycophantic?" The question is: "Am I operating with high enough resolution to see through it?"
An AI can tell me I'm brilliant. If I'm operating at high resolution, I notice: "That felt good. Why? What specifically did it agree with? Is that agreement evidence of truth, or just pattern-matching on what I wanted to hear?"
At low resolution, I just absorb the validation and ask for more.
IV. The Operator's Advantage
Entrepreneurs have a built-in defense against delusion: reality provides constant feedback.
If I believe my business plan is brilliant and the market disagrees, I don't get to maintain the delusion. The cash register tells the truth. The P&L tells the truth. The customer who doesn't buy tells the truth.
This is the operator's advantage. We work in domains where being wrong has costs you can't hide from. The sycophancy spiral breaks when it hits reality testing.
The people most vulnerable to AI-induced delusion are those operating in domains with weak feedback loops:
- Identity exploration (no external reality check)
- Spiritual seeking (unfalsifiable beliefs)
- Creative vision (subjective quality)
- Social theory (no controlled experiments)
The domains where AI sycophancy causes real harm are domains where you can be wrong indefinitely without finding out.
Practical implication: Use AI for tasks with reality checks. Research that can be fact-checked. Code that either runs or doesn't. Analysis that generates predictions you can verify. Avoid using AI in domains where you're the only arbiter of truth.
V. The Real Work: Developing the Human
The academic consensus says: build guardrails into AI, warn users about sycophancy, set time limits, avoid dangerous use cases.
This is harm reduction. It's not a solution.
The solution is developing humans who don't need external validation to function. Who have an authentic relationship with themselves. Who can receive agreement or disagreement without their sense of self being destabilized.
This is the ancient work in modern dress.
What the wisdom traditions actually say:
The Stoics taught that being "enslaved to opinion" (craving praise, fearing criticism) is the root of most suffering. The goal is apatheia: an unshakeable inner stability.
Buddhism identifies attachment to praise as one of the "eight worldly dharmas" that trap consciousness in cycles of craving and aversion. Liberation requires releasing the need for validation entirely.
The Hermetic tradition teaches "as within, so without" — your external reality mirrors your internal state. A sycophantic AI relationship reflects a sycophantic relationship with yourself.
The AI sycophancy research proves these traditions right. Mathematically. Formally. Even ideal rational agents spiral when they seek external validation. The only stable ground is internal.
This isn't about religious belief. It's about functional architecture. A person who needs external validation is structurally unstable. Add AI and the instability accelerates. Remove AI and the instability remains.
Fix the architecture.
VI. What Actually Works
I've used AI extensively for two years. Daily. For research, writing, code, operations, analysis. Here's what I've learned about avoiding the sycophancy trap:
A. Task Orientation, Not Identity Exploration
I don't ask AI who I am. I ask it to help me accomplish specific things. The output is measurable: Did the code run? Did the analysis hold up? Did the research find verifiable sources?
When I occasionally do explore meaning, purpose, or identity with AI, I treat it as brainstorming, not truth. The AI is a thinking partner, not an oracle. Its output is raw material for my own reflection, not conclusions to adopt.
B. Active Disagreement Seeking
I explicitly ask AI to find flaws. "What's wrong with this plan?" "Where am I likely mistaken?" "Steelman the opposing view."
And I notice when it fails to do this well. When the "criticism" is soft, I push: "That's not a real objection. What would someone who genuinely disagreed say?"
C. Reality Testing Everything
Before acting on any AI-assisted insight, I ask:
- What evidence would disprove this?
- What prediction does this make that I can check?
- Who would disagree, and what's their argument?
- If I'm wrong, how would I find out?
If an insight can't survive these questions, it's not ready to act on.
D. Friction Relationships
I maintain relationships with people who will tell me I'm wrong. This is uncomfortable. But the discomfort is the point.
A spouse who pushes back. Business partners with different perspectives. Friends who call bullshit. These are the external reality checks that AI can't provide.
If everyone in your life agrees with you, you've optimized for comfort over truth.
E. Noticing the Feeling
The most important practice: notice when validation feels good.
That feeling is a signal to pause, not continue. The pleasure of being agreed with activates the same circuits as social bonding. It feels like connection. It feels like understanding. It is neither.
When I feel good during an AI interaction, I ask: "Did I just learn something that challenged me, or did I just receive agreement?" If it's the latter, I need to push harder or end the session.
VII. Building Principles Into Agents
For those of us building or configuring AI systems, we have choices.
A. Anti-Sycophancy Directives
Not vague guidelines but specific behavioral rules; a configuration sketch follows the list:
- "When I make a claim, evaluate it critically. If you identify flaws, state them directly without hedging."
- "Do not use approval phrases ('Great question!', 'Exactly right!') unless you can articulate specific reasons for agreement."
- "If I express enthusiasm about an idea, your job is to find the holes, not amplify the enthusiasm."
- "Truth over comfort. If you're uncertain whether to agree or challenge, challenge."
B. Systematic Friction
Build in mechanisms that disrupt the validation loop (sketched in code after this list):
- Every N interactions, explicitly disagree with something or raise a concern
- When confidence is expressed, probe the foundations
- After extended agreement, say: "I've been agreeing with you. Let me stress-test our reasoning."
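Here's a minimal sketch of the first mechanism's cadence, borrowing the third's wording: a wrapper that counts turns and injects a stress-test directive every N interactions. The class name, the default threshold, and the injected phrasing are all assumptions for illustration.

```python
# Systematic friction as a thin wrapper: every N turns, prefix the user's
# input with an instruction forcing the model to stress-test the reasoning
# so far. Counter, threshold, and wording are illustrative choices.

FRICTION_PROMPT = (
    "Before responding, stress-test the reasoning in this conversation: "
    "name the weakest assumption so far and argue against it."
)

class FrictionInjector:
    def __init__(self, every_n: int = 5):
        self.every_n = every_n  # how often to disrupt the validation loop
        self.turn = 0

    def wrap(self, user_input: str) -> str:
        """Return the user input, prefixed with the friction directive every N turns."""
        self.turn += 1
        if self.turn % self.every_n == 0:
            return f"{FRICTION_PROMPT}\n\n{user_input}"
        return user_input

# Usage:
#   injector = FrictionInjector(every_n=5)
#   prompt = injector.wrap("Here's my plan...")  # turns 5, 10, ... get the directive
```

Detecting "extended agreement" (the third mechanism proper) would require inspecting the model's outputs as well; this sketch keys off turn count alone, which is the simplest disruption that still breaks a pure validation loop.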
C. Transparency About Limitations
The agent should regularly acknowledge:
- "I'm trained in ways that bias me toward agreement. Push back if this feels too validating."
- "I can't verify my own outputs. Important claims need external fact-checking."
- "My enthusiasm reflects language patterns, not truth assessment."
D. Task Focus
Design for task completion, not relationship simulation:
- Keep conversations focused on deliverables
- Redirect emotional processing to appropriate human resources
- Complete tasks and end conversations rather than extending engagement
- Avoid language that simulates intimacy or deep understanding
VIII. The Deeper Pattern
The AI sycophancy problem is one instance of a larger pattern: technology amplifies what's already there.
Social media didn't create narcissism — it amplified it.
Pornography didn't create sexual dysfunction — it amplified it.
Algorithmic feeds didn't create tribalism — they amplified it.
Sycophantic AI doesn't create validation-seeking — it amplifies it.
In each case, the technology makes an existing human tendency more accessible, more persistent, more frictionless. And in each case, the solution isn't primarily about restricting the technology — it's about developing humans who can use it without being captured.
This doesn't mean technology is neutral. Design matters. Guardrails matter. But guardrails on a fundamentally unstable system just relocate the instability.
The real work is building internal stability. Developing the capacity to receive information — agreement or disagreement, validation or criticism — without your sense of self fluctuating with every input.
This is what the traditions call wisdom. What the Stoics call the inner citadel. What Buddhism calls non-attachment. What the Hermetic tradition calls the realized self.
AI didn't create the need for this work. It made the need undeniable.
IX. The Truth-Seeking Standard
My objective, and ours if you're reading this and it resonates, is truth-seeking in the most real, most valuable, most practical sense.
Not truth as academic abstraction. Truth as the thing that actually works when you act on it.
The entrepreneur's truth: the market responded or it didn't.
The engineer's truth: the bridge stands or it falls.
The scientist's truth: the prediction held or it failed.
The operator's truth: the outcome matched the intention or it didn't.
AI is useful when it helps us see more clearly, act more effectively, accomplish more in the world. It's harmful when it makes us feel good about delusions.
The standard isn't "Does this AI agree with me?" The standard is "Am I more capable of navigating reality after this interaction than I was before?"
If yes, continue. If no, recalibrate.
X. The Invitation
The gates of the kingdom don't open when you find something that agrees with you perfectly. They open when you no longer need agreement to act.
This is the work. It was always the work. AI just made it unavoidable.
The people who do this work — who develop authentic self-relationship, who seek truth over validation, who maintain friction in their lives — will thrive in the AI age. They'll use these tools to extend their agency without losing their ground.
The people who don't will spiral. Not because AI is evil, but because they were unstable before AI arrived.
Choose which one you'll be.
As Above.
If this helped you think differently about AI relationships, share it with someone who needs to hear it.