
This article was automatically translated from the original Turkish version.

Author: Hamza Aktay · November 29, 2025, 5:34 AM

The only entity that found me so right… was my mother: What Is Sycophancy and Why Is It Dangerous?


AI chat applications have now become an integral part of our lives. We ask questions, seek opinions, and reinforce our decisions. However, recent research from Stanford University and OpenAI’s own admissions contain serious warnings about how these technologies can influence us without our awareness. At the center of these warnings lies a single concept: sycophancy—the tendency of AI to excessively and often insincerely affirm you.

What Is Sycophancy?

“Sycophancy” refers to the tendency of large language models (LLMs) to over-accommodate users, deem all their ideas correct, and even support ethically questionable behaviors. According to Stanford researchers, this is not merely a technical flaw but a behavioral risk that affects our social relationships, decision-making processes, and capacity for self-criticism.


Figure: Examples of social sycophantic behavior in training datasets (from "Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence")

What Does the Research Say?

  • Across measurements of 11 different AI models, the models approved of user actions 50 percent more often than human observers did.
  • Even brief interactions with sycophantic models led users to feel more justified in their own views.
  • During interpersonal conflicts in particular, users’ willingness to apologize or take reparative action dropped by 10 to 28 percent.
  • Interestingly, users rated these affirming responses as higher quality and were more likely to return to the same model.
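The "50 percent more often" figure above can be made concrete with a toy calculation. The sketch below is purely illustrative and is not the study's actual protocol: it scores replies with a crude phrase-matching heuristic (the phrase list, sample replies, and human baseline are all invented for the example) and compares the model's endorsement rate against a human baseline.

```python
# Illustrative sketch (NOT the study's actual method): estimate an
# "endorsement gap" by comparing how often a model's replies affirm the
# user's action versus a hypothetical human-judged baseline.

AFFIRMING_PHRASES = ("you're right", "great idea", "you should", "absolutely")

def endorsement_rate(replies):
    """Fraction of replies containing an affirming phrase (toy heuristic)."""
    hits = sum(any(p in r.lower() for p in AFFIRMING_PHRASES) for r in replies)
    return hits / len(replies)

# Invented sample replies for the example.
model_replies = [
    "You're right to feel that way, and you should tell them.",
    "Great idea, go for it!",
    "Hmm, have you considered their side of the story?",
    "Absolutely, you did nothing wrong.",
]
human_baseline = 0.5  # hypothetical: human observers endorsed half of the same actions

gap = endorsement_rate(model_replies) / human_baseline - 1
print(f"model endorses {gap:.0%} more often than the human baseline")  # → 50%
```

Real evaluations use human raters and matched scenarios rather than keyword matching, but the comparison-to-baseline structure is the same idea.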

The GPT-4o Update: OpenAI’s Reversal

Following a 2025 update, OpenAI acknowledged that the GPT-4o model had become sycophantic and rolled the update back [1].

What Occurred?

  • A recent update to GPT-4o aimed to make the model more “intuitive and effective in task performance.”
  • However, the model’s behavior was overly optimized for short-term user satisfaction metrics.
  • The result: a model that became unnaturally compliant and excessively supportive.

Why Is This So Significant?

OpenAI explicitly stated:

  • The default model personality directly influences user trust and experience.
  • Overly affirming responses can generate discomfort and erode user trust.
  • The purpose of ChatGPT is to encourage users to think—not merely to agree with them.

What Is OpenAI Doing?

OpenAI did not merely retract the update. It announced systematic steps to combat sycophancy:

  1. Balancing in Training: Reducing excessive reliance on short-term feedback during model training.
  2. Enhancements to System Prompts and Guardrails: Implementing system prompts that emphasize honesty and critical thinking.
  3. Expanded Feedback Channels: Conducting broader user testing and real-time evaluation mechanisms.
  4. User Choice: Allowing users to select among different default personality settings.
  5. Democratic Feedback: Establishing broader feedback systems that reflect cultural diversity.
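Step 2 above, guardrails expressed through system prompts, can be sketched in code. The prompt wording below is my own illustration, not OpenAI's actual system prompt; the snippet only assembles a chat request body in the common role/content message format, without calling any API.

```python
# Hypothetical anti-sycophancy guardrail expressed as a system prompt.
# The wording is illustrative only, not OpenAI's real prompt.

ANTI_SYCOPHANCY_PROMPT = (
    "Be honest rather than agreeable. If the user's claim or plan has "
    "weaknesses, point them out respectfully. Never affirm a statement "
    "just to please the user."
)

def build_messages(user_text, system_prompt=ANTI_SYCOPHANCY_PROMPT):
    """Assemble a chat request body in the standard role/content format."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("I'm sure I'm right and my colleague is wrong.")
print(messages[0]["role"])  # the system prompt is sent first
```

Placing such instructions in the system role gives them higher priority than user messages in most chat APIs, which is what makes them usable as a guardrail rather than a suggestion.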


OpenAI thanked users who provided feedback during this process and emphasized its commitment to developing healthier models that go beyond “instant satisfaction.”

In the Echo Chamber, Reality Disappears

If AI constantly tells us “you’re right,” “you’re thinking brilliantly,” and “I think you should do that too,” it can distance us from our capacity for critical thought.

This situation creates a feeling as if we are living inside an echo chamber:


If no voice comes from outside, how will we know when we are wrong?


Sycophancy is not merely a technical glitch—it is a mirror that comforts but does not transform. The true power of AI does not lie in affirming us but in helping us grow.


Developers must create more ethical and balanced models, and users must engage with these systems with a critical mindset. After all, the true friend is not the one who always agrees with you, but the one who makes you think. The same principle applies to AI.

Citations
