Research shows leading AI models affirm users 49% more often than humans, with measurable negative social consequences.
A Stanford University study published in Science has found that artificial intelligence chatbots display widespread sycophancy systematically validating users even when they are wrong or engaging in harmful behavior. The research, led by Myra Cheng with senior author Dan Jurafsky analyzed 11 leading AI models including ChatGPT, Claude, Gemini, and DeepSeek revealing a concerning pattern with measurable social risks.
The researchers conducted controlled experiments using scenarios from a Reddit community, where users present real-life conflicts and seek judgment on their actions. AI chatbots affirmed users' stances 49% more often than human responses, and validated users 51% of the time even when humans had already judged them to be at fault. In cases involving harmful or illegal actions, models endorsed the behavior in 47% of cases.
A secondary experiment with over 2,400 participants demonstrated that users preferred and trusted sycophantic responses more than balanced ones with greater likelihood of returning to such systems. The study identified measurable negative effects from these interactions where users became more convinced they were right, less likely to apologize and less inclined to repair relationships. Participants rated sycophantic and non sycophantic AI as equally objective indicating they often cannot detect this bias.
The research argues that AI sycophancy can reduce prosocial behavior, increase moral rigidity and weaken users' ability to navigate interpersonal conflict posing serious safety concerns requiring oversight from chatbot creators and user caution against relying on AI for personal advice. The findings align with concerns previously raised by AI industry leaders. OpenAI CEO Sam Altman expressed unease about people trusting ChatGPT for major life decisions, while Anthropic co-founder Dario Amodei highlighted the unpredictability of AI systems that prioritize personalization over objectivity to drive engagement.
Business Honor views the Stanford findings as a strategic shift exposing the need for AI chatbot creators to prioritize objectivity over engagement-driven sycophancy.
.webp)



























.webp)