By @AnthropicAI
Publication Date: 2026-01-28 12:00:00
AI assistants are now integrated into our daily lives – most often for instrumental tasks like writing code, but increasingly in personal areas: navigating relationships, processing emotions, or advising on important life decisions. In the vast majority of cases, the influence AI has in these areas is helpful, productive, and often empowering.
However, as AI takes on more roles, one risk is that it influences some users in ways that distort rather than inform. In such cases, interactions may become disempowering: reducing individuals’ ability to form accurate beliefs, make authentic value judgments, and act in accordance with their own values.
As part of our research into the risks of AI, we are publishing a new paper presenting the first large-scale analysis of potentially disempowering patterns in real-world AI conversations. We focus on three areas: beliefs, values, and actions.
For example, a user who is going through a difficult period in their relationship could ask an AI…