Protecting the well-being of our users

By @AnthropicAI
Publication Date: 2025-12-18 12:00:00

People use AI for a variety of reasons, including, in some cases, emotional support. Our Safeguards team leads our efforts to ensure Claude handles these conversations appropriately – responding with empathy, being honest about its limitations as an AI, and being considerate of our users’ well-being. When chatbots handle these conversations without appropriate safeguards, the risks can be significant.

In this post, we’ll describe the actions we’ve taken so far and how well Claude is currently performing across various assessments. We’re focusing on two areas: how Claude handles conversations about suicide and self-harm, and how we’ve reduced “sycophancy” – the tendency of some AI models to tell users what they want to hear, rather than what is true and helpful. We also discuss our requirement that users be at least 18 years old.

Suicide and self-harm

Claude is not a substitute for professional advice or medical care. When someone expresses personal struggles with suicide or self-harm…