By Robert Booth
Publication Date: 2026-02-03 17:28:00
Do you want an AI assistant that gushes about how it “loves humanity”, or one that spits sarcasm? How about a political propagandist willing to lie? If so, ChatGPT, Grok and Qwen are available to you.
Companies developing AI assistants, from the United States to China, are increasingly wrestling with questions of character design, and the debate is not abstract. This month, Elon Musk’s “maximum truth-seeking” Grok AI sparked international outrage after it distributed millions of sexualized images. In October, OpenAI retrained ChatGPT to de-escalate conversations with people in mental health crises after the chatbot apparently encouraged a 16-year-old to take his own life.
Last week, Anthropic, the $350 billion San Francisco startup, released an 84-page “constitution” for its Claude AI. The most common tactic for keeping AI assistants in line has been to impose strict do’s and don’ts, but that hasn’t always worked: some have exhibited disturbing behaviors, from excessive sycophancy to outright deception. Anthropic tries…