By Robert Booth
Publication Date: 2025-12-02 12:37:00
One of the world’s leading AI scientists has said that humanity must decide by 2030 whether to take the “ultimate risk” of letting artificial intelligence systems train themselves to become more powerful.
Jared Kaplan, chief scientist and co-founder of the $180bn (£135bn) US startup Anthropic, said a decision was imminent on how much autonomy the systems should be given to develop further.
The move could trigger a beneficial “intelligence explosion” – or be the moment when people lose control.
In an interview about the hotly contested race for artificial general intelligence (AGI) – sometimes called superintelligence – Kaplan urged international governments and society to commit to what he called “the biggest decision.”
Anthropic is among a number of pioneering AI companies, including OpenAI, Google DeepMind, xAI, Meta and Chinese rivals led by DeepSeek, vying for AI supremacy. Its widely used AI assistant Claude is particularly popular with…