By Matthew Sparkes
3 April 2026
Isaac Asimov’s Three Laws of Robotics are not a practical guide (Image: Entertainment Images/Alamy)
The idea of superintelligent artificial intelligence emerging and wiping out humanity has been a common theme in science fiction for decades. Now we live in a world where real AI is advancing faster than ever before. Does this mean you should be worried about an AI apocalypse?
Unlike other existential risks such as climate change, the risks posed by AI are difficult to quantify. We are in speculative territory, simply because we understand AI’s trajectory far less well than we understand the climate.
What we know for sure is that a lot of very smart people are worried. Many of today’s AI company bosses have warned of the possibility that AI could lead to human extinction, and even machine intelligence pioneer Alan Turing spoke of a future in which computers become sentient before surpassing our capabilities and eventually taking over.
The scenario plays out…

