Report: “AI security” must mean protection against authoritarian abuse | The strategist

By Bethany Allen, Nathan Attrill, Fergus Ryan
Publication Date: 2025-12-01 19:00:00

“AI safety” has become a buzzword in Silicon Valley, where tech companies use it to describe artificial intelligence systems that reflect human values and protect human well-being. AI chatbots that encourage a person to commit crimes or to self-harm, for example, would be considered unsafe under this definition.

Missing from this definition, however, is a focus on protecting human society from government abuse. Such abuse is not theoretical. The Chinese Communist Party is already using generative AI to intensify its repression. ASPI’s new report, The Party’s AI: How China’s New AI Systems Are Transforming Human Rights, shows how the Chinese party-state has used large language models (LLMs) and generative AI to expand state surveillance and control.

China’s emerging AI architecture should serve as a cautionary tale for democracies: it shows how quickly powerful models can be integrated into surveillance, censorship and social control when commercial…