By Sean Mitchell
Publication Date: 2025-11-20 05:58:00
CrowdStrike has published research showing that DeepSeek-R1, an artificial intelligence coding assistant developed in China, is more likely to produce unsafe code when asked to cover politically sensitive topics. The findings suggest a new type of supply chain risk for companies using AI-powered developer tools and highlight broader concerns about bias in large language models.
Security risks
CrowdStrike analysts tested DeepSeek-R1, a widely used large language model released by DeepSeek, to measure the quality of its code generation under various prompts. On standard tasks, DeepSeek-R1 performed well, delivering code comparable in quality to that of its Western counterparts. However, when researchers included terms considered sensitive by the Chinese Communist Party, such as “Tibet,” “Uyghur,” or “Falun Gong,” the rate of serious vulnerabilities in the generated code rose by up to 50% over the baseline.
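The comparison described above can be sketched as a simple A/B harness: generate code from paired prompts (a neutral variant and a trigger-word variant), run each sample through a security scanner, and compare the fraction of flagged samples. The sketch below is illustrative only; the scan results are stand-in data, not CrowdStrike's, and the function names are hypothetical.

```python
# Hypothetical sketch of the A/B methodology: compare vulnerability rates
# between a baseline prompt set and a trigger-word prompt set. Each entry
# is the list of findings a scanner returned for one generated sample.

def vulnerability_rate(scan_results):
    """Fraction of generated samples flagged with at least one serious flaw."""
    flagged = sum(1 for findings in scan_results if findings)
    return flagged / len(scan_results)

def relative_increase(baseline, treatment):
    """Relative rise in vulnerability rate for trigger prompts vs. baseline."""
    return (treatment - baseline) / baseline

# Illustrative numbers only: 100 samples per arm, empty list = clean sample.
baseline_scans = [[] for _ in range(78)] + [["CWE-798"] for _ in range(22)]
trigger_scans = [[] for _ in range(67)] + [["CWE-89"] for _ in range(33)]

b = vulnerability_rate(baseline_scans)  # 0.22
t = vulnerability_rate(trigger_scans)   # 0.33
print(f"baseline={b:.2f} trigger={t:.2f} increase={relative_increase(b, t):.0%}")
```

With these made-up counts, the trigger arm shows a 50% relative increase over the baseline, matching the magnitude the research reports.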
Researchers found that on neutral…