DeepSeek's AI model proves easy to jailbreak – and worse

Amidst equal parts elation and controversy over what its performance means for AI, Chinese startup DeepSeek continues to raise security concerns. 

On Thursday, Unit 42, a cybersecurity research team at Palo Alto Networks, published results on three jailbreaking methods it employed against several distilled versions of DeepSeek's V3 and R1 models. According to the report, these efforts "achieved significant bypass rates, with little to no specialized knowledge or expertise…"

Article source: https://www.zdnet.com/article/deepseeks-ai-model-proves-easy-to-jailbreak-and-worse/
