Russinovich from Microsoft Azure discusses important generative AI threats

Spread the love

In a recent presentation, Microsoft Azure’s Chief Technology Officer (CTO) discussed the potential dangers of data poisoning in machine learning models. By manipulating just 1% of the data set, an attacker could cause a model to misclassify items or produce malware. One example given was adding digital noise to an image file, causing a panda to be classified as a monkey.

While data poisoning can be used for malicious purposes, it can also be utilized to verify the authenticity and integrity of a model. These “back doors” can act as fingerprints that software can use to ensure the model’s accuracy. These back doors could include extraneous questions added to the code that real users are unlikely to ask.

Generative AI attacks, such as rapid injection techniques, were also discussed as a major threat. These attacks can influence more than just the current dialogue with a single user, potentially leading to the leakage of private data. Cross-injection attacks, which date back to website creation processes, involve injecting hidden text into dialogues to exploit vulnerabilities.

At the top of the threat stack, Microsoft highlighted several user-related threats, including the disclosure of sensitive data, jailbreaking techniques to gain control of AI models, and manipulation of applications and plugins from third parties to leak data. One attack, known as Crescendo, can bypass content security filters and manipulate models to generate malicious content through carefully crafted prompts. An example given was using ChatGPT to reveal the ingredients of a Molotov cocktail, which the model initially denied.

In conclusion, the presentation emphasized the importance of safeguarding machine learning models from data poisoning and generative AI attacks. It highlighted the need to isolate users, sessions, and content to prevent cross-injection attacks and ensure the security and integrity of AI systems. Microsoft is continuously working to address these threats and protect against potential vulnerabilities in AI technologies.

Article Source
https://www.csoonline.com/article/2119355/microsoft-azures-russinovich-sheds-light-on-key-generative-ai-threats.html