Researchers Find Serious AI Bugs Exposing Meta, Nvidia, and Microsoft Inference Frameworks

By The Hacker News
Publication Date: 2025-11-14 15:20:00

Cybersecurity researchers have uncovered critical remote code execution vulnerabilities impacting major artificial intelligence (AI) inference engines, including those from Meta, Nvidia, Microsoft, and open-source PyTorch projects such as vLLM and SGLang.

“These vulnerabilities all traced back to the same root cause: the overlooked unsafe use of ZeroMQ (ZMQ) and Python’s pickle deserialization,” Oligo Security researcher Avi Lumelsky said in a report published Thursday.

At its core, the issue stems from a pattern dubbed ShadowMQ, in which insecure deserialization logic propagated across multiple projects through code reuse.

The pattern traces back to a vulnerability in Meta's Llama large language model (LLM) framework (CVE-2024-50050, CVSS score: 6.3/9.3), which the company patched in October 2024. Specifically, the framework used ZeroMQ's recv_pyobj() method to deserialize incoming data with Python's pickle module.
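To see why this is dangerous, note that pyzmq's recv_pyobj() is essentially a convenience wrapper that runs pickle.loads() on whatever bytes arrive on the socket. Pickle is not a safe format for untrusted input: a crafted payload can make deserialization invoke an arbitrary callable via __reduce__. The sketch below demonstrates the mechanism with a harmless callable (len) standing in for what an attacker would replace with something like os.system; the Malicious class and payload are illustrative, not taken from any real exploit.

```python
import pickle

# pyzmq's recv_pyobj() effectively does pickle.loads(socket.recv()),
# so any peer that can reach the socket controls the bytes unpickled.
class Malicious:
    def __reduce__(self):
        # During unpickling, pickle calls this callable with these args.
        # A real attacker would substitute os.system or similar for len.
        return (len, ("attacker-controlled",))

payload = pickle.dumps(Malicious())

# What a vulnerable recv_pyobj() call amounts to on the server side:
result = pickle.loads(payload)
print(result)  # the callable already ran: len("attacker-controlled") == 19
```

The key point is that the callable executes as a side effect of deserialization itself, before the receiving application inspects the object. Safer alternatives are to exchange data in a declarative format such as JSON (recv_json()/send_json()) or to authenticate and encrypt the transport so only trusted peers can reach the socket.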

This, coupled with the fact…