CVE-2024-11041
Published: 20 March 2025
Description
Adversaries may exploit remote services to gain unauthorized access to internal systems once inside a network.
Security Summary
CVE-2024-11041 affects vllm-project vllm v0.6.2, specifically the MessageQueue.dequeue() API function. The vulnerability arises because the function passes data received over sockets directly to pickle.loads, deserializing untrusted input. Classified under CWE-502 (Deserialization of Untrusted Data), it enables remote code execution (RCE).
An attacker can exploit this vulnerability remotely by sending a malicious pickle payload to the MessageQueue endpoint. If the targeted vllm instance processes the payload via dequeue(), it will execute arbitrary code on the victim's machine, potentially granting full system compromise depending on the host's privileges and environment.
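The danger described above comes from pickle's deserialization hooks: an object's __reduce__ method lets a payload name any callable to invoke at load time. The sketch below is illustrative only (the class name MaliciousPayload is invented, and a harmless eval expression stands in for an attacker's command); it is not taken from vLLM's code.

```python
import pickle

# CWE-502 pattern behind this CVE: __reduce__ tells the unpickler which
# callable to run, with which arguments, during deserialization.
class MaliciousPayload:
    def __reduce__(self):
        # A real exploit would return something like (os.system, ("<cmd>",)).
        return (eval, ("6 * 7",))

wire_bytes = pickle.dumps(MaliciousPayload())

# This mirrors what a vulnerable dequeue() path does with raw socket data:
# pickle.loads invokes the attacker-chosen callable as a side effect.
result = pickle.loads(wire_bytes)
print(result)  # 42 -- attacker-controlled code ran inside the consumer
```

Because the callable runs before any application-level validation can occur, checking the deserialized object afterward offers no protection.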
For mitigation details, refer to the Huntr advisory that reported the issue: https://huntr.com/bounties/00136195-11e0-4ad0-98d5-72db066e867f.
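As a general hardening sketch (assuming pickle cannot simply be replaced with a safe format such as JSON or msgpack, and not the specific fix vLLM shipped), Python's documented approach is to subclass pickle.Unpickler and override find_class so that payloads referencing arbitrary callables are rejected. The allowlist contents here are an example assumption.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Example allowlist; a real deployment would enumerate exactly the
    # types its protocol legitimately exchanges.
    ALLOWED = {("builtins", "list"), ("builtins", "dict")}

    def find_class(self, module, name):
        # Called whenever a payload references a global (class, function);
        # anything outside the allowlist is refused before it can run.
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data structures still round-trip normally.
print(safe_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]

# A payload that smuggles in a callable is rejected at load time.
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

try:
    safe_loads(pickle.dumps(Payload()))
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

Even so, the Python documentation warns that pickle is not designed to be secure against adversarial input, so restricting it is a mitigation of last resort compared with switching formats.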
This vulnerability is notable in the context of AI/ML infrastructure, as vLLM is a high-performance serving engine for large language models, potentially exposing deployments in production LLM inference setups.
Details
- CWE(s)
- CWE-502: Deserialization of Untrusted Data
Affected Products
- vllm-project vllm v0.6.2
AI Security Analysis
- AI Category
- APIs and Models
- Risk Domain
- Supply Chain and Deployment
- OWASP Top 10 for LLMs 2025
- None mapped
- MITRE ATLAS Techniques
- None mapped
- Classification Reason
- vLLM is a high-throughput serving engine for large language models (LLMs) that exposes APIs for model inference and serving, so it fits the 'APIs and Models' category. The vulnerability lies in its MessageQueue API, confirming the AI relevance.
MITRE ATT&CK Enterprise Techniques
- T1210: Exploitation of Remote Services
Why these techniques?
The vulnerability enables remote code execution by sending malicious pickle payloads over sockets to MessageQueue.dequeue(), facilitating exploitation of remote services.