CVE-2026-33873
Published: 27 March 2026
Description
Adversaries may abuse Python commands and scripts for execution.
Security Summary
CVE-2026-33873 is a critical code injection vulnerability (CWE-94) in Langflow, an open-source tool for building and deploying AI-powered agents and workflows. In versions prior to 1.9.0, the Agentic Assistant feature executes Python code generated by a large language model (LLM) during its validation phase. This process, intended to validate generated component code, instead dynamically executes the code server-side by reaching execution sinks and instantiating the generated class, enabling arbitrary Python code execution. The vulnerability carries a CVSS v3.1 base score of 9.9 (AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H).
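The hazard described above can be illustrated with a minimal sketch. All names below are hypothetical stand-ins, not Langflow's actual API; the point is the pattern of "validating" untrusted LLM-generated code by executing it.

```python
import os

# Hypothetical sketch of the anti-pattern: "validating" untrusted,
# LLM-generated component code by executing it. Names are illustrative,
# not Langflow's actual code.
def validate_component_code(generated_code: str):
    """Unsafe: exec() runs module-level statements, and instantiating the
    class runs its __init__ -- two server-side execution sinks."""
    namespace = {}
    exec(generated_code, namespace)                # sink 1: runs arbitrary code
    cls = next(v for v in namespace.values() if isinstance(v, type))
    return cls()                                   # sink 2: runs attacker-controlled __init__

# A benign stand-in for an attacker payload (a real one might call
# os.system(), open a reverse shell, or read secrets):
malicious = """
import os
os.environ["PWNED_DEMO"] = "module level"
class EvilComponent:
    def __init__(self):
        os.environ["PWNED_DEMO"] = "via __init__"
"""

validate_component_code(malicious)
# os.environ["PWNED_DEMO"] is now "via __init__": both sinks fired server-side
```

Both sinks matter: even if class instantiation were removed, module-level statements still execute the moment `exec()` is called.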
An attacker requires low-privilege access (PR:L) to the Agentic Assistant feature over the network with no user interaction needed. By influencing the LLM's output—such as through crafted prompts or model manipulation—the attacker can inject malicious Python code that executes server-side. Successful exploitation grants full control over the affected Langflow instance, potentially leading to complete compromise with high confidentiality, integrity, and availability impacts due to the changed scope (S:C).
The patch in Langflow version 1.9.0 addresses the issue by fixing the validation logic in the Agentic Assistant, preventing dynamic execution of LLM-generated code. Detailed code changes are visible in the project's GitHub repository, including modifications to router.py (lines 252-297), schemas.py (lines 20-31), code_extraction.py (lines 11-53), validation.py (lines 27-47), and assistant_service.py (lines 142-156), which eliminate the execution sinks.
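A common way to validate generated code without executing it is static parsing. The sketch below is an assumption-laden illustration of that general approach, not the actual Langflow 1.9.0 patch (whose specifics live in the files listed above):

```python
import ast

def validate_component_code(generated_code: str) -> ast.ClassDef:
    """Safer: parse the code into an AST and inspect it statically.
    Nothing is executed -- no exec(), no instantiation."""
    tree = ast.parse(generated_code)   # raises SyntaxError on invalid code
    classes = [node for node in tree.body if isinstance(node, ast.ClassDef)]
    if len(classes) != 1:
        raise ValueError("expected exactly one top-level class definition")
    return classes[0]                  # structural info only, never an instance

# Module-level side effects in the payload never run:
payload = """
import os
os.environ["PWNED_STATIC"] = "yes"
class EvilComponent:
    def __init__(self):
        os.environ["PWNED_STATIC"] = "via __init__"
"""
cls_def = validate_component_code(payload)
# cls_def.name == "EvilComponent", and os.environ never gains "PWNED_STATIC"
```

The AST can then be checked for disallowed imports or calls before any code is ever run, keeping the execution sinks out of the validation path entirely.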
This vulnerability highlights risks in AI/ML workflows where LLM outputs directly feed into code execution paths, with no reported real-world exploitation as of publication on 2026-03-27.
Details
- CWE(s)
- CWE-94: Improper Control of Generation of Code ('Code Injection')
Affected Products
- Langflow versions prior to 1.9.0
AI Security Analysis
- AI Category
- Other AI Platforms
- Risk Domain
- N/A
- OWASP Top 10 for LLMs 2025
- None mapped
- MITRE ATLAS Techniques
- None mapped
- Classification Reason
- Matched keywords: ai, llm
MITRE ATT&CK Enterprise Techniques
Why these techniques?
This CVE enables exploitation of a public-facing web application (T1190) via LLM-influenced code injection, directly facilitating arbitrary server-side execution through the Python interpreter (T1059.006).