CVE-2026-33654
Published: 27 March 2026
Description
nanobot is a personal AI assistant. Prior to version 0.1.6, an indirect prompt injection vulnerability exists in the email channel processing module (`nanobot/channels/email.py`), allowing a remote, unauthenticated attacker to execute arbitrary LLM instructions (and subsequently, system tools) without any interaction…
more
from the bot owner. By sending an email containing malicious prompts to the bot's monitored email address, the bot automatically polls, ingests, and processes the email content as highly trusted input, fully bypassing channel isolation and resulting in a stealthy, zero-click attack. Version 0.1.6 patches the issue.
Mitigating Controls (NIST 800-53 r5)AI
Validates email inputs to prevent malicious prompts from being ingested and processed as trusted content by the LLM.
Restricts information inputs from the email channel to block unauthorized or malformed prompts that bypass channel isolation.
Enforces flow control policies to maintain channel isolation, preventing untrusted email content from reaching LLM instructions and system tools.
Security SummaryAI
CVE-2026-33654 is an indirect prompt injection vulnerability in the email channel processing module (nanobot/channels/email.py) of nanobot, a personal AI assistant. The flaw affects versions prior to 0.1.6 and has a CVSS v3.1 base score of 9.8 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), with associated CWEs-94 (Code Injection), CWE-290 (Authentication Bypass), and CWE-1336 (Inequivalent Security Check in Implementation).
A remote, unauthenticated attacker can exploit this vulnerability by sending an email containing malicious prompts to the bot's monitored email address. The bot automatically polls, ingests, and processes the email content as highly trusted input, fully bypassing channel isolation. This enables execution of arbitrary LLM instructions and subsequently system tools without any interaction from the bot owner, resulting in a stealthy, zero-click attack with high confidentiality, integrity, and availability impacts.
The GitHub security advisory (GHSA-4gmr-2vc8-7qh3) states that version 0.1.6 patches the issue. Security practitioners should upgrade to this version to mitigate the vulnerability.
Details
- CWE(s)
Affected Products
AI Security AnalysisAI
- AI Category
- Other AI Platforms
- Risk Domain
- LLM/Generative AI Risks
- OWASP Top 10 for LLMs 2025
- None mapped
- MITRE ATLAS Techniques
- None mapped
- Classification Reason
- Matched keywords: ai, prompt injection, llm
MITRE ATT&CK Enterprise TechniquesAI
Why these techniques?
Vulnerability enables remote unauthenticated exploitation of public-facing email processing module via crafted malicious prompts, leading to arbitrary LLM and system tool execution.