Cyber Posture

CVE-2026-33654

CriticalPublic PoC

Published: 27 March 2026

Published
27 March 2026
Modified
08 April 2026
KEV Added
Patch
CVSS Score 9.8 CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H
EPSS Score 0.0023 46.0th percentile
Risk Priority 20 60% EPSS · 20% KEV · 20% CVSS

Description

nanobot is a personal AI assistant. Prior to version 0.1.6, an indirect prompt injection vulnerability exists in the email channel processing module (`nanobot/channels/email.py`), allowing a remote, unauthenticated attacker to execute arbitrary LLM instructions (and subsequently, system tools) without any interaction…

more

from the bot owner. By sending an email containing malicious prompts to the bot's monitored email address, the bot automatically polls, ingests, and processes the email content as highly trusted input, fully bypassing channel isolation and resulting in a stealthy, zero-click attack. Version 0.1.6 patches the issue.

Mitigating Controls (NIST 800-53 r5)AI

prevent

Validates email inputs to prevent malicious prompts from being ingested and processed as trusted content by the LLM.

prevent

Restricts information inputs from the email channel to block unauthorized or malformed prompts that bypass channel isolation.

prevent

Enforces flow control policies to maintain channel isolation, preventing untrusted email content from reaching LLM instructions and system tools.

Security SummaryAI

CVE-2026-33654 is an indirect prompt injection vulnerability in the email channel processing module (nanobot/channels/email.py) of nanobot, a personal AI assistant. The flaw affects versions prior to 0.1.6 and has a CVSS v3.1 base score of 9.8 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H), with associated CWEs-94 (Code Injection), CWE-290 (Authentication Bypass), and CWE-1336 (Inequivalent Security Check in Implementation).

A remote, unauthenticated attacker can exploit this vulnerability by sending an email containing malicious prompts to the bot's monitored email address. The bot automatically polls, ingests, and processes the email content as highly trusted input, fully bypassing channel isolation. This enables execution of arbitrary LLM instructions and subsequently system tools without any interaction from the bot owner, resulting in a stealthy, zero-click attack with high confidentiality, integrity, and availability impacts.

The GitHub security advisory (GHSA-4gmr-2vc8-7qh3) states that version 0.1.6 patches the issue. Security practitioners should upgrade to this version to mitigate the vulnerability.

Details

CWE(s)

Affected Products

nanobot
nanobot
0.1.4 · ≤ 0.1.4

AI Security AnalysisAI

AI Category
Other AI Platforms
Risk Domain
LLM/Generative AI Risks
OWASP Top 10 for LLMs 2025
None mapped
MITRE ATLAS Techniques
None mapped
Classification Reason
Matched keywords: ai, prompt injection, llm

MITRE ATT&CK Enterprise TechniquesAI

T1190 Exploit Public-Facing Application Initial Access
Adversaries may attempt to exploit a weakness in an Internet-facing host or system to initially access a network.
Why these techniques?

Vulnerability enables remote unauthenticated exploitation of public-facing email processing module via crafted malicious prompts, leading to arbitrary LLM and system tool execution.

Confidence: HIGH · MITRE ATT&CK Enterprise v19.0

References