CVE-2026-35030
Published: 06 April 2026
Description
LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in OpenAI (or native) format. Prior to 1.83.0, when JWT authentication is enabled (`enable_jwt_auth: true`), the OIDC userinfo cache uses `token[:20]` as the cache key. JWT headers produced by the same signing algorithm generate identical first 20 characters. This configuration option is not enabled by default, so most instances are not affected. An unauthenticated attacker can craft a token whose first 20 characters match a legitimate user's cached token; on a cache hit, the attacker inherits that user's identity and permissions. This affects deployments with JWT/OIDC authentication enabled. Fixed in v1.83.0.
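The collision is easy to reproduce: the first segment of a JWT is the base64url-encoded header, so any two tokens signed with the same algorithm begin with identical bytes, and a 20-character prefix never reaches past the header. A minimal sketch using only the standard library (the payload and signature segments below are made up for illustration):

```python
import base64
import json

def encode_header(alg: str) -> str:
    """Base64url-encode a minimal JWT header, as any JWT library would."""
    header = json.dumps({"alg": alg, "typ": "JWT"}, separators=(",", ":"))
    return base64.urlsafe_b64encode(header.encode()).rstrip(b"=").decode()

# Two unrelated tokens that merely share a signing algorithm.
victim_token = encode_header("RS256") + ".eyJzdWIiOiJ2aWN0aW0ifQ.sigA"
forged_token = encode_header("RS256") + ".eyJzdWIiOiJhdHRhY2tlciJ9.sigB"

# token[:20] falls entirely inside the shared header segment,
# so both tokens map to the same cache key.
print(victim_token[:20] == forged_token[:20])  # True
```

The encoded header `{"alg":"RS256","typ":"JWT"}` is 36 base64url characters long, so the first 20 characters of every RS256 token are identical regardless of payload or signature.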
Mitigating Controls (NIST 800-53 r5)
- SI-2 (Flaw Remediation): directly requires timely identification, reporting, and correction of the specific software flaw in JWT token caching that enables authentication bypass.
- IA-2 (Identification and Authentication): ensures the system implements robust identification and authentication for users, preventing bypass via the flawed OIDC userinfo cache.
- IA-5 (Authenticator Management): mandates proper management of JWT authenticators to avoid vulnerabilities in their processing, such as predictable cache-key collisions.
Security Summary
CVE-2026-35030 is a high-severity authentication bypass vulnerability (CVSS 9.1, CWE-287) in LiteLLM, an open-source proxy server and AI gateway for calling LLM APIs in OpenAI or native formats. The issue affects versions prior to 1.83.0 when JWT authentication is explicitly enabled via the `enable_jwt_auth: true` configuration option, which is not enabled by default. In these configurations, the OIDC userinfo cache improperly uses the first 20 characters of the JWT token (`token[:20]`) as the cache key. JWT headers generated by the same signing algorithm produce identical first 20 characters, enabling predictable cache key collisions.
An unauthenticated network attacker can exploit this vulnerability with low complexity and no privileges required by crafting a malicious JWT token whose first 20 characters match those of a legitimate user's cached token. Upon a cache hit, the attacker inherits the legitimate user's identity and associated permissions, potentially gaining unauthorized high-confidentiality and high-integrity access to the LiteLLM proxy and downstream LLM APIs.
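Conceptually, the fix is to derive the cache key from the entire token rather than a fixed-length prefix. A hedged sketch contrasting the flawed scheme with a hash-based alternative (the function names are illustrative, not LiteLLM's actual code):

```python
import hashlib

def flawed_cache_key(token: str) -> str:
    # Vulnerable scheme: any token sharing a 20-char prefix collides.
    return token[:20]

def hashed_cache_key(token: str) -> str:
    # Collision-resistant alternative: the key covers the whole token.
    return hashlib.sha256(token.encode()).hexdigest()

# Both tokens begin with the same base64url-encoded RS256 header.
victim = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.victim.sig"
forged = "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.forged.sig"

print(flawed_cache_key(victim) == flawed_cache_key(forged))  # True: collision
print(hashed_cache_key(victim) == hashed_cache_key(forged))  # False
```

Hashing the full token makes cache-key collisions as hard as finding a SHA-256 collision, which removes the attacker's ability to hit another user's cache entry by prefix matching alone.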
The vulnerability is addressed in LiteLLM version 1.83.0, which fixes the cache key generation to prevent collisions. Security practitioners should upgrade to v1.83.0 or later and review configurations to confirm JWT/OIDC authentication usage, as most deployments remain unaffected due to the non-default setting. Additional details are available in the GitHub Security Advisory at https://github.com/BerriAI/litellm/security/advisories/GHSA-jjhc-v7c2-5hh6.
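When auditing a fleet for this issue, a simple version comparison can flag unpatched installs. A minimal sketch (the helper names are ours, not part of LiteLLM; it assumes dotted numeric version strings):

```python
FIXED_VERSION = (1, 83, 0)  # first release containing the fix

def parse_version(ver: str) -> tuple:
    """Parse a dotted numeric version like '1.82.5' into a comparable tuple."""
    return tuple(int(part) for part in ver.split(".")[:3])

def is_patched(ver: str) -> bool:
    """True if the given LiteLLM version includes the CVE-2026-35030 fix."""
    return parse_version(ver) >= FIXED_VERSION

print(is_patched("1.82.5"))  # False: upgrade required
print(is_patched("1.83.0"))  # True
```

Even on patched versions, operators should still confirm whether `enable_jwt_auth` is set, since deployments without it were never exposed.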
Details
- CWE(s): CWE-287 (Improper Authentication)
- Affected Products: LiteLLM versions prior to 1.83.0
AI Security Analysis
- AI Category: APIs and Models
- Risk Domain: N/A
- OWASP Top 10 for LLMs 2025: None mapped
- MITRE ATLAS Techniques: None mapped
- Classification Reason: matched keywords "ai", "llm", "openai"
MITRE ATT&CK Enterprise Techniques
Why these techniques?
The vulnerability is an authentication bypass in a public-facing proxy server (LiteLLM) that lets unauthenticated network attackers impersonate legitimate users and gain unauthorized access, mapping directly to T1190 (Exploit Public-Facing Application).