CVE-2024-12720

Severity: High

Published: 20 March 2025

Modified: 01 August 2025
KEV Added: —
Patch: available
CVSS Score: 7.5 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H)
EPSS Score: 0.0032 (54.9th percentile)
Risk Priority: 15 (weighted: 60% EPSS · 20% KEV · 20% CVSS)

Description

CVE-2024-12720 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face transformers library: specially crafted input triggers catastrophic regex backtracking, driving CPU usage high and denying availability to users.

Security Summary

CVE-2024-12720 is a Regular Expression Denial of Service (ReDoS) vulnerability in the Hugging Face transformers library, specifically within the tokenization_nougat_fast.py file's post_process_single() function. The issue arises from a regular expression that exhibits exponential time complexity due to excessive backtracking when processing specially crafted input, resulting in high CPU usage and potential application downtime. This affects version v4.46.3, which was the latest at the time of disclosure, and is rated with a CVSS v3.1 base score of 7.5 (AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H), mapped to CWE-1333.

The vulnerability can be exploited by any remote attacker with network access, requiring low attack complexity, no privileges, and no user interaction. By supplying malicious input to the vulnerable function, an attacker triggers the ReDoS condition, causing severe resource exhaustion and denial of service that disrupts application availability.
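The mechanism can be illustrated with a minimal sketch. The pattern below is NOT the actual regex from tokenization_nougat_fast.py; it is a standard example of the CWE-1333 class, where a nested quantifier forces exponential backtracking on inputs that almost match:

```python
import re
import time

# Illustrative pattern only -- NOT the regex from tokenization_nougat_fast.py.
# Nested quantifiers such as (a+)+ backtrack exponentially on near-matching
# input like "aaa...ab".
VULNERABLE = re.compile(r"^(a+)+$")

def timed_fullmatch(pattern, text):
    """Evaluate the pattern once and return (matched, elapsed_seconds)."""
    start = time.perf_counter()
    matched = pattern.fullmatch(text) is not None
    return matched, time.perf_counter() - start

# Each extra 'a' roughly doubles the work on the non-matching input,
# which is what makes the attack cheap to send and expensive to process.
for n in (16, 18, 20):
    _, elapsed = timed_fullmatch(VULNERABLE, "a" * n + "b")
    print(f"n={n}: {elapsed:.4f}s")
```

Against a server-side tokenization endpoint, an attacker only needs to scale this input up a few more characters to pin a CPU core for minutes.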

Mitigation is available via a patch in the transformers repository at commit deac971c469bcbb182c2e52da0b82fb3bf54cccf. Security practitioners should update to a version incorporating this fix. The issue was disclosed through Huntr, with bounty details at https://huntr.com/bounties/4bed1214-7835-4252-a853-22bbad891f98.
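When auditing environments, installations can be flagged against the affected range (≤ 4.48.0, per the Affected Products listing below). The helper here is an illustrative sketch, not part of the advisory or the library, and it ignores pre-release suffixes for brevity:

```python
# Hedged sketch: flag transformers versions inside the advisory's affected
# range (<= 4.48.0). `is_affected` is a hypothetical helper, not an
# official API; dev/rc suffixes are stripped rather than compared.
def is_affected(version_str: str, last_affected=(4, 48, 0)) -> bool:
    parts = []
    for piece in version_str.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts) <= last_affected

# Example: check the locally installed version, if any.
try:
    from importlib.metadata import version
    installed = version("transformers")
    print("transformers", installed, "affected:", is_affected(installed))
except Exception:
    print("transformers not installed")
```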

This vulnerability is particularly relevant to AI/ML workflows, as the transformers library is a core component for natural language processing models, underscoring the need to validate inputs in ML tokenization pipelines.
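As an interim defense-in-depth measure in such pipelines, oversized text can be rejected before it reaches regex-heavy post-processing. The limit and wrapper name below are assumptions for illustration; the real fix remains upgrading to a patched transformers release:

```python
# Minimal input-guard sketch for a tokenization pipeline. MAX_CHARS and
# guarded_post_process are hypothetical, not part of transformers; upgrading
# to a patched release is the actual mitigation.
MAX_CHARS = 10_000  # assumed ceiling; tune for your workload

def guarded_post_process(text: str, post_process) -> str:
    """Reject oversized inputs before they hit backtracking-prone regexes."""
    if len(text) > MAX_CHARS:
        raise ValueError(f"input of {len(text)} chars exceeds {MAX_CHARS}")
    return post_process(text)

# Usage with a stand-in for the library's post-processing step:
print(guarded_post_process("  hello world  ", str.strip))
```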

Details

CWE(s)
CWE-1333

Affected Products

Vendor: huggingface
Product: transformers
Versions: ≤ 4.48.0

AI Security Analysis

AI Category: NLP and Transformers
Risk Domain: Other

ATLAS/OWASP Terms
OWASP Top 10 for LLMs 2025: None mapped
MITRE ATLAS Techniques: None mapped

Classification Reason: The vulnerability resides in the Hugging Face Transformers library, specifically in tokenization_nougat_fast.py, a core component of NLP and Transformer-based models, which confirms it as AI-related under the NLP and Transformers category.

MITRE ATT&CK Enterprise Techniques

T1499.004: Application or System Exploitation (Impact)
Adversaries may exploit software vulnerabilities that can cause an application or system to crash and deny availability to users.

Why these techniques?

The ReDoS flaw in the tokenization function allows crafted input to exhaust application resources (high CPU usage, downtime), producing an endpoint denial of service through application or system exploitation.

References