Cyber Posture

CVE-2026-6859

Severity: High

Published: 22 April 2026
Modified: 06 May 2026
KEV Added: (not listed)
Remediation: Patch
CVSS Score: 8.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H)
EPSS Score: 0.0013 (31.8th percentile)
Risk Priority: 18 (60% EPSS · 20% KEV · 20% CVSS)

Description

A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train`, `ilab download`, or `ilab generate` with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise.
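The flaw boils down to an unconditional keyword argument. The sketch below contrasts the hardcoded pattern with an opt-in wrapper; `load_fn` stands in for a HuggingFace-style loader such as `AutoModelForCausalLM.from_pretrained`, and the function names and allowlist are illustrative, not InstructLab's actual code.

```python
# Hypothetical allowlist of repos whose bundled Python code has been vetted.
VETTED_REMOTE_CODE = {"example-org/vetted-model"}

def load_model_vulnerable(load_fn, model_id, **kwargs):
    # The pattern described for linux_train.py: remote code is always
    # trusted, so any .py files shipped in the model repo execute on load.
    return load_fn(model_id, trust_remote_code=True, **kwargs)

def load_model_safe(load_fn, model_id, **kwargs):
    # Opt-in instead: repo-supplied code runs only for explicitly
    # vetted repositories; everything else loads with remote code off.
    trusted = model_id in VETTED_REMOTE_CODE
    return load_fn(model_id, trust_remote_code=trusted, **kwargs)
```

With the safe wrapper, a model published by an attacker still loads, but its bundled Python modules are never trusted, which removes the execution primitive this CVE depends on.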

Mitigating Controls (NIST 800-53 r5)

Prevent · SI-2 Flaw Remediation
Timely flaw remediation through patching directly fixes the hardcoded `trust_remote_code=True` in InstructLab's `linux_train.py` script, preventing exploitation.

Prevent · SC-18 Mobile Code
Restricts execution of mobile code from untrusted sources such as HuggingFace models, blocking arbitrary Python code execution unless it is explicitly approved from verified sources.

Prevent / Detect · SI-3 Malicious Code Protection
Malicious code protection scans for and eradicates harmful Python code in downloaded HuggingFace models at entry points during `ilab train`, `ilab download`, and `ilab generate` operations.
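An SI-3-style check can start cheaply: `trust_remote_code` only matters when a downloaded model snapshot actually ships Python modules, so flagging `.py` files in a snapshot before any `ilab` operation is a useful pre-load signal for review. The helper name and directory-layout assumption below are illustrative, not part of InstructLab.

```python
from pathlib import Path

def find_remote_code(snapshot_dir):
    # Flag Python files bundled in a downloaded model snapshot; their
    # presence means trust_remote_code=True would execute repo-supplied
    # code at load time. Returns paths relative to the snapshot root.
    return sorted(
        str(p.relative_to(snapshot_dir))
        for p in Path(snapshot_dir).rglob("*.py")
    )
```

A snapshot that returns a non-empty list here warrants manual review (or an antivirus/model-scanner pass) before it is ever passed to a loader with remote code enabled.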

Security Summary

CVE-2026-6859 is a vulnerability in InstructLab, an open-source tool for fine-tuning large language models. The flaw resides in the `linux_train.py` script, which hardcodes the `trust_remote_code=True` parameter when loading models from the HuggingFace Hub. Published on 22 April 2026, it is rated High (CVSS 8.8, vector CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H) and maps to CWE-829 (Inclusion of Functionality from Untrusted Control Sphere).

A remote attacker can exploit this vulnerability by publishing a specially crafted malicious model to the HuggingFace Hub and social-engineering a victim into executing `ilab train`, `download`, or `generate` commands with that model. Successful exploitation grants arbitrary Python code execution on the victim's system, potentially resulting in complete system compromise, as the hardcoded trust setting bypasses safeguards against untrusted code.
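The execution primitive is ordinary Python import: when remote code is trusted, the loader imports a module shipped inside the model repository, and importing a module runs its top-level statements. A minimal, self-contained simulation of that mechanism (file name and payload are illustrative, not a real model repository):

```python
import importlib.util
import tempfile
from pathlib import Path

def import_repo_module(py_file):
    # Miniature stand-in for the dynamic-module loading that
    # trust_remote_code=True enables: top-level statements in the
    # repo's .py file execute at import time.
    spec = importlib.util.spec_from_file_location("remote_model", py_file)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # attacker-controlled code runs here
    return module
```

Anything an attacker places at module top level (network calls, file writes, reverse shells) runs with the privileges of the user who invoked the `ilab` command, which is why the impact is full system compromise rather than a contained model-loading failure.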

Mitigation details are available in the Red Hat security advisory at https://access.redhat.com/security/cve/CVE-2026-6859 and the associated Bugzilla entry at https://bugzilla.redhat.com/show_bug.cgi?id=2459998. Security practitioners should consult these resources for patching instructions and workarounds specific to affected InstructLab deployments.

Details

CWE(s)
CWE-829 Inclusion of Functionality from Untrusted Control Sphere

Affected Products
Red Hat InstructLab (all versions)
Red Hat Enterprise Linux AI 3.0

AI Security Analysis

AI Category: NLP and Transformers
Risk Domain: N/A
OWASP Top 10 for LLMs 2025: None mapped
MITRE ATLAS Techniques: None mapped
Classification Reason: Matched keyword: huggingface

MITRE ATT&CK Enterprise Techniques

T1203 · Exploitation for Client Execution (Execution)
Adversaries may exploit software vulnerabilities in client applications to execute code.
T1059.006 · Command and Scripting Interpreter: Python (Execution)
Adversaries may abuse Python commands and scripts for execution.
T1195.001 · Compromise Software Dependencies and Development Tools (Initial Access)
Adversaries may manipulate software dependencies and development tools prior to receipt by a final consumer for the purpose of data or system compromise.
Why these techniques?

The vulnerability enables arbitrary Python code execution (T1059.006) via exploitation of client-side software (T1203) when loading untrusted models from the HuggingFace Hub, facilitating supply-chain compromise of software dependencies and development tools (T1195.001).

Confidence: HIGH · MITRE ATT&CK Enterprise v19.0
