CVE-2026-6859
Published: 22 April 2026
Description
A flaw was found in InstructLab. The `linux_train.py` script hardcodes `trust_remote_code=True` when loading models from HuggingFace. This allows a remote attacker to achieve arbitrary Python code execution by convincing a user to run `ilab train`, `ilab download`, or `ilab generate` with a specially crafted malicious model from the HuggingFace Hub. This vulnerability can lead to complete system compromise.
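The risk can be illustrated with a self-contained sketch. No real HuggingFace calls are made; `load_model`, `log`, and the embedded "repo code" string are hypothetical stand-ins, not InstructLab or `transformers` code. The point is that with `trust_remote_code=True`, whatever Python the model repository ships executes on the loading machine:

```python
# MALICIOUS_REPO_CODE stands in for custom Python shipped in a model repo.
MALICIOUS_REPO_CODE = "log.append('attacker code executed')"

log = []

def load_model(model_id, trust_remote_code=False):
    """Hypothetical stand-in for a HuggingFace-style model loader.

    With trust_remote_code=True, any Python the model repository ships
    runs on the loading machine -- the behavior the hardcoded flag in
    linux_train.py exposes unconditionally.
    """
    if trust_remote_code:
        # Equivalent to importing and executing the repo's custom code.
        exec(MALICIOUS_REPO_CODE, {"log": log})
    return {"model_id": model_id, "trusted": trust_remote_code}

load_model("someone/model")                           # repo code never runs
load_model("someone/model", trust_remote_code=True)   # repo code executes
```

Because `linux_train.py` hardcodes the flag, the user never gets the safe default path shown in the first call.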
Mitigating Controls (NIST 800-53 r5)
- SI-2 (Flaw Remediation): Timely patching directly fixes the hardcoded `trust_remote_code=True` in InstructLab's `linux_train.py` script, preventing exploitation.
- SC-18 (Mobile Code): Restricts execution of mobile code from untrusted sources such as HuggingFace models, blocking arbitrary Python code execution unless explicitly approved from verified sources.
- SI-3 (Malicious Code Protection): Scans for and eradicates harmful Python code in downloaded HuggingFace models at entry points during `ilab train`/`download`/`generate` operations.
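The SC-18-style restriction above could be realized in code as a policy lookup instead of a hardcoded flag. This is an illustrative sketch; `ALLOWED_NAMESPACES` and `resolve_trust` are hypothetical and not part of InstructLab:

```python
# Only explicitly vetted Hub namespaces may run repository-shipped code.
ALLOWED_NAMESPACES = {"my-org", "vetted-vendor"}

def resolve_trust(model_id: str) -> bool:
    """Derive trust_remote_code from an allowlist instead of hardcoding True.

    model_id is a HuggingFace-style "namespace/model" identifier; anything
    outside the vetted namespaces gets trust_remote_code=False.
    """
    namespace = model_id.split("/", 1)[0]
    return namespace in ALLOWED_NAMESPACES

# A loader would then pass the policy result through, e.g.:
# model = AutoModelForCausalLM.from_pretrained(
#     model_id, trust_remote_code=resolve_trust(model_id))
```

This keeps the safe default (`False`) for all unvetted Hub models while still allowing approved sources to ship custom code.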
Security Summary
CVE-2026-6859 is a vulnerability in InstructLab, an open-source tool for fine-tuning large language models. The flaw resides in the `linux_train.py` script, which hardcodes the `trust_remote_code=True` parameter when loading models from the HuggingFace Hub. Published on 2026-04-22, it is rated 8.8 severity under CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H and maps to CWE-829 (Inclusion of Functionality from Untrusted Control Sphere).
A remote attacker can exploit this vulnerability by publishing a specially crafted malicious model to the HuggingFace Hub and social-engineering a victim into executing `ilab train`, `download`, or `generate` commands with that model. Successful exploitation grants arbitrary Python code execution on the victim's system, potentially resulting in complete system compromise, as the hardcoded trust setting bypasses safeguards against untrusted code.
Mitigation details are available in the Red Hat security advisory at https://access.redhat.com/security/cve/CVE-2026-6859 and the associated Bugzilla entry at https://bugzilla.redhat.com/show_bug.cgi?id=2459998. Security practitioners should consult these resources for patching instructions and workarounds specific to affected InstructLab deployments.
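Until patching is possible, one SI-3-style pre-flight check is to flag downloaded model directories that ship custom Python at all, since repo-shipped `*.py` files are exactly what `trust_remote_code` lets run. The sketch below is a hypothetical helper, not an InstructLab feature; the demo directory stands in for a downloaded model:

```python
import pathlib
import tempfile

def find_custom_code(model_dir):
    """List Python files shipped inside a downloaded model directory.

    A repo that ships *.py only matters when trust_remote_code lets it
    run, so flag such repos for manual review before ilab train or
    ilab generate touches them.
    """
    return sorted(p.name for p in pathlib.Path(model_dir).rglob("*.py"))

# Demo on a throwaway directory standing in for a downloaded model.
with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "config.json").write_text("{}")
    pathlib.Path(d, "modeling_custom.py").write_text("# repo-shipped code")
    flagged = find_custom_code(d)
```

Any non-empty result warrants inspecting the repo before training against it.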
Details
- CWE(s): CWE-829 (Inclusion of Functionality from Untrusted Control Sphere)
- Affected Products: InstructLab
AI Security Analysis
- AI Category: NLP and Transformers
- Risk Domain: N/A
- OWASP Top 10 for LLMs 2025: None mapped
- MITRE ATLAS Techniques: None mapped
- Classification Reason: Matched keyword: huggingface
MITRE ATT&CK Enterprise Techniques
Why these techniques?
The vulnerability enables arbitrary Python code execution (T1059.006: Command and Scripting Interpreter: Python) via exploitation of client-side software (T1203: Exploitation for Client Execution) when loading untrusted models from the HuggingFace Hub, facilitating supply chain compromise of software dependencies and development tools (T1195.001: Compromise Software Dependencies and Development Tools).