Mend.io Vulnerability Database
CVE-2026-25960
March 09, 2026
vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs, but load_from_url_async uses aiohttp to make the actual HTTP requests, and aiohttp internally parses URLs with the yarl library. This vulnerability is fixed in 0.17.0.
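The root cause described above is a parser differential: the hostname the validator extracts is not guaranteed to match the hostname the client actually connects to. The following is a minimal Python sketch of that idea only, not vLLM's actual code; the denylist and probe URLs are assumptions for illustration, and no concrete bypass payload is reproduced here.

from urllib3.util import parse_url
from yarl import URL

# Hypothetical denylist of internal targets a deployment may want unreachable.
BLOCKED_HOSTS = {"127.0.0.1", "localhost", "169.254.169.254"}

def validated_host(raw_url: str):
    # Host as seen by the validation layer (urllib3), per the advisory.
    return parse_url(raw_url).host

def request_host(raw_url: str):
    # Host as seen by the HTTP client layer (yarl, used internally by aiohttp).
    return URL(raw_url).host

def check(raw_url: str) -> None:
    v = validated_host(raw_url)
    r = request_host(raw_url)
    verdict = "blocked" if v in BLOCKED_HOSTS else "allowed"
    print(f"{raw_url!r}: validator host={v!r} ({verdict}); client host={r!r}")
    if v != r:
        # If the two parsers disagree, the request can target a host the
        # validator never checked, which is the bypass class described above.
        print("  parser differential: validation and request target diverge")

# Hypothetical probe URLs for demonstration only.
for candidate in ("http://models.example.com/weights.bin",
                  "http://169.254.169.254/latest/meta-data/"):
    check(candidate)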
Affected Packages
https://github.com/vllm-project/vllm.git (GITHUB):
Affected version(s) >=v0.1.0 <v0.17.0
Fix Suggestion:
Update to version v0.17.0
vllm (PYTHON):
Affected version(s) >=0.15.1 <0.17.0
Fix Suggestion:
Update to version 0.17.0
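As an illustrative how-to (assuming a pip-managed installation), the PyPI package can be moved out of the affected range with:

pip install --upgrade "vllm>=0.17.0"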
CVSS v4
Base Score: 7.1
Attack Vector: NETWORK
Attack Complexity: LOW
Attack Requirements: NONE
Privileges Required: LOW
User Interaction: NONE
Vulnerable System Confidentiality: HIGH
Vulnerable System Integrity: NONE
Vulnerable System Availability: LOW
Subsequent System Confidentiality: NONE
Subsequent System Integrity: NONE
Subsequent System Availability: NONE
CVSS v3
Base Score: 7.1
Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Scope: UNCHANGED
Confidentiality: HIGH
Integrity: NONE
Availability: LOW
Weakness Type (CWE)
Server-Side Request Forgery (SSRF), CWE-918
EPSS
Base Score: 0.02