Mend.io Vulnerability Database
CVE-2025-25183
Published: February 07, 2025
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for an attacker to engineer hash collisions. The impact of a collision is the reuse of cache entries that were generated from different content. Given knowledge of the prompts in use and the predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2 and all users are advised to upgrade. There are no known workarounds for this vulnerability.
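To make the failure mode concrete, here is a minimal, self-contained sketch; it is not vLLM's actual implementation. The ToyPrefixCache class and the token values are hypothetical, and the collision is manufactured with the CPython quirk that hash(-1) == hash(-2), purely to show what any engineered collision would cause: one prompt's cached state being served for a different prompt.

```python
# Illustrative sketch only -- NOT vLLM's real prefix cache.
# The cache keys blocks solely on Python's built-in hash() of a token block,
# so two different blocks whose hashes collide share one cached entry.

from typing import Dict, Tuple


class ToyPrefixCache:
    def __init__(self) -> None:
        self._cache: Dict[int, str] = {}  # block hash -> cached "KV state"

    def get_or_compute(self, block: Tuple[int, ...]) -> str:
        key = hash(block)                 # built-in hash(), not content-based
        if key in self._cache:
            return self._cache[key]       # reused even if `block` differs
        value = f"kv-state-for-{block}"   # stand-in for expensive KV computation
        self._cache[key] = value
        return value


cache = ToyPrefixCache()

# Contrived collision: in CPython, hash(-1) == hash(-2) == -2, so these two
# distinct "token blocks" hash identically and the second request silently
# reuses the first request's cached state. Real token IDs are non-negative;
# this only stands in for a deliberately engineered collision.
block_a = (-1, 7, 7)
block_b = (-2, 7, 7)

print(hash(block_a) == hash(block_b))   # True on CPython
print(cache.get_or_compute(block_a))    # kv-state-for-(-1, 7, 7)
print(cache.get_or_compute(block_b))    # same entry returned for different content
```

In general, deriving the cache key from the full block content (for example, a cryptographic digest) rather than from Python's collidable hash() avoids this kind of cross-prompt reuse.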
Affected Packages
vllm (PYTHON):
Affected version(s): >=0.0.1, <0.7.2
Fix Suggestion: Update to version 0.7.2
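As a hedged illustration of enforcing the remediation in a deployment, the sketch below checks the installed vllm distribution against the patched version. It assumes vllm was installed as a normal PyPI distribution and that the packaging library is importable; both the function name and the minimum-version default are ours, not part of any official tooling.

```python
# Illustrative version gate for CVE-2025-25183, assuming `packaging` is available
# and vllm is installed as a regular distribution in the current environment.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version


def assert_patched(minimum: str = "0.7.2") -> None:
    """Fail fast if this environment still runs a vllm version affected by CVE-2025-25183."""
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        return  # vllm not installed here; nothing to check
    if installed < Version(minimum):
        raise RuntimeError(
            f"vllm {installed} is affected by CVE-2025-25183; upgrade to >= {minimum}"
        )


assert_patched()
```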
CVSS v4
Base Score: 2.1
Attack Vector: NETWORK
Attack Complexity: HIGH
Attack Requirements: NONE
Privileges Required: LOW
User Interaction: PASSIVE
Vulnerable System Confidentiality: NONE
Vulnerable System Integrity: LOW
Vulnerable System Availability: NONE
Subsequent System Confidentiality: NONE
Subsequent System Integrity: NONE
Subsequent System Availability: NONE
CVSS v3
Base Score: 2.6
Attack Vector: NETWORK
Attack Complexity: HIGH
Privileges Required: LOW
User Interaction: REQUIRED
Scope: UNCHANGED
Confidentiality: NONE
Integrity: LOW
Availability: NONE
Weakness Type (CWE): Improper Validation of Integrity Check Value
EPSS Base Score: 0.36