CVE-2025-25183


Date: February 7, 2025

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Maliciously constructed prompts can lead to hash collisions, resulting in cache reuse, which can interfere with subsequent responses and cause unintended behavior. Prefix caching makes use of Python's built-in hash() function. As of Python 3.12, hash(None) returns a predictable constant value, which makes it more feasible for someone to exploit hash collisions. The impact of a collision is the reuse of cache entries that were generated from different content. Given knowledge of the prompts in use and predictable hashing behavior, an attacker could intentionally populate the cache with a prompt known to collide with another prompt in use. This issue has been addressed in version 0.7.2, and all users are advised to upgrade. There are no known workarounds for this vulnerability.
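
The mechanism can be illustrated with a short sketch. The toy cache and the toy_block_key helper below are hypothetical simplifications, not vLLM's actual prefix-caching code; they only show why a constant hash(None) on Python 3.12+ makes cache keys easier to predict.

```python
# Illustrative sketch only; the cache below is a toy, not vLLM's implementation.
# Token blocks are integer IDs (whose hashes are not randomized), and on
# Python 3.12+ hash(None) is a fixed constant rather than address-based,
# so keys like hash((None, token_ids)) become predictable across runs.
import sys

print(sys.version_info[:2], hash(None))          # identical on every 3.12+ run

cache: dict[int, str] = {}                       # toy prefix cache keyed by built-in hash()

def toy_block_key(prev_block_hash, token_ids: tuple) -> int:
    # The first block of a prompt has no predecessor, so prev_block_hash is None.
    return hash((prev_block_hash, token_ids))

victim_key = toy_block_key(None, (101, 2057, 9906, 13))
cache[victim_key] = "KV blocks computed for the victim's prompt prefix"

# With knowledge of the prompts in use and predictable hashing, an attacker
# could search offline for a different token block whose key collides, then
# submit it so their request is served from the victim's cached blocks.
attacker_key = toy_block_key(None, (8, 15, 16, 23))   # hypothetical colliding block
if attacker_key == victim_key:
    print("collision: stale cache reused ->", cache[victim_key])
```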

Severity Score

2.6 (Low)

Weakness Type (CWE)

CWE-354: Improper Validation of Integrity Check Value

Top Fix

Upgrade Version

Upgrade to vllm version 0.7.2

CVSS v3.1

Base Score: 2.6 (Low)
Attack Vector (AV): NETWORK
Attack Complexity (AC): HIGH
Privileges Required (PR): LOW
User Interaction (UI): REQUIRED
Scope (S): UNCHANGED
Confidentiality (C): NONE
Integrity (I): LOW
Availability (A): NONE
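
For reference, the base score can be reproduced from the vector above. The snippet below is a rough worked example; the metric weights and the roundup helper come from the CVSS v3.1 specification, not from this advisory.

```python
import math

# CVSS v3.1 metric weights for AV:N/AC:H/PR:L/UI:R/S:U/C:N/I:L/A:N
AV, AC, PR, UI = 0.85, 0.44, 0.62, 0.62   # Network / High / Low (Scope Unchanged) / Required
C, I, A = 0.0, 0.22, 0.0                  # None / Low / None

iss = 1 - (1 - C) * (1 - I) * (1 - A)     # Impact Sub-Score
impact = 6.42 * iss                        # Scope Unchanged
exploitability = 8.22 * AV * AC * PR * UI

def roundup(x: float) -> float:
    """CVSS v3.1 'Roundup': smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 2.6
```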
