Mend.io Vulnerability Database
CVE-2025-46560
April 30, 2025
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
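To make the failure mode concrete, the sketch below is illustrative only (not vLLM's actual tokenizer code; the function names are invented, and only the placeholder token value is taken from the description above). It shows why rebuilding a token list with `+` concatenation inside a loop is quadratic, next to a linear alternative using in-place extension:

```python
# Illustrative sketch only -- not vLLM's code. `out = out + chunk` copies the
# whole accumulated list on every iteration, so total work grows with the
# square of the output length; `out.extend(chunk)` appends in place instead.
import time

PLACEHOLDER = "<|image_|>"  # placeholder token value from the description above

def expand_quadratic(tokens, repeat_len):
    out = []
    for tok in tokens:
        chunk = [tok] * repeat_len if tok == PLACEHOLDER else [tok]
        out = out + chunk  # copies `out` each iteration: O(n^2) overall
    return out

def expand_linear(tokens, repeat_len):
    out = []
    for tok in tokens:
        chunk = [tok] * repeat_len if tok == PLACEHOLDER else [tok]
        out.extend(chunk)  # amortized O(1) per appended element: O(n) overall
    return out

if __name__ == "__main__":
    # A crafted input stuffed with placeholders makes the quadratic path crawl
    # while the linear path stays fast -- the resource-exhaustion vector.
    crafted = [PLACEHOLDER] * 1000
    for fn in (expand_quadratic, expand_linear):
        t0 = time.perf_counter()
        fn(crafted, repeat_len=30)
        print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```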
Affected Packages
https://github.com/vllm-project/vllm.git (GITHUB):
Affected version(s): >=v0.8.0, <v0.8.5
Fix Suggestion: Update to version v0.8.5
vllm (PYTHON):
Affected version(s): >=0.8.0, <0.8.5
Fix Suggestion: Update to version 0.8.5
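For operators, a quick way to check exposure is to compare the installed vllm version against the affected range above. The helper below is a hedged sketch (the function name is ours, not from the advisory), using only the standard-library importlib.metadata API and the widely used packaging library:

```python
# Illustrative check against the affected range >=0.8.0, <0.8.5.
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version  # third-party: pip install packaging

def vllm_is_vulnerable() -> bool:
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        return False  # vllm is not installed in this environment
    return Version("0.8.0") <= installed < Version("0.8.5")

if __name__ == "__main__":
    if vllm_is_vulnerable():
        print("Affected by CVE-2025-46560: upgrade with pip install -U 'vllm>=0.8.5'")
    else:
        print("Not in the affected range (or vllm not installed).")
```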
CVSS v4
Base Score: 7.1
Attack Vector: NETWORK
Attack Complexity: LOW
Attack Requirements: NONE
Privileges Required: LOW
User Interaction: NONE
Vulnerable System Confidentiality: NONE
Vulnerable System Integrity: NONE
Vulnerable System Availability: HIGH
Subsequent System Confidentiality: NONE
Subsequent System Integrity: NONE
Subsequent System Availability: NONE
CVSS v3
Base Score: 6.5
Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: LOW
User Interaction: NONE
Scope: UNCHANGED
Confidentiality: NONE
Integrity: NONE
Availability: HIGH
Weakness Type (CWE)
Inefficient Regular Expression Complexity (CWE-1333)
EPSS
Score: 0.20