CVE-2025-48944


Date: May 30, 2025

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. These inputs are not validated before being compiled or parsed, so a single request can crash the inference worker, which remains down until it is restarted. Version 0.9.0 fixes the issue.
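
For context, the sketch below shows where the affected fields sit in a tools-enabled request to the endpoint named above. It assumes the standard OpenAI-compatible tool schema that vLLM serves; the URL, model name, and schema values are illustrative placeholders, not a proof of concept.

# Illustrative request shape only; all values are placeholders.
import requests  # assumes a vLLM server is reachable at the URL below

payload = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "hi"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "lookup",
            # JSON Schema for the tool's arguments. Before 0.9.0, the "type"
            # and "pattern" fields in this schema were compiled/parsed without
            # prior validation.
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "pattern": "^[a-z]+$"},
                },
            },
        },
    }],
    "tool_choice": "auto",
}

response = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=30)
print(response.status_code)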

Severity Score

6.5 (Medium)

Weakness Type (CWE)

CWE-20: Improper Input Validation

Top Fix


Upgrade Version

Upgrade vllm to version 0.9.0 or later.
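
As a quick sanity check before or after upgrading, a minimal sketch like the one below reports whether the installed package still falls in the affected range; it assumes the packaging library is available in the environment.

# Check whether the installed vllm falls in the affected range (>= 0.8.0, < 0.9.0).
from importlib.metadata import version
from packaging.version import Version  # assumption: packaging is installed

installed = Version(version("vllm"))
if Version("0.8.0") <= installed < Version("0.9.0"):
    print(f"vllm {installed} is affected by CVE-2025-48944; upgrade to 0.9.0 or later")
else:
    print(f"vllm {installed} is outside the affected range")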


CVSS v3.1

Base Score: 6.5 (Medium)
Attack Vector (AV): NETWORK
Attack Complexity (AC): LOW
Privileges Required (PR): LOW
User Interaction (UI): NONE
Scope (S): UNCHANGED
Confidentiality (C): NONE
Integrity (I): NONE
Availability (A): HIGH
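
For reference, the 6.5 base score follows mechanically from the published CVSS v3.1 metric weights for this vector; a minimal sketch of that arithmetic is:

import math

def roundup(value: float) -> float:
    # CVSS v3.1 Roundup (Appendix A): smallest one-decimal number >= value.
    scaled = round(value * 100000)
    return scaled / 100000.0 if scaled % 10000 == 0 else (math.floor(scaled / 10000) + 1) / 10.0

# Specification weights for AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H
av, ac, pr, ui = 0.85, 0.77, 0.62, 0.85  # PR:L weight when Scope is Unchanged
c, i, a = 0.0, 0.0, 0.56

iss = 1 - (1 - c) * (1 - i) * (1 - a)     # 0.56
impact = 6.42 * iss                       # Scope Unchanged
exploitability = 8.22 * av * ac * pr * ui
base_score = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base_score)  # 6.5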
