CVE-2026-22773
Date: January 10, 2026
vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.6.4 up to, but not including, 0.12.0, a user can crash a vLLM instance serving a multimodal model that uses the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. The image triggers a tensor dimension mismatch that raises an unhandled runtime error and terminates the entire server. This issue has been patched in version 0.12.0.
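Because the crash path is input-dependent, deployments that cannot upgrade immediately can screen incoming images before they reach the engine. The sketch below is a hypothetical pre-filter placed at a proxy or client layer; the function names and the 2-pixel threshold are assumptions, not part of vLLM or of the upstream patch, and the real fix remains upgrading to 0.12.0.

```python
# Hypothetical guard: reject degenerate images before they reach an unpatched
# vLLM Idefics3 deployment. This is NOT the upstream fix (upgrade to 0.12.0);
# it only illustrates rejecting the 1x1-pixel inputs described above.
import base64
import io

from PIL import Image

MIN_SIDE_PX = 2  # assumed threshold: 1x1 images trigger the crash


def is_image_safe(image_bytes: bytes, min_side: int = MIN_SIDE_PX) -> bool:
    """Return True if the image decodes and both sides meet the minimum size."""
    try:
        with Image.open(io.BytesIO(image_bytes)) as img:
            width, height = img.size
    except Exception:
        return False  # undecodable payloads are rejected as well
    return width >= min_side and height >= min_side


def check_base64_image(data_url: str) -> bool:
    """Validate an OpenAI-style base64 image data URL before forwarding it."""
    _, _, payload = data_url.partition("base64,")
    return is_image_safe(base64.b64decode(payload))
```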
Weakness Type (CWE)
CWE-770: Allocation of Resources Without Limits or Throttling
Top Fix
Upgrade Version
Upgrade vllm to version 0.12.0 (https://github.com/vllm-project/vllm.git, tag v0.12.0).
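As a quick sanity check after upgrading, the installed package can be compared against the patched release. This sketch uses only the standard importlib.metadata and packaging APIs; the script itself is illustrative and not part of any advisory tooling.

```python
# Verify the deployed vLLM is at or above the patched release (0.12.0).
from importlib.metadata import PackageNotFoundError, version

from packaging.version import Version

PATCHED = Version("0.12.0")

try:
    installed = Version(version("vllm"))
except PackageNotFoundError:
    raise SystemExit("vllm is not installed in this environment")

if installed < PATCHED:
    print(f"vllm {installed} is affected by CVE-2026-22773; upgrade to {PATCHED}+")
else:
    print(f"vllm {installed} includes the fix for CVE-2026-22773")
```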
CVSS v3.1
| Metric | Value |
|---|---|
| Base Score | 6.5 (Medium) |
| Attack Vector (AV) | NETWORK |
| Attack Complexity (AC) | LOW |
| Privileges Required (PR) | LOW |
| User Interaction (UI) | NONE |
| Scope (S) | UNCHANGED |
| Confidentiality (C) | NONE |
| Integrity (I) | NONE |
| Availability (A) | HIGH |
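For reference, the base score follows directly from the listed metric values under the CVSS v3.1 specification. The sketch below reproduces that arithmetic using the published first.org metric weights; it confirms a 6.5 (Medium) rating for the AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H vector.

```python
# Worked derivation of the CVSS v3.1 base score from the metrics listed above.
import math


def roundup(value: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal number >= value."""
    scaled = round(value * 100_000)
    if scaled % 10_000 == 0:
        return scaled / 100_000
    return (math.floor(scaled / 10_000) + 1) / 10


# Metric weights for a scope-unchanged vector
av, ac, pr, ui = 0.85, 0.77, 0.62, 0.85   # Network, Low, Low, None
conf, integ, avail = 0.0, 0.0, 0.56       # None, None, High

iss = 1 - (1 - conf) * (1 - integ) * (1 - avail)   # 0.56
impact = 6.42 * iss                                 # ~3.60
exploitability = 8.22 * av * ac * pr * ui           # ~2.84

base_score = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base_score)  # 6.5 -> Medium severity
```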