CVE-2025-66448
December 01, 2025
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.11.1, vLLM contained a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, this config class resolves the mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class. That call fetches and executes Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend silently runs the backend's code on the victim host. This vulnerability is fixed in 0.11.1.
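The pattern the advisory describes can be sketched roughly as follows. This is an illustrative reconstruction, not the actual vLLM source: the constructor body, the parameter names (auto_map, model_repo), and the example repository reference are assumptions. get_class_from_dynamic_module is the Hugging Face Transformers helper that downloads and imports remote module code.

```python
# Illustrative sketch of the vulnerable pattern (not the real vLLM code).
from transformers.dynamic_module_utils import get_class_from_dynamic_module


class Nemotron_Nano_VL_Config:
    def __init__(self, auto_map=None, model_repo=None, **kwargs):
        # Hypothetical reconstruction: if the loaded config.json carries an
        # auto_map entry such as
        #   {"AutoConfig": "attacker/malicious-backend--conf.MaliciousConfig"}
        # the class reference is resolved and instantiated right away,
        # without consulting the caller's trust_remote_code=False setting.
        if auto_map and "AutoConfig" in auto_map:
            cls = get_class_from_dynamic_module(
                auto_map["AutoConfig"],  # may point at a *different* repo
                model_repo,              # the benign-looking frontend repo
            )
            # Instantiation executes whatever Python the mapping pulled in.
            self.backend_config = cls(**kwargs)
```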
Affected Packages
https://github.com/vllm-project/vllm.git (GITHUB):
Affected version(s): >=v0.1.0 <v0.11.1
Fix Suggestion: Update to version v0.11.1

vllm (PYTHON):
Affected version(s): >=0.0.1 <0.11.1
Fix Suggestion: Update to version 0.11.1
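A minimal check that a deployment has picked up the fixed release, assuming the standard importlib.metadata module and the third-party packaging package are available:

```python
# Minimal sketch: confirm the installed vllm package is at or above the
# fixed release (0.11.1) named in this advisory.
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

try:
    installed = Version(version("vllm"))
except PackageNotFoundError:
    print("vllm is not installed")
else:
    if installed < Version("0.11.1"):
        print(f"vllm {installed} is affected by CVE-2025-66448; upgrade to 0.11.1 or later")
    else:
        print(f"vllm {installed} includes the fix")
```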
CVSS v4
Base Score: 7.5
Attack Vector: NETWORK
Attack Complexity: HIGH
Attack Requirements: NONE
Privileges Required: LOW
User Interaction: PASSIVE
Vulnerable System Confidentiality: HIGH
Vulnerable System Integrity: HIGH
Vulnerable System Availability: HIGH
Subsequent System Confidentiality: NONE
Subsequent System Integrity: NONE
Subsequent System Availability: NONE
CVSS v3
Base Score: 7.1
Attack Vector: NETWORK
Attack Complexity: HIGH
Privileges Required: LOW
User Interaction: REQUIRED
Scope: UNCHANGED
Confidentiality: HIGH
Integrity: HIGH
Availability: HIGH
Weakness Type (CWE)
Improper Control of Generation of Code ('Code Injection')
EPSS
Base Score: 0.21