Mend.io Vulnerability Database
CVE-2026-27893
March 26, 2026
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode "trust_remote_code=True" when loading sub-components, bypassing the user's explicit "--trust-remote-code=False" security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
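The root cause is a user-controlled flag being overwritten by a hardcoded constant at a sub-component load site. Below is a minimal sketch of that anti-pattern and its fix, assuming the Hugging Face transformers loading API; the function names are illustrative, not taken from the vLLM source.

```python
# Hypothetical sketch of the flaw described above; function names are
# illustrative and do not correspond to the actual vLLM code.
from transformers import AutoConfig

def load_subcomponent_vulnerable(model_id: str):
    # BUG: trust_remote_code is hardcoded to True, so Python code shipped
    # inside the model repository executes even when the user explicitly
    # disabled remote code trust on the vLLM command line.
    return AutoConfig.from_pretrained(model_id, trust_remote_code=True)

def load_subcomponent_patched(model_id: str, trust_remote_code: bool):
    # FIX: propagate the user's setting instead of hardcoding it.
    return AutoConfig.from_pretrained(
        model_id, trust_remote_code=trust_remote_code
    )
```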
Affected Packages
vllm (PYTHON):
Affected version(s): >=0.10.1, <0.18.0
Fix Suggestion:
Update to version 0.18.0
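To check exposure before upgrading, a quick version probe works. This is a sketch, assuming the third-party packaging library is installed alongside the standard-library importlib.metadata:

```python
# Sketch: report whether the locally installed vllm falls inside the
# affected range (>=0.10.1, <0.18.0) for CVE-2026-27893.
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version

def vllm_is_affected() -> bool:
    try:
        installed = Version(version("vllm"))
    except PackageNotFoundError:
        return False  # vllm is not installed in this environment
    return Version("0.10.1") <= installed < Version("0.18.0")

if __name__ == "__main__":
    print("Affected by CVE-2026-27893:", vllm_is_affected())
```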
CVSS v4
Base Score: 8.7
Attack Vector: NETWORK
Attack Complexity: LOW
Attack Requirements: NONE
Privileges Required: NONE
User Interaction: PASSIVE
Vulnerable System Confidentiality: HIGH
Vulnerable System Integrity: HIGH
Vulnerable System Availability: HIGH
Subsequent System Confidentiality: NONE
Subsequent System Integrity: NONE
Subsequent System Availability: NONE
CVSS v3
Base Score: 8.8
Attack Vector: NETWORK
Attack Complexity: LOW
Privileges Required: NONE
User Interaction: REQUIRED
Scope: UNCHANGED
Confidentiality: HIGH
Integrity: HIGH
Availability: HIGH
Weakness Type (CWE): CWE-693 (Protection Mechanism Failure)
EPSS
Score: 0.03