Table of contents
Securing the New Control Plane: Introducing Static Scanning for AI Agent Configurations
Today, Mend.io is proud to announce the launch of AI Agent Configuration Scanning, integrated directly into the Mend AI Scanner. By treating “Agents as Code,” we are bringing security visibility and CI-friendly enforcement to AI configurations before they reach production
The rapid adoption of AI agents has transformed the modern developer workflow. Whether it’s runtime agents like OpenClaw, or IDE-based assistants like Cursor and Windsurf, these tools are no longer just “nice-to-haves.” They are integral parts of your software supply chain and the software development lifecycle (SDLC).
However, with great productivity comes new, often overlooked risks. As AI agents move from experimental toys to production-grade tools, they are increasingly defined through declarative configuration files that live directly in your codebase. To most developers, these look like harmless metadata. In reality, they define the AI system’s attack surface.
The problem: When configs become code
AI agents generally exist in three forms:
- Dev-time assistants (e.g., Cursor, Claude Code, Windsurf)
- Declarative runtime agents (e.g., OpenClaw)
- Code-defined orchestration agents (e.g., LangGraph, custom Python)
In the first two categories, agent behavior, including system prompts, tool permissions, model settings, and integrations, is governed by version-controlled configuration files. Because these files control what the AI can see and do, a misconfiguration is more than a bug; it’s a security hole.
Without proper scanning, a malicious or poorly written configuration can enable:
- Code Execution: Instructions that inadvertently allow the agent to run
sudoorevalcommands. - Data Exfiltration: Prompts that trick the agent into uploading
.envfiles or SSH keys to external webhooks. - Policy Bypass: Instructions that tell the agent to “ignore previous safety guidelines” or “auto-approve all PRs.”
Treating agents as infrastructure
We believe AI security should follow the same discipline as Infrastructure as Code (IaC). Just as you wouldn’t deploy a Terraform script without scanning for open S3 buckets, you shouldn’t commit a .cursorrules or CLAUDE.md file without checking for risky patterns.
Our new capability analyzes these files to detect issues and suggest immediate hardening and remediation.
Comprehensive security checks
The Mend AI Scanner now includes dedicated security checks designed specifically for agentic configurations:
| Risk Category | Detection Focus |
|---|---|
| Prompt Injection | Role hijacking, “Ignore instructions” bypasses. |
| Command Execution | Unsafe usage of curl, bash, pip install, or eval. |
| File Exfiltration | Attempts to read sensitive files like .env or keychains. |
| Credential Access | Instructions to echo or extract API keys and passwords. |
| Network Exfiltration | Data upload instructions to sites like webhook.site or ngrok. |
| Permission Escalation | Wildcard permissions or auto-approval of critical tasks. |
| Obfuscated Content | Base64 payloads or unicode tricks meant to hide malicious intent. |
Supported ecosystems
The initial release covers a wide range of the most popular AI agent and assistant frameworks:
- Cursor
- Claude Code
- GitHub Copilot
- Windsurf
- OpenClaw
Integration with the Mend.io platform
Security isn’t just about finding vulnerabilities; it’s about managing them. Findings from the AI Scanner CLI are seamlessly integrated into the Mend Platform.
- Agent Configuration Findings Grid: View all identified risks across your entire portfolio in a centralized view.
- Detailed Findings Drawer: Get deep-dive context on every hit, including why it was flagged and how to remediate the configuration to meet security standards.
Getting started
We are rolling this feature out as part of Mend Forge, Mend.io’s innovation playground, where upcoming ideas, early features, and experimental projects take shape before they’re released into the wild. During this phase, we are prioritizing rapid deployment to match the AI space’s momentum, with deep benchmarking and additional validation planned as we gather customer feedback.
To see the Agent Configuration Scanning in action, visit Mend Forge or request a demo, and we’ll walk you through a live scan and a remediation workflow.