Guides
Protect AI models, data, and systems
Test for behavioral risks in conversational AI
Mitigating risks and future trends
AppSec types, tools, and best practices
Automating dependency updates
Manage open source code
Keep source code safe
Improve transparency, security, and compliance
Pre-production scanning and runtime protection
Secure containerized applications
Securing AI code at the source: Mend.io now integrates with Cursor AI Code Editor
Mend.io now integrates with Cursor to secure AI-generated code in real time
AI Security Guide: Protecting models, data, and systems from emerging threats
Learn how to protect AI systems with practical strategies and security frameworks.
Shadow AI: Examples, Risks, and 8 Ways to Mitigate Them
Uncover the hidden risks of Shadow AI and learn 8 key strategies to address it.
The Growing Challenge of Shadow MCP: Unauthorized AI Connectivity in Your Codebase
MCP adoption is surging across industries, fundamentally reshaping how systems connect to AI models.
Why AI Red Teaming Is the Next Must-Have in Enterprise Security
Learn why red teaming is key to securing today’s enterprise AI systems.
Best AI Red Teaming Providers: Top 5 Vendors in 2025
AI Red Teaming providers are specialized companies that simulate adversarial attacks on AI systems to uncover vulnerabilities, biases, and harmful behaviors before these systems are deployed.
Best AI Red Teaming Companies: Top 10 Providers in 2025
AI Red Teaming companies help software and security teams better understand how their AI applications behave under adversarial attacks.
Top AI Red Teaming Solutions and How to Choose
Learn what AI red teaming solutions solve, how they work, and how to choose the right fit.
Vector and Embedding Weaknesses in AI Systems
Learn how to secure embeddings against poisoning, leakage, and inversion attacks.
AI Governance in AppSec: The More Things Change, The More They Stay the Same
Learn how AppSec teams can extend existing security and compliance practices seamlessly to AI.
Introducing Mend AI Premium
Robust AI governance and threat detection with Mend AI Premium.
Securing AI vs AI for security: What are we talking about?
This post breaks down the differences between securing AI, secure AI use, AI for security, and AI safety.
2025 OWASP Top 10 for LLM Applications: A Quick Guide
An overview of the top vulnerabilities affecting large language model (LLM) applications.
All About RAG: What It Is and How to Keep It Secure
Learn about retrieval-augmented generation, a complex AI architecture that developers are increasingly adopting.
Shining a Light on Shadow AI: What It Is and How to Find It
Find out more about shadow AI and the risks of leaving it unchecked.
Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code
Worried about attackers using AI models to write malicious code? Here are three other ways AI model use can lead to attacks.