Best AI Red Teaming Companies: Top 10 Providers in 2025
AI red teaming companies help software and security teams better understand how their AI applications behave under adversarial attack.
Top AI Red Teaming Solutions and How to Choose
Learn what AI red teaming solutions solve, how they work, and how to choose the right fit.
Vector and Embedding Weaknesses in AI Systems
Learn how to secure embeddings against poisoning, leakage, and inversion attacks.
AI Governance in AppSec: The More Things Change, The More They Stay the Same
Learn how AppSec teams can extend existing security and compliance practices seamlessly to AI.
Introducing Mend AI Premium
Robust AI governance and threat detection with Mend AI Premium.
Securing AI vs AI for security: What are we talking about?
This post breaks down the differences between securing AI, secure AI use, AI for security, and AI safety.
2025 OWASP Top 10 for LLM Applications: A Quick Guide
An overview of the top vulnerabilities affecting large language model (LLM) applications.
All About RAG: What It Is and How to Keep It Secure
Learn about retrieval-augmented generation (RAG), a complex AI system that developers are increasingly using.
Shining a Light on Shadow AI: What It Is and How to Find It
Find out more about shadow AI and the risks of leaving it uncovered.
Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code
Worried about attackers using AI models to write malicious code? Here are three other ways AI model use can lead to attacks.
Quick Guide to Popular AI Licenses
Not all "open" AI licenses are truly open source. Learn more about the most popular licenses on Hugging Face.
Responsible AI Licenses (RAIL): Here's What You Need to Know
Learn about this family of licenses that seek to limit harmful use of AI models.
How Do I Protect My AI Model?
Learn essential strategies to secure your AI models from theft, denial of service, and other threats, covering copyright issues, risk management, and secure storage practices.
OWASP Top 10 for LLM Applications: A Quick Guide
Discover the OWASP Top 10 for LLM Applications in this comprehensive guide. Learn about vulnerabilities and prevention techniques.
What You Need to Know About Hugging Face
Stay informed about the risks and challenges of AI models on Hugging Face. Learn how to identify and secure AI-generated code.
Learning From History: AI Gender Bias
Learn about AI gender bias in large language models, how historical data impacts AI, and the implications for women in the health and car safety fields.
See what's next for AI Security Testing and AppSec.