Mend.io AI Security

Securing AI code at the source: Mend.io now integrates with Cursor AI Code Editor

Mend.io now integrates with Cursor to secure AI-generated code in real time.

AI Security Guide: Protecting models, data, and systems from emerging threats

Learn how to protect AI systems with practical strategies and security frameworks.

Shadow AI: Examples, Risks, and 8 Ways to Mitigate Them

Uncover the hidden risks of Shadow AI and learn 8 key strategies to address it.

The Growing Challenge of Shadow MCP: Unauthorized AI Connectivity in Your Codebase

MCP adoption is surging across industries, fundamentally reshaping how systems connect to AI models.

Why AI Red Teaming Is the Next Must-Have in Enterprise Security

Learn why red teaming is key to securing today’s enterprise AI systems.

Best AI Red Teaming Providers: Top 5 Vendors in 2025

AI Red Teaming providers are specialized companies that simulate adversarial attacks on AI systems to uncover vulnerabilities, biases, and harmful behaviors before these systems are deployed.

Best AI Red Teaming Companies: Top 10 Providers in 2025

AI Red Teaming companies help software and security teams better understand how their AI applications behave under adversarial attacks.

Top AI Red Teaming Solutions and How to Choose

Learn what AI red teaming solutions solve, how they work, and how to choose the right fit.

Vector and Embedding Weaknesses in AI Systems

Learn how to secure embeddings against poisoning, leakage, and inversion attacks.

AI Governance in AppSec: The More Things Change, The More They Stay the Same

Learn how AppSec teams can extend existing security and compliance practices seamlessly to AI.

Introducing Mend AI Premium

Robust AI governance and threat detection with Mend AI Premium.

Securing AI vs AI for security: What are we talking about?

This post breaks down the differences between securing AI, secure AI use, AI for security, and AI safety.

2025 OWASP Top 10 for LLM Applications: A Quick Guide

An overview of the top vulnerabilities affecting large language model (LLM) applications.

All About RAG: What It Is and How to Keep It Secure

Learn about retrieval-augmented generation (RAG), a complex AI system that developers are using.

Shining a Light on Shadow AI: What It Is and How to Find It

Find out more about shadow AI and the risks of leaving it undetected.

Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code

Worried about attackers using AI models to write malicious code? Here are three other ways AI model use can lead to attacks.
