Moonshot AI governance breakdown: Lessons from the Cursor/Kimi K2.5 incident

What happens when a $29 billion company forgets to rename a model ID, and what it means for every organization using open-source AI.

On March 19, 2025, Cursor, the AI-powered coding tool valued at $29 billion and generating an estimated $2 billion in annual recurring revenue, launched Composer 2, its newest and most powerful coding model. The announcement was bold: Cursor claimed Composer 2 was built through “continued pre-training of a base model combined with reinforcement learning,” positioning it as a proprietary, in-house breakthrough.

Less than 24 hours later, a developer named Fynn was debugging Cursor’s OpenAI-compatible API endpoint when something unexpected appeared in the response: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast.
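For readers who haven't poked at one of these endpoints: OpenAI-compatible backends include a model identifier in every completion response, and that field is where an internal or upstream model name can leak through. A minimal sketch, using a placeholder base URL, key, and model name rather than Cursor's actual configuration:

```python
# Illustrative only: the URL, key, and model name are placeholders,
# not Cursor's real endpoint or credentials.
import requests

resp = requests.post(
    "https://api.example-editor.invalid/v1/chat/completions",
    headers={"Authorization": "Bearer <YOUR_KEY>"},
    json={
        "model": "composer-2",
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=30,
)

# OpenAI-compatible responses carry a "model" field; this is where a backend's
# internal or upstream model ID can surface to anyone inspecting the payload.
print(resp.json().get("model"))
```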

That model ID wasn’t a Cursor internal name. It was a near-literal description of what Composer 2 actually was: Kimi K2.5, the open-weight model from Chinese AI company Moonshot AI, fine-tuned with reinforcement learning.

The developer’s response was matter-of-fact: “so composer 2 is just Kimi K2.5 with RL. at least rename the model ID.”

What the license actually says

Moonshot AI’s Kimi K2.5 is released under a Modified MIT License: broadly permissive, but with one critical addition that Moonshot AI appears to have written with exactly this scenario in mind.

The license states that any commercial product or service that either:

  • Has more than 100 million monthly active users, or
  • Generates more than $20 million in monthly revenue

…must prominently display “Kimi K2.5” in its user interface.

At its reported $2 billion annual run rate, Cursor’s revenue works out to roughly $167 million per month, exceeding the $20 million monthly threshold by more than 8x. The license requirement was clear, enforceable, and apparently ignored.
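A quick sketch of that math, using the public figures cited above (the function and constants here are illustrative, not language from the license itself):

```python
# Attribution thresholds from the Modified MIT clause described above.
MAU_THRESHOLD = 100_000_000          # monthly active users
REVENUE_THRESHOLD_USD = 20_000_000   # monthly revenue, USD

def attribution_required(monthly_active_users: int, monthly_revenue_usd: float) -> bool:
    """True if either threshold is crossed and 'Kimi K2.5' must be displayed."""
    return (monthly_active_users > MAU_THRESHOLD
            or monthly_revenue_usd > REVENUE_THRESHOLD_USD)

monthly_revenue = 2_000_000_000 / 12                        # ~$167M from a $2B annual run rate
print(attribution_required(0, monthly_revenue))             # True
print(round(monthly_revenue / REVENUE_THRESHOLD_USD, 1))    # ~8.3x the threshold
```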

Yulun Du, Head of Pretraining at Moonshot AI, publicly confirmed that Composer 2’s tokenizer was “completely identical to our Kimi tokenizer,” calling it “almost certainly the result of further fine-tuning of our model.” He directly tagged Cursor’s co-founder and asked: “Why aren’t you respecting our license, or paying any fees?”

This isn’t just a “Cursor problem”

The Cursor/Moonshot incident is a symptom of three massive gaps in how most companies currently handle AI:

  1. No inventory of the AI models in use. Cursor’s engineering team presumably knew which model they built on. But did Legal? Did Compliance? Because Moonshot AI is a Chinese company, many organizations in regulated industries could be violating data sovereignty requirements that legally prohibit sending sensitive data or source code to high-risk jurisdictions such as China. Without a systematic AI-BOM, most companies simply don’t know whether they are inadvertently breaking these laws.
  2. Lack of “Shadow AI” visibility. Just as “Shadow IT” saw employees adopting unauthorized SaaS apps, “Shadow AI” sees developers pulling in new models or AI-powered plugins to stay productive. If your security team can’t see which models are being called from your IDE or your CI/CD pipeline, you are flying blind (a minimal sketch of this kind of check follows this list).
  3. The performance vs. transparency trade-off. High-performing models are often the least transparent. Reinforcement learning, the technique used here to boost performance, often comes at the direct cost of transparency. This “black box” nature is precisely why continuous red teaming is required; you cannot rely on the provider’s documentation alone to understand how a model might behave when exposed to your proprietary data.
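To make the Shadow AI point concrete, here is a deliberately minimal sketch of the kind of check a security team might run: scan a repository for strings that look like model identifiers and flag anything not on an approved list. The allowlist, regex, and file scope are assumptions for illustration; a real inventory would also have to cover API traffic, CI/CD configuration, and IDE plugins.

```python
"""Minimal, illustrative Shadow AI scan: flag model identifiers referenced in
source code that are not on an approved list."""
import pathlib
import re

APPROVED_MODELS = {"gpt-4o", "claude-sonnet-4"}  # hypothetical sanctioned models

# Crude heuristic for quoted strings that look like model IDs.
MODEL_PATTERN = re.compile(
    r'["\']([\w./-]*(?:gpt|claude|llama|kimi|mistral|composer)[\w./-]*)["\']',
    re.IGNORECASE,
)

def scan(repo_root: str) -> None:
    for path in pathlib.Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in MODEL_PATTERN.finditer(text):
            model_id = match.group(1)
            if model_id.lower() not in APPROVED_MODELS:
                print(f"Unapproved model reference {model_id!r} in {path}")

if __name__ == "__main__":
    scan(".")
```

A script like this is a starting point, not a control: it only answers “what is referenced where,” which is exactly the visibility most teams are missing today.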

How to prevent the next governance failure

To avoid being caught off guard by the next model update or third-party integration, organizations need to move toward Automated AI Governance.

  • Establish an AI-BOM: Treat AI models like open-source libraries. You need a living document that lists every model, its version, its provider, and its data-handling policies (a minimal sketch of such an entry follows this list).
  • Vulnerability and Risk Tracking: Tools must now scan for more than just “bugs.” They need to evaluate the jurisdictional risk of the model provider and the security posture of the model itself. Because reinforcement learning can introduce unpredictable edge cases, red teaming should be an integrated part of your risk assessment workflow to ensure the model doesn’t leak sensitive patterns.
  • Policy Enforcement: If your company policy forbids sending data to specific regions, that policy should be enforced automatically at the developer tool level, not just buried in a PDF on the company intranet.
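Below is a minimal sketch of what an AI-BOM entry and an automated policy gate could look like. The field names, jurisdiction codes, and rules are assumptions made for illustration; they are not a standard schema or any particular vendor’s implementation.

```python
"""Illustrative AI-BOM entry plus a policy gate that could run in CI.
Schema and rules are examples only."""
from dataclasses import dataclass

@dataclass
class AIBOMEntry:
    model_id: str
    version: str
    provider: str
    provider_jurisdiction: str   # country code of the model provider
    license_name: str
    attribution_required: bool   # e.g., display-name clauses in a modified MIT license
    data_sent: str               # "none", "prompts", "source_code", ...

RESTRICTED_JURISDICTIONS = {"CN", "RU"}  # example policy set by your own compliance team

def policy_findings(entry: AIBOMEntry) -> list[str]:
    findings = []
    if entry.data_sent != "none" and entry.provider_jurisdiction in RESTRICTED_JURISDICTIONS:
        findings.append("data routed to a provider in a restricted jurisdiction")
    if entry.attribution_required:
        findings.append("license requires prominent attribution; verify the product UI shows it")
    return findings

example = AIBOMEntry(
    model_id="example-open-weight-model",
    version="1.0",
    provider="Example Labs",
    provider_jurisdiction="CN",
    license_name="Modified MIT",
    attribution_required=True,
    data_sent="source_code",
)
print(policy_findings(example))
```

Wiring a check like this into the merge pipeline is what turns the policy from a PDF on the intranet into something developers actually hit before shipping.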

What Mend.io does differently

At Mend.io, we’ve built our platform specifically to account for the reality that AI models are now first-class components of the software supply chain and carry their own distinct risk profile.

AI model inventory and AI-BOM: Mend AI automatically discovers models, frameworks, agents, and RAG pipelines embedded in your applications, including “Shadow AI” that teams didn’t officially sanction. Every discovered model is tracked in a continuously updated AI Bill of Materials, giving security, legal, and compliance teams real-time visibility into what’s actually in use.

Vulnerability and risk tracking for AI models: The risk profile of an AI model doesn’t end at licensing. The Mend.io platform also scans for known security vulnerabilities in OSS models and evaluates risks from malicious packages, bringing the same rigor to AI components that mature organizations already apply to open-source software.

Policy enforcement and governance: Mend.io enables organizations to define and enforce policies around AI component usage, blocking or escalating based on license risk, known vulnerabilities, malicious model detection, and compliance gaps. It’s not just visibility; it’s control.

License compliance for OSS AI models: Mend.io’s platform includes license compliance capabilities specifically for open-source AI models sourced from Hugging Face, Kaggle, and other repositories. License terms are surfaced and tracked automatically, including the kind of modified MIT conditions that tripped up Cursor. When a model’s license has specific commercial usage thresholds or attribution requirements, Mend.io flags them before they become a legal liability.

The takeaway

Cursor didn’t intend to expose a hidden model ID to anyone who poked at their API. And they likely didn’t intend to violate Kimi K2.5’s license terms. But intent doesn’t matter when the violation is public, the license is clear, and your annual revenue is $2 billion.

As your organization accelerates AI adoption, whether you’re building AI-powered products or integrating open-weight models into internal tools, the question isn’t whether your AI components carry license and compliance obligations. They do. The question is whether you have the visibility and governance infrastructure to know about them before a developer, journalist, or regulator finds out for you.

That’s exactly what we built at Mend.io.
