
You can’t rely on open source for security – not even when AI is involved


Open source libraries, packages, and models power nearly every product team today. They accelerate development, democratize innovation, and let teams stand on the shoulders of giants. But there’s a dangerous assumption creeping into engineering orgs: that open source — or AI trained on open source — will keep your software safe.

That assumption is wrong. Open source gives you speed and community, not guaranteed security. And when teams start treating AI as an automated security auditor or a source of “secure” code, they add a new set of failure modes on top of the old ones.

Here’s why that matters and what modern engineering teams should do instead.

Why open source alone is not a security strategy

1. Maintainers aren’t a security SLA.
Many projects are maintained by small teams or single volunteers. A critical bug or exploit can sit unfixed for months. Security requires consistent ownership and SLAs — things that open source projects rarely provide by default.

2. Hidden transitive risk.
Your app’s direct dependency might be fine, but its dependencies’ dependencies? Not so much. Transitive packages multiply the attack surface and are often overlooked during triage.

3. Supply chain attacks are real and rising.
From typosquatted packages to malicious updates, package ecosystems have proven vulnerable. An attacker doesn’t always need to break cryptography — they only need to introduce a tiny, useful-looking change in a widely used library.

4. Outdated or abandoned code.
Popular projects get forked, deprecated, or receive fewer security reviews over time. “Works in development” doesn’t mean “safe in production.”

5. Licensing and provenance blindspots.
Beyond code quality, there are legal and provenance issues. Without clear provenance, it’s hard to prove a dependency is safe to ship.

Why “AI” doesn’t magically solve these problems

AI tools are incredible at pattern matching and at generating or suggesting code. But that doesn’t mean AI eliminates the security problem; it changes it, and in some cases magnifies it.

  • Training data contains vulnerabilities. Models trained on public code can reproduce insecure patterns, leaking vulnerable snippets or insecure idioms.
  • Hallucinations and invented fixes. Code-suggesting models can hallucinate plausible-looking code that doesn’t actually solve the problem or introduces new vulnerabilities.
  • Provenance is blurry. If an AI suggests a dependency or patch, who vouches for its source or audit trail? You still need provenance and an audit.
  • Poisoning and adversarial risks. Models and datasets aren’t immune to manipulation. Attackers can influence training inputs or model behavior in subtle ways.
  • Scale of change outpaces vetting. AI accelerates change. Faster change without stronger guardrails increases the chance an insecure change reaches production.

In short: AI can be a powerful assistant, but it’s not a replacement for rigorous security processes and systems built to manage open-source and supply-chain risk.

What teams should do instead

Treat open source and AI as speed multipliers, not security guarantees. Operationalize security with systems and processes that assume components are potentially untrusted.

1. Build and maintain an SBOM (software bill of materials).
Know what you run — direct and transitive dependencies. An SBOM is the foundation for all sane remediation work.
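As a rough illustration, here is a minimal Python sketch that reads a CycloneDX-format SBOM and separates direct from transitive dependencies. The file path is a placeholder, and the field names assume the CycloneDX JSON layout (`metadata.component`, `components`, `dependencies`); adapt it to whatever SBOM tool and format you actually use.

```python
import json

def load_sbom(path):
    """Load a CycloneDX JSON SBOM produced by your SBOM tool of choice."""
    with open(path) as f:
        return json.load(f)

def split_direct_and_transitive(sbom):
    """Classify components as direct or transitive using the dependency graph.

    Assumes the CycloneDX 'dependencies' section is present and the root
    application is recorded in 'metadata.component'.
    """
    root_ref = sbom["metadata"]["component"]["bom-ref"]
    graph = {d["ref"]: d.get("dependsOn", []) for d in sbom.get("dependencies", [])}
    direct = set(graph.get(root_ref, []))
    all_refs = {c["bom-ref"] for c in sbom.get("components", [])}
    transitive = all_refs - direct
    return direct, transitive

if __name__ == "__main__":
    sbom = load_sbom("bom.json")  # illustrative path
    direct, transitive = split_direct_and_transitive(sbom)
    print(f"{len(direct)} direct, {len(transitive)} transitive components")
```

Even this small inventory split is useful: the transitive count is usually the surprising number, and it is the one most teams never look at.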

2. Prioritize by real impact, not raw CVE count.
Not all vulnerabilities matter equally to your product. Prioritize fixes based on usage, exposure, and business impact.
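One way to make “impact over raw CVE count” concrete is to weight each finding by whether the vulnerable code is actually reachable, whether the service is internet-exposed, and how critical the asset is. The weights and field names in this sketch are illustrative, not a standard formula; the point is that a reachable, exposed medium outranks an unreachable critical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    cvss: float                 # base severity, 0-10
    reachable: bool             # is the vulnerable code path actually called?
    internet_exposed: bool
    business_criticality: int   # 1 (low) to 3 (high), assigned per service

def priority_score(f: Finding) -> float:
    """Blend severity with real-world exposure; weights are illustrative."""
    score = f.cvss
    score *= 2.0 if f.reachable else 0.5
    score *= 1.5 if f.internet_exposed else 1.0
    score *= f.business_criticality
    return score

findings = [
    Finding("libfoo", cvss=9.8, reachable=False, internet_exposed=False, business_criticality=1),
    Finding("libbar", cvss=6.5, reachable=True, internet_exposed=True, business_criticality=3),
]

# The medium-severity but reachable, exposed finding outranks the "critical" one.
for f in sorted(findings, key=priority_score, reverse=True):
    print(f.package, round(priority_score(f), 1))
```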

3. Shift-left with policy enforcement in CI/CD.
Block risky changes from reaching production with automated checks and policy gates that are practical, not noisy.
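A policy gate can be as simple as a script that runs in CI, compares the dependency manifest on the PR branch against the base branch, and exits non-zero to fail the pipeline. The denylist, threshold, and file paths below are placeholders; in practice the policy lives in a versioned file owned by the security team.

```python
import json
import sys

DENYLIST = {"event-stream", "left-pad-typo"}  # illustrative denylist
MAX_NEW_PACKAGES = 5  # flag PRs that suddenly pull in many new packages

def load_lock(path):
    """Read an npm package-lock.json (v2+) and return the set of package names."""
    with open(path) as f:
        lock = json.load(f)
    names = set()
    for key in lock.get("packages", {}):
        if key:  # skip the root entry ("")
            names.add(key.split("node_modules/")[-1])
    return names

def check(base_lock: str, pr_lock: str) -> list[str]:
    before, after = load_lock(base_lock), load_lock(pr_lock)
    new = after - before
    violations = [f"denylisted package added: {p}" for p in new & DENYLIST]
    if len(new) > MAX_NEW_PACKAGES:
        violations.append(f"{len(new)} new packages added in one change; needs review")
    return violations

if __name__ == "__main__":
    problems = check("base/package-lock.json", "pr/package-lock.json")  # illustrative paths
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)
```

Keeping the rules few and high-signal is what makes the gate practical instead of noisy.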

4. Automate safe remediations — but validate them.
Automation should create PRs, run tests, and include human-in-the-loop checks when necessary. Never treat an automated change as “done” until CI, tests, and owners sign off.
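The “validated, not just automated” rule can itself be enforced as a merge gate. This sketch uses the GitHub REST API via `requests` (an assumption; adapt the endpoints to your platform) and only reports a remediation PR as safe to merge when every check run is green and a human owner has approved. The repository name is illustrative, and pagination is ignored for brevity.

```python
import os
import requests

API = "https://api.github.com"
REPO = "acme/payments-service"   # illustrative repository
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def ci_green(sha: str) -> bool:
    """True only if every check run on the remediation commit succeeded."""
    r = requests.get(f"{API}/repos/{REPO}/commits/{sha}/check-runs", headers=HEADERS)
    r.raise_for_status()
    runs = r.json()["check_runs"]
    return bool(runs) and all(run["conclusion"] == "success" for run in runs)

def owner_approved(pr_number: int) -> bool:
    """Require at least one APPROVED review before an automated fix merges."""
    r = requests.get(f"{API}/repos/{REPO}/pulls/{pr_number}/reviews", headers=HEADERS)
    r.raise_for_status()
    return any(review["state"] == "APPROVED" for review in r.json())

def safe_to_merge(pr_number: int, head_sha: str) -> bool:
    return ci_green(head_sha) and owner_approved(pr_number)
```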

5. Monitor supply-chain signals continuously.
Watch for sudden changes in package ownership, unusual releases, or new transitive dependencies. Early detection beats late reaction.
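Continuous monitoring can start small: snapshot the dependency set (or SBOM) on a schedule and diff it against the previous snapshot, flagging packages that appeared, disappeared, or changed version without anyone asking. The sketch below reuses the CycloneDX field names assumed earlier; the file paths are illustrative.

```python
import json

def components(sbom_path):
    """Return {package name: version} from a CycloneDX JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return {c["name"]: c.get("version", "?") for c in sbom.get("components", [])}

def diff_sboms(yesterday_path, today_path):
    """Flag supply-chain signals worth a human look: new, removed, or changed packages."""
    old, new = components(yesterday_path), components(today_path)
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(n for n in old.keys() & new.keys() if old[n] != new[n])
    return {"added": added, "removed": removed, "version_changed": changed}

if __name__ == "__main__":
    # Paths are illustrative; in practice these come from a scheduled scan.
    report = diff_sboms("sbom-yesterday.json", "sbom-today.json")
    for kind, names in report.items():
        for name in names:
            print(f"{kind}: {name}")
```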

6. Maintain provenance and accountability.
Require commit signing, verified artifacts, and clear ownership so you can trace changes and apply SLAs for critical components.
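A lightweight provenance check you can run in CI is to verify that every commit in the change range carries a valid signature, using plain `git`. A minimal sketch, assuming the signing keys are already in the verifier’s keyring and that `origin/main..HEAD` is the range you care about:

```python
import subprocess
import sys

def commits_in_range(rev_range: str) -> list[str]:
    """List commit SHAs in a range such as 'origin/main..HEAD'."""
    out = subprocess.run(
        ["git", "rev-list", rev_range], capture_output=True, text=True, check=True
    )
    return out.stdout.split()

def is_signed(sha: str) -> bool:
    """git verify-commit exits non-zero if the commit has no valid signature."""
    result = subprocess.run(["git", "verify-commit", sha], capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    unsigned = [sha for sha in commits_in_range("origin/main..HEAD") if not is_signed(sha)]
    for sha in unsigned:
        print(f"unsigned or unverifiable commit: {sha}")
    sys.exit(1 if unsigned else 0)
```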

Where Mend.io helps — put advice into practice

An interesting video from The Prime Time that breaks down the recent Moltbook incident is making the rounds this week. The video’s premise is right: relying on community goodwill or on the promise of AI will leave gaps. Mend.io is built for exactly that gap, turning detection and insight into accountable, prioritized work so teams don’t just know about risk; they fix it.

With Mend.io, teams can:

  • Discover and visualize your full dependency graph (direct + transitive) so nothing is invisible.
  • Prioritize vulnerabilities by real application impact, not only by CVE severity. This reduces firefighting and focuses engineering effort where it matters most.
  • Convert findings into tracked remediation work that integrates with developer workflows: automated PRs, assignment, and lifecycle tracking so fixes actually land.
  • Enforce CI/CD policies and continuously monitor the supply chain so risky changes never slip into production without review and verification.

These capabilities turn the video’s “good practice” into scalable operations. The goal isn’t to distrust open source; it’s to manage it systematically.

A short checklist to start today

  1. Generate an SBOM for one critical service.
  2. Run a dependency risk scan and map the top 10 highest-impact issues.
  3. Implement one CI policy to block risky transitive updates.
  4. Automate a PR for a low-risk remediation and observe the testing & rollout path.
  5. Establish an SLA for dependency remediation with clear ownership.

Open source and AI give teams enormous advantages, but they were never meant to be your sole security team. Treat them as part of a broader, accountable system: inventory, prioritize by impact, automate remediation responsibly, and enforce trustworthy policies. Do that, and you get the speed of open source without betting your product on it.
