
NPM Supply Chain Attack: Sophisticated Multi-Chain Cryptocurrency Drainer Infiltrates Popular Packages

Latest from the Mend.io blog

Hallucinated Packages, Malicious AI Models, and Insecure AI-Generated Code

Worried about attackers using AI models to write malicious code? Here are three other ways AI model use can lead to attacks.

In Modern AppSec, DevSecOps Demands Cultural Change

Learn about the cultural change needed for DevSecOps in modern AppSec. Discover how collaboration and innovation can improve security.

Quick Guide to Popular AI Licenses

Not all "open" AI licenses are truly open source. Learn more about the most popular licenses on Hugging Face.

NVD Update: Help Has Arrived

There's hope yet for the world's most beleaguered vulnerability database.

Threat Hunting 101: Five Common Threats to Look For

Learn more about supply chain threats and where to find them.

Responsible AI Licenses (RAIL): Here’s What You Need to Know

Learn about this family of licenses that seek to limit harmful use of AI models.

NVD Update: More Problems, More Letters, Some Questions Answered

We're not saying the NVD is dead, but it's not looking good.

Getting Started with Software Dependency Management

Discover the benefits of keeping your software dependencies up-to-date. Learn how to manage dependencies effectively.

Mend.io and Sysdig Launch Joint Solution for Container Security

Learn how the Mend.io and Sysdig integration boosts container security by combining runtime insights and vulnerability prioritization.

How Do I Protect My AI Model?

Learn essential strategies to secure your AI models from theft, denial of service, and other threats, covering copyright issues, risk management, and secure storage practices.

Quick Guide to the OWASP OSS Risk Top 10

Learn about the top 10 risks of open source software beyond just CVEs, from known vulnerabilities to unapproved changes.

Why an SBOM Audit is Vital to Application Security and Compliance

Learn the importance of an SBOM for enhancing application security and compliance. Explore best practices for creating and auditing SBOMs effectively.

What Makes Containers Vulnerable?

Learn about the vulnerabilities that containers bring to your applications and how to address them to keep attackers at bay.

NVD’s Backlog Triggers Public Response from Cybersec Leaders

The National Vulnerability Database's backlog triggers a public response from cybersecurity leaders, with concerns raised in an open letter to Congress.

OWASP Top 10 for LLM Applications: A Quick Guide

Discover the OWASP Top 10 for LLM Applications in this comprehensive guide. Learn about vulnerabilities and prevention techniques.

What You Need to Know About Hugging Face

Stay informed about the risks and challenges of AI models with Hugging Face. Learn how to identify and secure AI-generated code.
