Vulnerability Research: Here’s How It Works at Mend.io

Security research spans a broad range of domains—from analyzing advanced malware and ransomware behavior to uncovering the latest distributed denial-of-service (DDoS) techniques. At Mend, our research is focused specifically on vulnerabilities in open-source software. This specialized focus shapes the tools, methods, and mindsets our researchers bring to the table.
Since October is Cybersecurity Awareness Month, it’s a great time to share how our vulnerability research team operates. Raising awareness of how these vulnerabilities are found and validated not only supports better open-source hygiene, but also helps security professionals understand how research contributes to safer code across the ecosystem.
Why Open Source Vulnerability Research Matters
Open source is the backbone of modern software. According to the Synopsys Open Source Security and Risk Analysis (OSSRA) report, over 90% of codebases include open-source components. But these components are not always rigorously maintained or secured. Vulnerabilities in popular packages can affect thousands of downstream projects.
That’s why we focus our research efforts on uncovering these issues early, validating their exploitability, and contributing insights to vulnerability databases and developer communities. Unlike traditional exploit researchers, we rarely deal with assembly code or binary reverse engineering. Instead, our goal is to find weaknesses in source code before they can be exploited in the wild.
Two Core Approaches to Vulnerability Discovery
Static Code Analysis Using Security Tools
One of our primary research practices involves leveraging static application security testing (SAST) tools. These tools systematically scan code to identify insecure patterns, logic flaws, or input validation issues without executing the code.
Beyond detection, SAST tools allow us to build a deeper understanding of code flow across complex and distributed applications. In today’s software stacks, a single application may contain hundreds of thousands of lines of interdependent code. Tools like these help accelerate the analysis process, allowing researchers to pinpoint high-risk areas with greater accuracy.
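To give a flavor of what that detection looks like, here is a minimal, self-contained sketch (an illustrative toy, not our production tooling) of the kind of pattern matching a SAST engine performs: it walks a Python syntax tree without ever executing the code and flags a couple of classically dangerous constructs.

```python
# Toy illustration of SAST-style detection: walk a Python AST
# (without executing the code) and flag calls to eval()/exec()
# and subprocess calls built with shell=True.
import ast

SOURCE = '''
import subprocess

def run(user_input):
    subprocess.call("ping " + user_input, shell=True)  # command injection risk
    return eval(user_input)                            # arbitrary code execution
'''

class InsecureCallVisitor(ast.NodeVisitor):
    def __init__(self):
        self.findings = []

    def visit_Call(self, node):
        # Flag direct eval()/exec() calls.
        if isinstance(node.func, ast.Name) and node.func.id in ("eval", "exec"):
            self.findings.append((node.lineno, f"use of {node.func.id}()"))
        # Flag any call that passes shell=True (e.g., subprocess.call).
        for kw in node.keywords:
            if kw.arg == "shell" and isinstance(kw.value, ast.Constant) and kw.value.value is True:
                self.findings.append((node.lineno, "subprocess call with shell=True"))
        self.generic_visit(node)

visitor = InsecureCallVisitor()
visitor.visit(ast.parse(SOURCE))
for lineno, message in visitor.findings:
    print(f"line {lineno}: {message}")
```

Production SAST engines go far beyond this, tracking data flow from untrusted sources to dangerous sinks across files and modules, but the core idea is the same: match risky structure without running the code.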
This systematic method is foundational to how we scale our research efforts across widely used open-source ecosystems.
Manual, Experience-Led Investigation
While tooling provides a strong baseline, manual review remains essential. Our researchers often rely on intuition, deep domain expertise, and institutional memory to identify vulnerabilities that automated tools might miss.
This investigative method is especially powerful in edge cases. For instance, one of our team members might recognize a subtle change in a library update that could reintroduce a known issue, even if scanners don’t yet flag it. Or a colleague might notice that a community patch doesn’t fully cover the threat models discussed at a recent industry conference.
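To illustrate the first scenario with a deliberately simplified, hypothetical example: a one-line “fix” that strips traversal sequences from a user-supplied path can look perfectly reasonable in a diff, yet a single-pass replacement is trivially bypassed, quietly reintroducing the original issue. An experienced reviewer will recognize the bypass on sight.

```python
# Hypothetical illustration (not a real library): a patch that strips
# "../" from user-supplied paths looks safe, but str.replace runs only
# one pass, so crafted input reassembles the traversal sequence.
import os

def sanitize_v1(user_path: str) -> str:
    # The "patched" version: remove traversal sequences... once.
    return user_path.replace("../", "")

payload = "....//....//etc/passwd"
print(sanitize_v1(payload))  # -> "../../etc/passwd": the traversal survives

# A more robust approach resolves the path and checks containment.
def safe_join(base: str, user_path: str) -> str:
    resolved = os.path.realpath(os.path.join(base, user_path))
    if not resolved.startswith(os.path.realpath(base) + os.sep):
        raise ValueError("path traversal attempt")
    return resolved
```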
Manual review enables us to apply nuanced judgment—something that even the most sophisticated tools still struggle to replicate.
Verifying Real-World Impact
Regardless of whether a vulnerability is discovered through automation or manual review, we treat validation as critical. To confirm that an issue can be realistically exploited, we test it dynamically.
This involves executing the vulnerable code in a controlled environment and attempting to inject malicious payloads to assess potential impact. Our goal is not only to confirm the presence of a flaw but to demonstrate how it can be used in practice—whether for privilege escalation, data exfiltration, or remote code execution.
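As a simplified illustration of what such a proof of concept might look like, the sketch below uses a hypothetical vulnerable function and an in-memory SQLite database standing in for the controlled environment: benign input behaves normally, while an injected payload exfiltrates data, turning a suspected flaw into a demonstrated one.

```python
# Minimal sketch of dynamic validation. The vulnerable function and
# payload are hypothetical; an in-memory SQLite database serves as
# the controlled test environment.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user(name: str):
    # Vulnerable: user input is concatenated directly into the query.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# Benign input behaves as expected.
assert find_user("alice") == [("alice",)]

# The injected payload changes the query's meaning and leaks passwords,
# confirming the flaw is exploitable rather than merely theoretical.
payload = "' UNION SELECT password FROM users --"
print(find_user(payload))  # -> [('s3cret',)]
```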
Verification also informs how we communicate findings to the broader community, allowing us to offer mitigation advice backed by real-world behavior, not just static assumptions.
Emerging Frontiers: AI and Big Data in Vulnerability Research
The software landscape is evolving rapidly, and so are our research methods. We’re exploring how artificial intelligence and large-scale data processing can accelerate vulnerability discovery in open-source codebases.
Machine learning models can help identify code patterns that have historically led to vulnerabilities. Combined with data from known CVEs, GitHub commits, and community forums, these techniques could augment both the scale and depth of vulnerability research in the years to come.
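As a toy illustration of the idea (not a description of our pipeline), the sketch below trains a simple scikit-learn classifier on an invented eight-snippet dataset to score code fragments by their resemblance to patterns that have historically been vulnerable. Real systems would train on thousands of labeled commits and CVE-linked diffs.

```python
# Toy sketch: learn surface patterns that correlate with past
# vulnerabilities. The tiny labeled dataset here is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled snippets: 1 = led to a vulnerability, 0 = benign.
snippets = [
    ("cursor.execute('SELECT * FROM t WHERE id=' + user_id)", 1),
    ("subprocess.call(cmd, shell=True)", 1),
    ("pickle.loads(request.data)", 1),
    ("yaml.load(stream)", 1),
    ("cursor.execute('SELECT * FROM t WHERE id=?', (user_id,))", 0),
    ("subprocess.run(['ls', path])", 0),
    ("json.loads(request.data)", 0),
    ("yaml.safe_load(stream)", 0),
]
texts, labels = zip(*snippets)

# Character n-grams pick up API names and call shapes without a parser.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(texts, labels)

candidate = "pickle.loads(open(p, 'rb').read())"
print(model.predict_proba([candidate])[0][1])  # estimated risk score
```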
There’s early work being done in this space, including efforts from the OpenSSF and experimental research on automated bug triage using AI models. While still nascent, this area holds promise for scaling the manual intuition of expert researchers through data-informed insights.
Looking Ahead
This overview presents just a glimpse into how vulnerability research works at Mend. The approaches we use—combining automated tools, hands-on expertise, and dynamic validation—reflect a philosophy rooted in both speed and responsibility.
As the use of open source continues to grow, so too does the importance of early, accurate vulnerability detection. Through this work, we hope to contribute not only to stronger software, but to a more transparent and resilient development ecosystem.
For more on how vulnerability data gets structured and shared, check out resources like the MITRE CVE Program and the National Vulnerability Database (NVD). And if you’re interested in our approach to supply chain security, stay tuned for more from the Mend research team this Cybersecurity Awareness Month.