The Common Vulnerability Scoring System (CVSS) is the leading standard for rating the severity of vulnerabilities in software components. Maintained by the Forum of Incident Response and Security Teams (FIRST), CVSS aims to give the community of security professionals a common yardstick for understanding the vulnerabilities in the software components they use in their products.
Under CVSS v3, vulnerabilities are rated according to three groups of standard quantitative metrics. First, the Base metrics assess how exploitable and damaging the vulnerability is, scoring the attack vector (AV), attack complexity (AC), privileges required (PR), user interaction (UI), and more. Second, the Temporal metrics allow the rating to change over time, taking into account how mature the exploit code is and which fixes are available. Finally, the Environmental metrics capture the security requirements for the basic aspects of security (confidentiality, integrity, and availability) of the affected data or system, along with any modifications that should be made to the Base metrics given the vulnerability's impact in that particular environment.
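To make the Base metrics concrete, here is a minimal Python sketch of the CVSS v3.1 Base Score formula for the common Scope: Unchanged case, using the metric weights published in the specification (the Changed-scope variant and the Temporal and Environmental adjustments are omitted to keep the sketch short):

```python
# Metric weights from the CVSS v3.1 specification (Scope: Unchanged only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (Scope: Unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x):
    """Round up to one decimal place, per CVSS v3.1 Appendix A."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 Base Score for a Scope: Unchanged vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))
```

The well-known vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H, for example, works out to 9.8, which falls in the Critical bracket.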
CVSS v2 broke vulnerability severities down into three categories based on the Base Score: Low, Medium, and High. As is the norm in the development community, there were complaints aplenty that CVSS v2 was not up to the task. This led to the development of CVSS v3, which attempts to add more nuance, including a Critical category for vulnerabilities that fall into the 9.0-10.0 bracket, as well as new scoring metrics such as Scope (S) and User Interaction (UI).
Ultimately, this system is meant to help teams assess their level of risk and prioritize their remediation operations accordingly. Gradations like these matter in any warning system. Think of storm warnings, where different threat levels call for different responses. Without a rubric that lets responders calibrate their reaction, the basic running of day-to-day operations would be untenable: everyone would be stuck putting out fires, a recipe for rapid burnout. So a rating structure of this kind is essential, even if it is imperfect and in constant need of fine-tuning and tweaks.
The good folks over at FIRST appear to be trying to address the problem of security teams bending under the weight of more vulnerability alerts than they can reasonably handle.
Simply put, when so many alerts were rated High under CVSS v2, a vulnerability scoring 7.0 could be lumped in with one scoring 9.5 or 10, making prioritization difficult. CVSS v3 adds new metrics like Scope and User Interaction in the hope of giving security teams more relevant data about their actual level of risk and helping them prioritize.
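The shift in severity bands can be shown in a few lines. The cutoffs below are the published qualitative mappings for v2 (as used by the NVD) and v3; the point is that v2 lumps a 7.0 and a 10.0 together as High, while v3 separates them:

```python
def severity_v2(score):
    """CVSS v2 three-band qualitative mapping (NVD convention)."""
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    return "High"

def severity_v3(score):
    """CVSS v3 qualitative severity bands from the specification."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Under v2, `severity_v2(7.0)` and `severity_v2(10.0)` both return `"High"`; under v3, the same scores map to `"High"` and `"Critical"` respectively.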
However, we would argue that while these metrics are interesting, they miss the mark when it comes to understanding how open source components can impact the security of a product.
We know that many organizations are dealing with so many alerts that they only make time for the supposedly Critical ones. This leaves many other potentially risky, High-rated vulnerabilities unremediated and a continued threat to the product. FIRST is trying to paint a clearer picture, but risks encouraging organizations to ignore vulnerabilities that deserve their attention.
What we would argue is that it is more important for security professionals to understand whether the functionalities of a given vulnerable component are actually effective, and therefore have an impact on their proprietary code, than to rely solely on how dangerous a vulnerability could be.
We know that when developers take an open source component from sites like GitHub or Maven Central, they are looking for a specific function or feature from a certain library. However, that component contains a number of different libraries through its dependencies, all cobbled together over time by different developers who have worked on the component.
For their part, developers do not care whether there are additional functionalities in the component. They simply take the whole component “as is” and incorporate it into their proprietary code. This is a more efficient way of working, and a far cry better than the old “cut and paste” method of the past.
What we need to understand, though, is that not all functionalities in a component are in fact effective, meaning that just because a component is deemed vulnerable, it does not follow that the proprietary code is at risk. Based on our research into Java projects, we know that at least 70% of functionalities are not actually effective. The reported vulnerabilities are indeed present, not to be confused with false positives, and should be addressed, but they are not an immediate concern.
The problem for security teams and developers arises when their components are checked for known vulnerabilities, since a component can be flagged as risky if it contains even a single functionality with an associated vulnerability.
Only Mend offers the capability to understand whether a functionality is effective, and can show developers exactly where and how a vulnerable effective functionality impacts their code through performing a trace analysis. We call this technology Effective Usage Analysis, and consider it to be the next generation of Software Composition Analysis (SCA).
This knowledge of effective vs. ineffective vulnerabilities significantly changes how we think about prioritization. Even if a component has a CVSS v3 score of 10, that does not mean it needs to be pushed to the front of the line if the vulnerable functionality is deemed ineffective. Is the vulnerability real? Yes, but it does not impact our proprietary code, and therefore should not sit at the top of our to-do list.
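As a toy illustration of this triage logic (not Mend's actual analysis), consider sorting alerts so that effective vulnerabilities come first, ordered by score. The `effective` flag and the CVE names here are hypothetical inputs; in practice the flag would come from a reachability or trace analysis:

```python
# Hypothetical alert data: in a real pipeline, "effective" would be produced
# by a reachability/trace analysis of the proprietary code's call paths.
alerts = [
    {"cve": "CVE-A", "score": 10.0, "effective": False},
    {"cve": "CVE-B", "score": 7.4,  "effective": True},
    {"cve": "CVE-C", "score": 9.1,  "effective": True},
]

# Effective vulnerabilities first (highest score first); ineffective ones,
# even a perfect 10.0, drop to the bottom of the queue.
triaged = sorted(alerts, key=lambda a: (not a["effective"], -a["score"]))
```

With this ordering, the Critical-but-ineffective CVE-A lands last, behind two lower-scored but effective vulnerabilities.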
Prioritization based on real knowledge of where your organization is most at risk can play a big role in the proper allocation of resources, and even in motivating your team. By giving them automated tools that provide a clear answer on what is impacting their product, and what is not, they can direct their efforts to the most important places and know that their time is being well spent.
FIRST will continue to adjust the CVSS standards moving forward, and should be applauded for those efforts. But from a practical point of view, until organizations truly understand which open source components actually impact their proprietary code’s security, these will remain incremental measures.