Way back in the day (in software production speak that means three years ago), professionals in our ecosystem were still going back and forth about free and open source software vs. proprietary. Which is better? Which is safer? Which will cost you less in the long run?
For the most part, that raging debate has quieted down.
At this point, we can all agree that organizations have a lot to gain by using open source components, and the questions that are keeping many R&D, legal and security teams awake these days are how to stay secure and compliant.
One aspect that many development teams seem to be marginalizing or even overlooking is an open source component’s quality, and what distinguishes a high-quality component from a low-quality one.
How can we ensure that the components we are using will provide stability and consistency under pressure?
In a webinar with Microsoft and Forrester, Mend polled a group of nearly 200 R&D leaders and influencers from a variety of tech organizations about their open source practices and concerns. One of the questions that we asked participants was what most concerned them about using open source. The results tell us a lot about how the community views priorities regarding open source usage. Nearly 53% said they worry about security issues, 38% said licensing was most concerning, and merely 8% admitted to worrying about quality.
The concern over security is quite understandable, and it’s good to see that application security, prevention, and remediation are on everyone’s mind.
However, it’s not clear why quality is underestimated or even overlooked by so many industry leaders. Does this mean they have complete faith in the open source development community to create only the best code or in their teams’ ability to locate and identify the best open source components?
With all the open source repositories out there, how many developers can really discern the quality of the open source component that they are using?
The reality is that there are no common standards for assessing an open source component’s quality, and the collaborative nature of open source can make it challenging to assess.
As Preethi Thomas of Red Hat points out, in open source, “community involvement is voluntary, people’s skills, levels of involvement, and time commitments can vary,” making quality assurance a challenge. Usually, when choosing a component most developers will go with what they know, and if they’re not sure they might ask a friend. Considering the results of our poll — quality most probably won’t be the first thing on their checklist before choosing a component.
But if we really want to ensure that we incorporate stable and reliable open source projects — what are some of the things we can look at?
This seems like a no-brainer: the more people using an open source component, the more reasons to trust it. Except… remember Heartbleed?
The notorious security vulnerability in OpenSSL was identified two whole years after it was introduced into the codebase, even though by that time OpenSSL was being used by an estimated two-thirds of web servers worldwide.
Since then, a lot of code has been forked on GitHub, and while the open source community and its contributors arguably learned their lessons, I think this proves that the popularity of a project doesn’t promise its quality. Popularity may be one indication of an open source component’s quality, but if we don’t want to find ourselves racing through production with a buggy component, it’s probably not the only parameter we should go by.
In the aftermath of Heartbleed, many in the security community pointed out the fact that OpenSSL, the world’s most widely-used online security solution, was manned by two guys named Steve. Since then, nonprofits and giants like Microsoft, IBM, Facebook, and Google are helping ensure that open source projects are better maintained with increased resources and support.
Does that mean that today, the number of contributors to an open source project tells us about its quality?
The short answer to this question is maybe. Established, long-living projects are supported by large communities of dedicated developers, many of whom have been contributing for years.
That said, going by this parameter alone will only get us so far. Sometimes more people just means more problems — especially when we’re talking about QA. Having a large community is important, but it doesn’t necessarily promise seamless and efficient processes.
What’s better than a large community? An active one.
If you want to talk numbers, a good one to look at is the number of commits to an open source project over a period of time. The commit rate tells us how actively developers are working to improve the code, keeping the latest version one we can trust.
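As a minimal sketch of this metric, the snippet below computes average commits per month over a trailing window. The commit timestamps would normally come from `git log` or a repository API; here they are hypothetical data for illustration.

```python
from datetime import datetime, timedelta

def commits_per_month(commit_dates, months=6):
    """Average commits per month over the trailing `months` window."""
    cutoff = max(commit_dates) - timedelta(days=30 * months)
    recent = [d for d in commit_dates if d >= cutoff]
    return len(recent) / months

# Hypothetical history: one commit per week over the past year.
history = [datetime(2023, 1, 1) + timedelta(weeks=w) for w in range(52)]
print(round(commits_per_month(history), 1))  # roughly 4.3 commits/month
```

A steady or rising rate suggests an actively maintained project; a rate that has collapsed to near zero is a warning sign even if the lifetime commit count looks impressive.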
So you found a popular project, with a big community that is continually working to provide users with a better version. Is that enough to promise a high quality component?
This brings us to the unpleasant subject of bugs. Like the rest of the open source project, bugs and their fixes are public, and can give us a clear view of how diligent the community is at locating issues and providing fixes.
A dev team leader’s nightmare is a long list of bugs, but when we’re looking at an open source project, it’s best to check under the hood. When in doubt, go back to Linus’s Law, which states that given enough eyeballs, all bugs are shallow. A look at an open source project’s bug tracker, and at the rate at which bugs are opened and resolved, can tell us a lot about the project’s quality. We know that in any development project there will be bugs. What’s more important is how shallow they are. Are there enough eyeballs? Are they doing their job?
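Two simple numbers capture much of what a bug tracker can tell us: what share of reported bugs get fixed, and how quickly. The sketch below computes both from hypothetical bug records (opened date, closed date or `None` if still open).

```python
from datetime import datetime

# Hypothetical bug records: (opened, closed_or_None).
bugs = [
    (datetime(2023, 1, 5), datetime(2023, 1, 9)),
    (datetime(2023, 2, 1), datetime(2023, 2, 3)),
    (datetime(2023, 3, 10), None),  # still open
    (datetime(2023, 4, 2), datetime(2023, 4, 20)),
]

def fix_rate(bug_records):
    """Share of reported bugs that have been resolved."""
    closed = [b for b in bug_records if b[1] is not None]
    return len(closed) / len(bug_records)

def mean_days_to_fix(bug_records):
    """Average open-to-close time, in days, for resolved bugs."""
    spans = [(c - o).days for o, c in bug_records if c is not None]
    return sum(spans) / len(spans)

print(fix_rate(bugs))          # 0.75
print(mean_days_to_fix(bugs))  # 8.0
```

A high fix rate with a short time-to-fix is the "enough eyeballs, doing their job" pattern; a tracker full of stale, unanswered reports is the opposite.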
The bug tracker for the open source project you are considering boasts a community with x-ray vision and speed-of-light fixes. Sounds good, right? No arguments here.
But we recommend that you look even closer. How many high-severity bugs are open? How long have they been there? Are people working to fix them? This is another parameter that will help you verify that an open source community is active and committed to delivering users the best product.
At Mend, when we rate the quality of an open source library, we base our scoring on the aggregated value for each open source library version based on three of the parameters I mentioned earlier:
Source Control Activity – the number of commits, as an indicator of the project’s level of activity.
Fix Rating – the number of bugs fixed in each specific version.
Bug Statistics – the number and severity of open bugs for each specific version.
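Mend’s actual scoring formula isn’t spelled out here, but the idea of aggregating the three parameters can be sketched as a toy 0–100 score. The weights, caps, and severity multipliers below are arbitrary assumptions for illustration only.

```python
def quality_score(commits_last_6mo, bugs_fixed, open_bugs_by_severity,
                  commit_cap=200, fix_cap=50):
    """Toy 0-100 quality score combining the three parameters.

    All weights and caps are illustrative, not Mend's real formula.
    """
    # Source control activity, normalized to [0, 1].
    activity = min(commits_last_6mo / commit_cap, 1.0)
    # Fix rating: bugs fixed in this version, normalized to [0, 1].
    fixes = min(bugs_fixed / fix_cap, 1.0)
    # Bug statistics: penalize open bugs, weighted by severity.
    weights = {"high": 3, "medium": 2, "low": 1}
    penalty = sum(weights[sev] * n for sev, n in open_bugs_by_severity.items())
    bug_health = 1.0 / (1.0 + penalty / 10)
    return round(100 * (0.4 * activity + 0.3 * fixes + 0.3 * bug_health), 1)

score = quality_score(120, 35, {"high": 1, "medium": 4, "low": 6})
print(score)  # 56.1
```

The point of the sketch is the shape of the calculation: activity and fixes push the score up, while open bugs, especially high-severity ones, drag it down.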
Check out our dashboard:
It’s safe to say that the debate over whether to use open source components is resolved. Open source components are a great way to build strong products.
However, not all components are created equal, and in order to ensure our products’ quality when using open source, there’s a whole set of parameters that we need to look at.
Luckily, the open source community’s collaborative ethos ensures that everything you need to know about an open source project is out in the open. Who else is using it? Who’s developing it? How active is the project in finding and remediating issues?
But checking with the community is a lot of manual work, and developers just don’t have enough time for such extensive research on each open source component.
When our developers started using the Web Advisor, we weren’t expecting anything to change, since they were already well educated about open source component metrics. Imagine our surprise when we started getting tons of questions from our own developers about the quality index in the Web Advisor.
The process we went through with our own developers taught us a lot about how we should approach this with our customers’ developers. It showed us that an open source project’s quality is not top of mind for most developers, and that we actually need to train them and explain why it’s important to ensure there’s an active community behind any project. This training was even added to our onboarding process.
Another surprise we had was when our team leader reported a significant drop in alerts created by our own tool through our Jenkins plugin. The reduction in alerts, which is associated with our developers using the Web Advisor, proved that offering information about open source components in the developers’ native environment (the browser) while they were selecting the right components enabled them to make better choices the first time around, thus saving a lot of wasted time and resources.
Now, it’s so much easier for our developers to choose the right libraries on Stack Overflow, Maven Central, and more by quickly clicking on the extension right in their browsers. It encourages them to code more confidently, knowing they can trust the components they are using from the get-go.