One of the most important steps of securing your code base, your software, and your applications, is to update the dependencies they rely on. In principle, maintaining software health with updates demands that you use recent versions of any software and dependencies. Recent updates are less likely to be exploited and attacked via publicly known vulnerabilities than older versions, because with the latter, malicious actors have had more time to hunt for weaknesses. However, updating dependencies does present some risks as well as benefits. In this blog, let’s look at both, and what can be done to optimize your software and application security.
There are two primary risks when you’re updating dependencies: breaking your build, and pulling in a malicious update. The first is more common but less disruptive. The second is much less common but far more catastrophic.
To guard against breaking the build, make sure you have very good test coverage, so that every new version is tested and you can see that it isn’t breaking anything. At Mend, we take this further with crowdsourced testing via the Merge Confidence feature in our free Mend Renovate tool. Merge Confidence minimizes risk when updating dependencies by identifying undeclared breaking releases, based on analysis of test and release adoption data across our user base. It takes into consideration all of the tests our users apply, aggregates them, and reports the percentage of those updates that pass and those that fail. Using a feature like this means you don’t have to rely on your tests alone. Instead, you can rely on everybody who is using the Mend Renovate application, which covers over 500,000 repos on GitHub alone.
For every dependency update, Renovate opens a new branch inside the repo and creates a pull request to merge it in. The pull request runs all of your tests, so you see whether your pipeline is passing or failing, and Merge Confidence collects the same signal from other users’ pipelines. It then aggregates the percentage of pipelines that pass the tests and serves that result as a metric inside the pull request, where a reviewer can see it. So, for example, if only fifteen percent of pipelines are passing, that’s obviously a very bad indicator for this update. You can’t be confident about using it, so it isn’t a dependency you would want to take on without some serious manual review; to be prudent, you might simply avoid it altogether.
On the other hand, if ninety-nine percent are passing, that’s a strong signal the update is safe and sound to use. Even if adoption numbers aren’t high, if the dependency is passing everybody’s tests, the indications are that it isn’t breaking anything: the change isn’t problematic and you can reasonably use it without too much manual review. To make an even more informed decision, you can also check out the release notes that Merge Confidence displays for the dependency, so you can see what has changed.
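As a sketch of how this can be wired into automation: Renovate exposes Merge Confidence through its `packageRules` configuration (the `matchConfidence` matcher described in the Renovate docs), so you can, for example, restrict auto-merging to patch updates that the crowd data rates highly. A minimal illustrative `renovate.json`:

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "matchConfidence": ["high", "very high"],
      "automerge": true
    }
  ]
}
```

With a rule like this, low-confidence updates still get pull requests, but only high-confidence patches merge without a human in the loop.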
In addition to the measures described above, extra care is recommended when handling malicious updates. Whenever possible, it’s vital to detect and block malicious open source packages at the earliest opportunity, before your developers can download them and before they can pollute your codebase with malicious activity. You can achieve this by deploying a malware scanner for open source packages, such as Mend Supply Chain Defender.
It’s also prudent not to take on updates as soon as they’re released. How long you wait depends on your organization’s policies and how safe it wants to be; ten to twenty days is an acceptable window. By then, it’s likely to be known whether the update or package is secure. And if you can automate the process, all the better.
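If you use Renovate, this waiting period can be enforced in configuration rather than by hand; a minimal sketch using the `minimumReleaseAge` option, which delays proposing an update until the release has been public for the specified time:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "minimumReleaseAge": "14 days"
}
```

Releases younger than the threshold simply aren’t offered yet, which gives the community time to surface a compromised or broken version before it ever reaches your repo.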
I like to think about automating dependency updates like cleaning your home with the Roomba robotic vacuum cleaners. If you have the Roomba running all the time, then it’s going to clean your apartment so that it’s livable or sustainable. However, it’s not going to make it completely clean. To complement this, you’re still going to need to wash your floors once every so often, depending on how big your home is and how thoroughly you want it cleaned.
It’s the same thing with auto-merging dependency updates. Auto-merging all the small patches is a great way to stay up-to-date and keep your repo in a sustainable condition. You’re making it nimble enough to handle an urgent problem like a zero-day vulnerability if one arises, so you won’t find yourself wading through many months’ worth of update backlog before reaching the urgent update.

So, I like to think of auto-merging patch or minor dependency updates as happening in the background, like cleaning your apartment with the Roomba, until you’re ready to get your mop and bucket out and thoroughly clean, which is the equivalent of reviewing major dependency updates and seeing if there are any new features you want in your project. If you want to take on a major update, you need to resolve all the backward incompatibilities. If API names have changed, for example, then you need to manually change the names of the APIs you’re calling. That remains a manual process.
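In Renovate, this division of labor, auto-merging the small stuff while leaving majors for manual review, can be expressed with `packageRules`; an illustrative sketch:

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "minor"],
      "automerge": true
    },
    {
      "matchUpdateTypes": ["major"],
      "automerge": false
    }
  ]
}
```

Patch and minor updates then merge quietly in the background (assuming your pipeline passes), while major updates sit as open pull requests waiting for a human review.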
There are three main benefits to updating dependencies:
1. Vulnerability prevention. We recently examined npm CVEs and discovered that, in 2021, over ninety percent of them weren’t present in the most recent version of the affected dependency. So, in principle, if you ensure that you always run the most recent versions of your dependencies, you’ll automatically prevent ninety percent of newly disclosed vulnerabilities. Some vulnerabilities have no fix available, but those tend to be in unmaintained projects; this is rarely an issue in very active projects. The biggest benefit, then, is that you’re avoiding vulnerabilities, and this, of course, saves developers and the security team a lot of work.
2. New features. When you update dependencies, you get access to the software’s latest features and the latest APIs, as well as fresh bug fixes to protect your software. So, you’re simultaneously getting revised and updated capabilities while you’re keeping your software as secure as possible against the newest vulnerabilities and threats.
3. Protection against zero-day vulnerabilities. Maintaining dependency updates means that you’re better prepared to respond to urgent and unexpected security alerts, and you can be confident that your response will be fast and effective, and won’t itself break your code. If you’re regularly updating dependencies, then when a security patch lands you can simply apply it, immediately. On the other hand, if you don’t update dependencies, or do so only sporadically, then when there’s a sudden breach it’s much more of a scramble to locate it and protect your code. In this scenario, it becomes a crisis requiring urgent triage. Let’s say you haven’t updated your dependencies for a year and then you’re faced with a serious vulnerability like Log4j. Suddenly you have to implement a year’s worth of updates throughout hundreds of your applications, and you need to do it fast, without thorough testing to make sure nothing breaks. And you remain vulnerable while this is underway. The process is much slower and more prone to problems than if you update your dependencies frequently and regularly. Put simply, staying up to date is best practice, and it enables you to react quickly, decisively, and unproblematically.
With these benefits in mind, here are four best practices for keeping dependencies up to date.

1. Know and understand your dependencies. The first thing about keeping dependencies up to date is knowing what a dependency is. Developers picture dependencies as open source packages or third-party libraries, but in reality, a dependency is anything you use in your application that you didn’t create yourself. This includes open source packages, but it can also be the Docker images you’re basing your deployment on. It can be code written by other teams, Infrastructure as Code that you’re running in your application, or Kubernetes manifest files. It can, of course, also be source files copied directly into your project, though this is much less common.
2. Avoid unexpected dependency upgrades by using a lock file. A lock file locks all of your dependency versions in place: the direct dependencies declared in your regular package file, but also the transitive (indirect) dependencies. This way, you’re not getting any unexpected upgrades.
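For illustration, here is the general shape of an npm `package-lock.json` entry (hypothetical package name, checksum elided); note that it records the exact resolved version, source URL, and integrity hash, for transitive dependencies just as much as direct ones:

```json
{
  "name": "my-app",
  "lockfileVersion": 3,
  "packages": {
    "node_modules/example-lib": {
      "version": "1.4.2",
      "resolved": "https://registry.npmjs.org/example-lib/-/example-lib-1.4.2.tgz",
      "integrity": "sha512-<checksum>"
    }
  }
}
```

Because every entry is exact, two developers (or two CI runs) installing from the same lock file get byte-for-byte the same dependency tree.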
An alternative to using a lock file is to specify a range of versions that you will accept. This means you set your package file to accept any dependency version between two points. For instance, you can specify that you will use any version of a dependency between 1.1 and 2.0. If the most recent version now is 1.1 and version 1.2 comes out, you’ll automatically use that, because you have specified that this version is acceptable: npm resolves the range to the most recent version that fits the criteria you define.
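The range resolution just described can be sketched in a few lines of Python. This is a simplified model of npm’s behavior, ignoring pre-release tags and the special rules for 0.x versions:

```python
def parse(version):
    """Split a 'major.minor.patch' string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def in_range(version, low, high):
    """True if low <= version < high, i.e. the '>=1.1.0 <2.0.0' style of range."""
    return parse(low) <= parse(version) < parse(high)

# npm resolves a range by taking the highest published version that satisfies it
available = ["1.0.9", "1.1.0", "1.2.0", "1.9.3", "2.0.0"]
chosen = max((v for v in available if in_range(v, "1.1.0", "2.0.0")), key=parse)
print(chosen)  # 1.9.3
```

Note that 2.0.0 is excluded: the upper bound of a range is conventionally exclusive, precisely because a new major version signals breaking changes.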
However, there are two problems with this. You could break something accidentally, and you might not even know it, because the dependency upgrade was automatic. Or something malicious might lurk in the update, and you haven’t had time to review it and catch it.
So, using a lock file is good practice in general, because any time somebody builds the application, they’ll be using exactly the same dependencies that you’re using. If they clone the repo and build it, it will work uniformly; there should be no issue with it working on one machine but not another.
However, pinning exact versions in the package file is for applications, not for libraries meant to be consumed downstream. If, for example, a downstream developer uses one hundred other libraries and all of them pin their dependency versions, the developer could end up with ten or fifteen copies of the same dependency, and that’s cumbersome and confusing. It’s bad from an application-size perspective and a huge burden in terms of dependency management. You don’t want to do that. On the other hand, if you’re writing something that’s not meant to be used downstream, such as a web app, then there’s no problem.
Therefore, it’s definitely best practice to use a lock file. If you’re writing a web app, the best practice is also to pin dependencies; for a library, you’ll probably want to use ranges instead, to be more user-friendly downstream.
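In npm terms, the distinction looks like this in `package.json` (hypothetical package names): an exact version pins the dependency, while a caret range allows any compatible update within the same major version:

```json
{
  "dependencies": {
    "example-ui-kit": "1.4.2",
    "example-http-client": "^2.3.0"
  }
}
```

An application would typically use the first style (and commit its lock file); a published library would typically use the second, so that downstream consumers can deduplicate shared dependencies.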
3. Use SBOMs. Then, all your components will be visible, and you will know which components will need updating and by when. This will enhance the security and maintainability of your codebase and will help you ensure that your projects are agile. As I mentioned earlier, any poorly maintained project with old dependencies that haven’t been updated in months or years will be behind the curve in terms of updates, which will make it very hard to respond quickly and effectively to sudden breaches that arise, like zero-day vulnerabilities.
4. Choose good dependencies. When you’re introducing a new dependency, it’s important that it’s healthy. You want to see that the last release wasn’t a long time ago and that the project is still actively maintained. You want to see a decent cadence of commits coming into the repo and that it’s an active project. You want to see that issues and pull requests are being opened, and that there’s activity in the repo: not just commits, but community activity too. And security patches should be up to date, so you can be sure you have a secure dependency. You want to see that the maintainers care about security and are actively applying patches.
With this in mind, the necessity of updating dependencies can be illustrated in a simple analogy that we can all appreciate and that offers a compelling reason to do it, regularly and proactively:
“Updating dependencies is like going to the dentist. If you only go once every five years, it’s really going to hurt.”