DevSecOps best practices are increasingly being adopted to secure software supply chains. The challenge is finding ways to optimize these processes. Here are seven key considerations to help you adopt a successful and secure DevSecOps methodology.
DevOps has been a recognized methodology for more than a decade, but organizations have been slower to embrace the security aspect. This may be a matter of incentivization, prioritization, and measurement. If your KPIs and the way you measure success aren't based on security, then it won't get prioritized.
There's a large swath of tooling and responsibilities that DevSecOps encompasses, and it isn't always clear where these responsibilities lie. That's because different companies place them on different teams, all of which are expected to roll into the DevSecOps principles and deliverables.
Certain companies, especially in regulated environments, have no choice but to ensure that security is baked into their processes. But if security isn't already part of your developer workflows, adoption may take time. What's needed is clarity about responsibilities and more collaboration, especially if your company traditionally separates security from DevOps.
Collaboration is important because it builds a successful DevSecOps culture across teams. Figure out how to put teams together to solve common problems. One example is introducing changes with pull requests that require reviewers from both the development and security teams. If you've got the bandwidth, embed a security expert in the DevOps team and vice versa, so each side understands the other's challenges and how they go about solving them. Finally, ensure that your DevOps team has KPIs linked to security; this reinforces its importance.
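On GitHub, for instance, one way to enforce that kind of cross-team review is a CODEOWNERS file combined with branch protection ("Require review from Code Owners"). A minimal sketch, where the team names and paths are hypothetical placeholders for your own:

```
# .github/CODEOWNERS — illustrative only; team names are hypothetical.
# Security team must approve changes to dependency manifests and IaC.
/package.json   @example-org/security-team @example-org/dev-team
/terraform/     @example-org/security-team
# Everything else needs a developer review.
*               @example-org/dev-team
```

The effect is that a change touching sensitive paths cannot merge without a security reviewer, making collaboration a property of the workflow rather than a matter of goodwill.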
Collaboration presents its own issue among security teams: the issue of control. Much of the security integration in DevSecOps requires security teams to relinquish control and let developers and DevOps teams do the integration. That's not easy for them, but a more collaborative approach means sharing control, figuring out the separation of duties, and ensuring that things are properly implemented.
A need for control also runs counter to automation. DevOps engineers are by default people who embrace automation; it's how DevOps has been able to scale and accelerate. Security teams, however, have been slow to get into that game and still tend to do a lot of things manually and reactively. It can be a big lift getting security teams to think of automation as a way to scale and get ahead of security problems, but it's essential for scaling security reliably and delivering a robust, future-proof security strategy.
Businesses understand the value of agility, of being quick to fix bugs and introduce new features. We have to think about how security can be agile. This is where automation is key.
Automation is what our most mature customers focus on in DevSecOps because it improves the efficiency of security tools and makes the whole process trustworthy. This matters because security engineers are typically heavily outnumbered by DevOps engineers, so it makes sense to automate security testing and policies so that you can continue to develop software at pace.
With often just one security engineer for every fifteen DevOps engineers, automation removes the burden of manual scanning, checking, and remediation, freeing security teams to focus on emerging threats and more complex problems. Automation helps security teams ensure that the business can keep moving at the speed it wants to.
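In practice, this often means wiring security checks into the CI pipeline so every pull request is scanned without anyone having to ask. A sketch using GitHub Actions, where the scanner steps are placeholders for whatever SCA/SAST tooling you actually use:

```yaml
# .github/workflows/security.yml — illustrative sketch; the scan and
# gate scripts are hypothetical stand-ins for your own tooling.
name: security-scan
on: [pull_request]
jobs:
  sca:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Dependency scan (placeholder for your SCA tool)
        run: ./scripts/run-sca-scan.sh
      - name: Gate the merge on findings (hypothetical script)
        run: ./scripts/check-findings.sh --max-severity high
```

Because the job runs on every pull request, security testing happens at the same cadence as development instead of as a periodic manual exercise.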
Observability is critical to DevSecOps because you can't secure what you can't see. Achieving it means identifying issues, and their potential impact, earlier. Shift left, closer to the source, before components are deployed or go into runtime. Then you can find and address any issues before they have an impact, which optimizes the efficiency of your DevSecOps.
Nevertheless, after you deploy you may still find new vulnerabilities and problems, which feed back into the pipeline, and you need to establish who in the development circle needs to solve a particular problem. A lot of companies struggle with figuring this out. The answer is what we’ve called shifting smart, which is simply applying the practice of iterative scanning, testing, and remediation throughout the software development lifecycle (SDLC). This elevates the importance of observability.
Kubernetes is a good example of where observability plays a huge role. If you depend on policy agents or scanning agents running in your cluster, you want to make sure those services are available and successfully serving requests. Observability becomes a prerequisite for the components deployed to your system, giving you the visibility you need to secure what you can see.
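At a minimum, that means giving those agents readiness and liveness probes so the cluster itself reports whether they are serving requests. A sketch for a hypothetical policy-agent Deployment, where the image, port, and health endpoint are assumptions about your particular agent:

```yaml
# Illustrative probes for a hypothetical policy agent; the image,
# port, and /healthz path are assumptions, not a real product's API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: policy-agent
spec:
  replicas: 2
  selector:
    matchLabels: {app: policy-agent}
  template:
    metadata:
      labels: {app: policy-agent}
    spec:
      containers:
        - name: policy-agent
          image: example.com/policy-agent:1.0   # hypothetical image
          ports:
            - containerPort: 8443
          readinessProbe:   # only route requests once the agent can serve them
            httpGet: {path: /healthz, port: 8443, scheme: HTTPS}
            initialDelaySeconds: 5
          livenessProbe:    # restart the agent if it stops responding
            httpGet: {path: /healthz, port: 8443, scheme: HTTPS}
            periodSeconds: 10
```

If the agent stops serving, the readiness probe pulls it out of rotation and the liveness probe restarts it, so a silent failure of your security tooling shows up as an observable event rather than a blind spot.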
The more issues you surface, the greater the risk of alert fatigue. You can avoid this by generating fewer alerts and reducing false positives. Software composition analysis (SCA) tells you whether you're using a dependency that has a known vulnerability and whether you might need to upgrade it. At Mend.io, we make the solution more mature with reachability analysis, which ascertains whether a dependency is actually reachable or exploitable. If it isn't, then don't waste valuable time and resources addressing it. That alone reduces the number of alerts.
Automation also reduces the impact of alerts. Ultimately, the reason people get alert fatigue is that alerts trigger them to take some type of action; when remediation is automated, fixing alerts is no longer a burden. A better understanding of context also helps you establish what's critical. You may find a vulnerability with a high CVSS score, but is it in development, production, or a runtime environment? Is it connected to anything else? Is it public facing? Is the component actually executed? These considerations affect how impactful a vulnerability could be and whether it's worth addressing. Contextual knowledge helps you identify when a particular risk should be prioritized, so you only focus on what's necessary.
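The questions above can be folded into a simple scoring function. The following is a hypothetical sketch of context-aware prioritization; the fields and weights are illustrative, not any vendor's actual scoring model:

```python
# Hypothetical context-aware prioritization; the weights are
# illustrative, not a real scoring standard.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float          # base CVSS score, 0.0-10.0
    reachable: bool      # is the vulnerable code actually called?
    environment: str     # "production", "staging", or "development"
    public_facing: bool  # exposed to the internet?

def priority(f: Finding) -> float:
    """Scale the CVSS score by contextual factors (illustrative weights)."""
    if not f.reachable:
        return 0.0  # unreachable code: don't spend time on it
    score = f.cvss
    # Issues in production matter more than those still in development.
    score *= {"production": 1.0, "staging": 0.6, "development": 0.3}.get(f.environment, 0.5)
    if f.public_facing:
        score *= 1.5
    return min(score, 10.0)

# A high-CVSS finding in development ranks below a lower-CVSS one
# that is reachable, public facing, and in production.
dev = Finding(cvss=9.8, reachable=True, environment="development", public_facing=False)
prod = Finding(cvss=7.5, reachable=True, environment="production", public_facing=True)
print(priority(dev), priority(prod))
```

The point is not the specific numbers but the shape: raw severity is only one input, and context can legitimately demote a "critical" finding or promote a "high" one.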
Regulation isn’t new in highly regulated environments, especially for government-related projects. But regulations will increase the importance of security strategy and accountability in all organizations. Software bills of materials (SBOMs) will be required to ensure that zero trust and software supply chain security are at the forefront of any projects. This new vigilance will drive more widespread adoption and maturation of workflows, especially those that demonstrate secure delivery of applications with increased velocity via automation.
Furthermore, regulatory compliance is a great way for security leaders to get their teams to adopt best practices. Ultimately, it's a positive way for them to effect change that will help everyone get ahead of security problems.
Perhaps the biggest risk of AI at the moment, though hopefully a short-term one, is what's known as hallucination: AI is sometimes very confidently wrong, and you can't always trust it. With large language models (LLMs), and generative AI in particular, think of AI as an advisor, not a solution. View it as a way to make suggestions and generate ideas not necessarily seen before. You can use LLMs like GPT-4 to give security teams options on alerts and ideas for how they might resolve them. But take note: proceeding with an AI-generated solution without due diligence could be risky, and some people will implement AI incorrectly and cause spectacular failures. Nevertheless, AI offers many benefits, especially when it comes to speeding up processes like scripting infrastructure as code. My advice is: embrace this technology, but be circumspect when using it.