
How to Manage Risk Effectively in Cloud-Native Environments


We’ve all got our heads in the cloud, or if not yet, we’re well on our way there.

In other words, the process of digital transformation is happening at such a pace that almost all organizations will soon be working in the cloud and using cloud-native technology. Analyst firm Gartner has predicted that by 2025, over 95% of new digital workloads will be deployed on cloud-native platforms, up from 30% in 2021.

Any shift in working practices necessitates a change in risk management to handle the new environment in which people work. This is particularly important when moving to a cloud-native environment, because the cloud comes with its own risks and security considerations that must be addressed, and the shift to cloud-native development directly impacts many aspects of your organization’s security procedures. Consequently, we need to completely rethink our approach to security in cloud-native environments. In this blog, let’s take a look at why these environments are vulnerable, what two of the main risks are, and what you can do about them.

Why are cloud-native environments vulnerable?

Cloud-native environments are accessible to many different users. This is both their strength and their weakness. Depending on the complexity of the environment, the security measures in place, and any compliance requirements, a developer can set up a cloud environment simply by going to a cloud services provider like Amazon, registering an account, and spinning up an environment. Then they can start to work, with the permissions set by the cloud service provider and within the parameters of the organization’s policies. This is great for collaboration, but it also creates a very open environment in which malicious actors can find or create vulnerabilities to exploit, or insert damaging code.

Moreover, the amount of development that can happen in the cloud, and the speed with which it can happen, is wonderful, but it simultaneously poses a challenge. The more software and applications there are in the cloud, and the more data, products, and services that are generated and housed there, the bigger the potential for vulnerabilities, and for those vulnerabilities to be exploited. New code, new applications, new technologies, and new environments create new possibilities for attack. Alongside the velocity of development, they require constant vigilance and an agile mindset and methodology to stay ahead of flaws and malicious activity. Implementing proper security measures and running regular vulnerability assessments can mitigate those risks.

Risk 1. The visibility challenge

Perhaps the biggest risk of cloud-native environments is a lack of visibility. By their very nature, these environments are huge and can seem amorphous. The volume of code, software, components, and dependencies in any cloud environment is far bigger than in on-premises environments. Plus, cloud environments are distributed systems, with cloud-native applications based on microservices. Components, particularly from open source software, come from a large variety of sources and can be linked with or dependent upon others in an increasingly complex network of relationships, both within and across different cloud environments. As a result, the attack surface of cloud-native environments continually expands, leading to the emergence of new attack vectors and vulnerabilities that threat actors can seek to exploit.

Furthermore, the cloud introduces a shared responsibility model that requires its multiple users to be accountable for security and for taking sensible steps to avoid any breaches. However, as it’s a more “open” platform in this respect, it lays itself open to more errors or lapses of judgment. Arguably, the security of any given cloud environment is only as good as its weakest parts and users. This can be tricky to police, especially as there’s so much to try and keep an eye on. It’s no surprise, therefore, that without taking the right approach to security, it’s easy for some anomalies, vulnerabilities, and threats to get overlooked. And so, the first challenge is to gain accurate visibility into your cumulative assets.

Risk 2. Security posture and the issue of containers

The second challenge is to establish an accurate and comprehensive security posture for your cloud-native environment. As we’ve seen, there are a lot of different risk factors that can impact this environment, and it’s not always possible to scan every open source component. Even if you can detect them all, there’s more to address: language-specific files that you need the capability to scan, hidden malware, and API keys that developers forget to remove. You also need accurate visibility into your container images.
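To illustrate the point about forgotten API keys, here’s a minimal sketch of the kind of check a pipeline could run before code is stored. The patterns, file selection, and exit-code convention are illustrative assumptions, not a complete ruleset or a feature of any particular product:

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners ship far larger rulesets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list:
    """Return human-readable findings for one source file."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            findings.append(f"{path}:{line_no}: possible {name}")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    # Only Python sources are scanned here, purely to keep the example short.
    results = [finding for p in root.rglob("*.py") for finding in scan_file(p)]
    print("\n".join(results) or "No likely secrets found.")
    sys.exit(1 if results else 0)  # a non-zero exit fails the pipeline step
```

A real secret scanner would cover many more credential formats, handle binary files, and track findings over time; this sketch only shows the shape of the check.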

In fact, when it comes to container security, vulnerability management takes a huge turn. Container images have many different layers and components, and it’s important to assess all of them based on the actual runtime environment. An open source component that’s running may have a very high severity score in the abstract, but when it’s put into the cloud-native environment, running on top of a different base layer, the context changes and its impact is lowered. Failure to take this into account can lead to false positives that obfuscate visibility and cause security inefficiencies or errors.
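As a rough sketch of what layer-aware assessment can look like, the snippet below walks a simplified, made-up list of per-layer findings (the `layers` records and the `base_layer_advisories` mapping are hypothetical) and lowers the effective severity of a finding when the base layer’s vendor advisory marks the image as not affected:

```python
# Minimal sketch of layer-aware severity adjustment. The data structures are
# hypothetical; a real tool would build them from an SBOM and from the
# base-image vendor's security advisory feed.

layers = [
    {"layer": "base", "component": "openssl",  "cve": "CVE-XXXX-0001", "nvd_severity": "critical"},
    {"layer": "app",  "component": "requests", "cve": "CVE-XXXX-0002", "nvd_severity": "high"},
]

# What the base-image vendor says about CVEs in the layers it maintains.
base_layer_advisories = {
    "CVE-XXXX-0001": "not_affected",  # e.g. the vulnerable code path isn't built in
}

def effective_severity(finding: dict) -> str:
    """Downgrade severity when the base layer's advisory says the image isn't affected."""
    vendor_status = base_layer_advisories.get(finding["cve"])
    if finding["layer"] == "base" and vendor_status == "not_affected":
        return "negligible"
    return finding["nvd_severity"]

for f in layers:
    print(f["cve"], f["component"], "->", effective_severity(f))
```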

As a result, when organizations move their workloads to the cloud, they must re-evaluate their application deployment process. Your security posture — the way your security operates, the tools you use, and the way they work — must account for the variety of components you need to scan and the changes they can undergo within the context in which you use them. Only with this context-based vulnerability management approach can you gain a clear and accurate assessment of actual risk, and an accurate risk prioritization mechanism.

How to achieve a risk prioritization mechanism

To do this successfully, you need to tackle three main challenges. The first is to gain accurate visibility into your assets. The second is to gain accurate visibility into your container images across their different layers and to map all the different components.

The third is to assess each case based on the contextual environment in which it’s running. Together, these give you a robust combination: the right visibility, a map of all the different components, and an assessment of the actual risk posture based on real usage and the vendor’s security advisories. A vulnerability that is, or seems, severe may be less so in a different context, and may prove to be a false positive in a particular use case. You can then prioritize your activity to address and remediate genuine threats.
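A rough illustration of what such a prioritization mechanism might look like: the records and weights below are invented for the example, but the idea is to rank findings by contextual signals (a known exploit in the wild, whether the component is actually loaded at runtime, the vendor’s own rating) rather than by CVSS base score alone:

```python
# Hypothetical prioritization sketch: rank findings by contextual risk.
# All vulnerability records and weights are illustrative.

findings = [
    {"cve": "CVE-XXXX-1111", "cvss": 9.8, "exploit_in_wild": False, "loaded_at_runtime": False, "vendor_severity": "low"},
    {"cve": "CVE-XXXX-2222", "cvss": 7.5, "exploit_in_wild": True,  "loaded_at_runtime": True,  "vendor_severity": "high"},
    {"cve": "CVE-XXXX-3333", "cvss": 9.1, "exploit_in_wild": False, "loaded_at_runtime": True,  "vendor_severity": "medium"},
]

VENDOR_WEIGHT = {"low": 0.2, "medium": 0.6, "high": 1.0, "critical": 1.2}

def contextual_risk(f: dict) -> float:
    """Combine the base score with contextual signals; higher means fix sooner."""
    score = f["cvss"] * VENDOR_WEIGHT[f["vendor_severity"]]
    score *= 2.0 if f["exploit_in_wild"] else 1.0
    score *= 1.5 if f["loaded_at_runtime"] else 0.5
    return round(score, 1)

# The "critical" CVSS 9.8 finding drops to the bottom of the queue because
# nothing in its context makes it exploitable here.
for f in sorted(findings, key=contextual_risk, reverse=True):
    print(f["cve"], contextual_risk(f))
```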

Context-based vulnerability management in action

For instance, while a Common Vulnerability Scoring System (CVSS) base score addresses the theoretical exploitability of a CVE (Common Vulnerabilities and Exposures) and classifies 60% of vulnerabilities as having high or critical severity, only 2.5% of vulnerabilities have actual exploits in the wild that are available for attackers to use. Unfortunately, relying purely on a public database like the NVD (National Vulnerability Database) isn’t very accurate. There’s often a four to five-week delay before the NVD is updated, which can cause many false positives to arise. CVEs that appear more severe than they really are, or that have been fixed elsewhere, can still be identified as threats, and time and resources are then wasted on mitigating them when the risk they pose is small to none.

To minimize false positives in cases where the NVD’s report of a CVE’s severity conflicts with the vendor’s report, the vendor score should take precedence. In many cases, the vendor score will be lower because the context in which the vendor used the component reduced the risk of the CVE through means other than fixing the vulnerable code. Put simply, the right context can help eliminate false positives, and getting it requires reconciling multiple sources and doing some research.
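The precedence rule described above is simple to express. In this sketch, the lookup tables and severity labels are placeholders for illustration; the vendor’s advisory rating, when one exists, overrides the NVD rating for the same CVE:

```python
# Minimal sketch of the "vendor score takes precedence" rule described above.
# The severity values and lookup tables are placeholders for illustration.

nvd_severity = {
    "CVE-XXXX-0100": "critical",
    "CVE-XXXX-0200": "high",
}

vendor_severity = {
    # The vendor analyzed the CVE in the context of its own build and rated it lower.
    "CVE-XXXX-0100": "low",
}

def reconciled_severity(cve_id: str) -> str:
    """Prefer the vendor's contextual rating; fall back to NVD when the vendor is silent."""
    return vendor_severity.get(cve_id, nvd_severity.get(cve_id, "unknown"))

for cve in nvd_severity:
    print(cve, "->", reconciled_severity(cve))
```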

As an example, in a standard software development lifecycle (SDLC), a developer may use an open source software package that is assigned an NVD rating based on the potential security exposures of a specific vulnerability. But when the developer builds their container image and moves to deploy it to the production environment, they use a Universal Base Image (UBI) from Red Hat, as defined by their security admin. Consequently, the CVE score changes at each step of the application lifecycle, from high to negligible. The main reason is that each open source vendor has its own security advisory, addresses issues itself, and aims to provide the most secure base image it can. This gives us a new scoring mechanism that is based on actual vendor analysis of the vulnerability’s impact on that image in the context in which it’s used.

Using such vendor-supplied ratings that factor in the usage context reduces the number of false alarms and false positives you need to mitigate. This increases the visibility of real vulnerabilities and reduces your risk exposure, which are the keys to minimizing the attack surface and managing risk.

Best practices for cloud-native security

So, generally, what are the best tactics and strategies for mitigating risks in cloud-native environments?

The key is to be proactive rather than reactive. Don’t wait for problems and crises; take preventative measures as early as possible in the SDLC. Detect potential threats in your pipeline and stop them before they are stored. Shift security as far left as possible, because an issue caught all the way left, in the repo at the code level, never makes it to the cloud.

Also, when you’re scanning, get the full picture, or as much of it as you can. The first place to implement security controls is the CI pipeline, which is as far left as you can go. Then introduce as many security gates as possible: one when you’re creating software and applications in the pipeline, and another directly in your image registry, because not all images are created from scratch. Some are imported from open source communities, and developers may use images that aren’t allowed or import images directly into the registry.
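As a sketch of what a pipeline gate could look like, the script below reads a JSON scan report and fails the build when findings exceed a severity policy. The report format, file name, and threshold are assumptions; adapt them to whatever scanner your CI actually runs:

```python
# Hypothetical CI gate: fail the build if the scan report contains findings
# above the allowed severity. The report shape and policy are assumptions.

import json
import sys

BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_path: str) -> int:
    with open(report_path) as fh:
        # assumed shape: {"findings": [{"id": "...", "severity": "..."}]}
        report = json.load(fh)
    blocking = [f for f in report.get("findings", []) if f.get("severity") in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCKED: {f.get('id')} severity={f.get('severity')}")
    return 1 if blocking else 0  # the non-zero exit code stops the pipeline stage

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json"))
```

Wired into the pipeline, the non-zero exit code is what stops the stage, so the same gate works whether the scanner ran against source code or against images pulled into the registry.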

Another gate belongs right before you deploy. Scan all the artifacts generated during the creation of the container image. Scan the container and all the objects in it: anything originating from source code and from the build on the CI/CD system.

Furthermore, assess the security posture of each container you’re trying to deploy on your cluster, from vulnerability management through static analysis to the runtime configuration. Check not only what’s running, but also how to run it effectively and securely.
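For the runtime-configuration part of that posture check, a minimal sketch using the official Kubernetes Python client (assuming the `kubernetes` package is installed and a kubeconfig is available) might flag containers that run privileged or without an enforced non-root user:

```python
# Minimal runtime-configuration check, assuming the `kubernetes` Python client
# is installed and a kubeconfig is available. It flags a couple of common
# misconfigurations; a real posture assessment would check far more.

from kubernetes import client, config

def check_pod_security():
    config.load_kube_config()  # or load_incluster_config() when running inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        for container in pod.spec.containers:
            sc = container.security_context
            if sc is None or sc.privileged:
                print(f"{pod.metadata.namespace}/{pod.metadata.name}/{container.name}: "
                      "privileged or no securityContext set")
            elif not sc.run_as_non_root:
                print(f"{pod.metadata.namespace}/{pod.metadata.name}/{container.name}: "
                      "runAsNonRoot not enforced")

if __name__ == "__main__":
    check_pod_security()
```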

With all these considerations in mind, and with the measures in place to address them, you lay the foundations of a sound and robust security strategy for your cloud-native environment.

Learn more about Mend’s support of cloud services

Meet The Author

Omer Dahan

Omer joined Mend in September 2022 as a product manager, following extensive experience in software and systems engineering in both military and commercial contexts. His career to date includes a variety of positions in engineering, product, and startup domains. Omer holds an MSc in Electrical and Electronics Engineering and an MDes in Design for Engineers.
