Six Steps to Achieve Zero Trust in Application Security

Zero-Trust in application security best practice

The continuing escalation in cyberattacks on large corporations, coupled with an acceleration of digital transformation, has forced organizations to reassess their security strategies and infrastructure. This escalation has driven growth in the adoption of zero-trust application security and compliance. 

The zero-trust approach means that no devices or software should be trusted by default, even if they have permissions and previous verification. Every component should be scanned, analyzed, and tested for vulnerabilities.

I recently had the pleasure of hosting a fascinating discussion with Gevorg Melikdjanjan, Product Manager at our Dutch partner, Logic Technology B.V., that focused on how to achieve zero trust in application security.

Logic Technology is a leader in Europe’s embedded technology market. Its combination of products, support, and consultancy services helps developers and engineers improve their embedded engineering process and accelerate the time-to-market of their technology, using trusted and compliant software tools and hardware components.

Gevorg’s experience and knowledge contributed to an enlightening conversation. In this blog, I summarize the main insights from our discussion, which highlight six of the key ways to build a successful zero-trust strategy.

Step 1. Shift open source security and compliance left

It’s widely acknowledged that between 70 percent and 80 percent of application code is open source. Huge amounts of software and libraries are already pre-written and ready to use. That’s great for speed of integration, but the traditional “trust by default” model poses security risks. If trust is assumed, vigilance is lax, and compromised packages or vulnerable code may get overlooked, causing application vulnerabilities. This is a common problem. The solution is to prevent these issues as early as possible in the software development lifecycle (SDLC) by shifting security left, ideally when writing code or researching new components. Detecting and immediately remediating vulnerabilities early in the SDLC overcomes the inefficient workflows that development teams experience when they are forced to detect and remediate separately.
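As a minimal sketch of such a shift-left gate, consider checking declared dependencies against an advisory list before code is ever merged. The package names, versions, and advisory data below are invented for illustration; a real tool would query a live vulnerability database.

```python
# Hypothetical shift-left dependency check. Package names, versions,
# and advisory entries are illustrative only, not real vulnerability data.

ADVISORIES = {
    # package name -> versions with known vulnerabilities
    "examplelib": {"1.2.0", "1.2.1"},
    "legacyparser": {"0.9.0"},
}

def check_dependencies(declared):
    """Return the (name, version) pairs that match a known advisory."""
    flagged = []
    for name, version in declared:
        if version in ADVISORIES.get(name, set()):
            flagged.append((name, version))
    return flagged

deps = [("examplelib", "1.2.1"), ("safelib", "3.0.0")]
print(check_dependencies(deps))  # [('examplelib', '1.2.1')]
```

Run as a pre-commit hook or early pipeline stage, a check like this surfaces a compromised or vulnerable package while the developer is still in context, rather than weeks later in a separate audit.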


Step 2: Combine SCA for open source and SAST for custom code

Software composition analysis (SCA) does a fine job of analyzing and remediating the 70 percent to 80 percent of software that’s open source, but it still leaves the 20 percent to 30 percent of proprietary code unprotected. Static application security testing (SAST) is one of the primary ways to safeguard custom code from vulnerabilities. However, using tools that are segregated by code type really impedes development, sometimes to a point where developers ignore vulnerabilities and updates that could expose their code base to weaknesses in the future.

Because it’s easier to have a solution that integrates with the entire SDLC, unifying the two solutions provides the greatest security benefit. This gives you control throughout the entire development lifecycle, making sure existing processes aren’t hindered.
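The idea of a unified gate can be sketched as a single pipeline step that runs both analyses and merges the findings. The scanner functions below are placeholders standing in for real SCA and SAST engines; the detection logic is deliberately trivial.

```python
# Sketch of a unified SCA + SAST pipeline gate. The two scan functions are
# stand-ins for real analysis engines; their matching rules are illustrative.

def sca_scan(manifest):
    """Placeholder SCA: flag open source packages matching a naive rule."""
    return [{"type": "sca", "component": p} for p in manifest if p.endswith("-old")]

def sast_scan(source_files):
    """Placeholder SAST: flag custom-code files matching a naive rule."""
    return [{"type": "sast", "file": f} for f in source_files if "unsafe" in f]

def unified_scan(manifest, source_files):
    """One gate covering the whole code base: open source plus proprietary."""
    return sca_scan(manifest) + sast_scan(source_files)

findings = unified_scan(
    manifest=["requests-old", "numpy"],
    source_files=["app/unsafe_query.py", "app/main.py"],
)
print(len(findings))  # 2
```

The point of the design is that developers see one combined result set in one workflow, instead of reconciling two tools with separate dashboards and triage queues.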

Furthermore, most tools focus on identifying issues in your code, but there has been little focus on remediation to fix them. That’s changing, and it’s something Mend has pioneered by offering not only vulnerability detection for SAST, but also automated remediation. This enables your developers to produce quality software more quickly and frees them to focus on new features and the functionality of their software, rather than chasing security issues.

Step 3: Discover new zero-day vulnerabilities or vulnerabilities pre-CVE/NVD

Even with a robust security solution in place, vulnerabilities can arise that aren’t yet known as a common vulnerability and exposure (CVE) or known to the National Vulnerability Database (NVD). As vulnerability volumes escalate, it’s increasingly important to detect and remediate these speedily.

At Mend, we have a threat research team that’s dedicated to finding new vulnerabilities and weaknesses in common frameworks, and rapidly alerting customers to stop the pipeline from creating the artifact. This is done through policy engines. So, for example, if a zero-day vulnerability such as the one in Log4j is detected, we can stop all our production pipelines, get the issue fixed very quickly, and safely reopen the pipelines.
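A policy-engine gate of this kind can be sketched as a check that refuses to produce an artifact while a flagged component is present. The blocklist below is a hypothetical emergency list, not Mend's actual policy format.

```python
# Sketch of a policy-engine pipeline gate. The blocklist is a hypothetical
# emergency advisory, not a real policy configuration.

BLOCKED = {"log4j-core": {"2.14.1"}}

def pipeline_gate(components):
    """Raise to stop artifact creation if any component is on the blocklist."""
    for name, version in components:
        if version in BLOCKED.get(name, set()):
            raise RuntimeError(f"pipeline stopped: {name} {version} is blocked")
    return "artifact approved"

print(pipeline_gate([("commons-io", "2.11.0")]))  # artifact approved
```

Once the threat research team updates the blocklist, every pipeline that consumes it halts automatically; reopening the pipelines is then just a matter of upgrading the component and re-running the gate.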

Step 4: Avoid common security vulnerabilities in proprietary code

With SAST, we now have the technology to solve issues within the integrated development environment (IDE), usually as developers are writing code. Using this new generation of SAST provides better ease of integration and speed than previously available. Plus, you can reinforce your security processes by improving education and awareness, thereby encouraging increased adoption of security tools. Adoption is accelerated when the software is easy to use, and developers can be confident that it won’t slow down development.

Take Visual Studio Code as an example. Having the integration where you can scan for open source vulnerabilities and proprietary code weaknesses is critical. It makes it easy and quick to scan. Then, providing the remediated code is really the number one goal.

Let’s say that the developer can then see there’s an unsanitized input, maybe a potential SQL injection attack. There could be 70 to 80 different types of common weaknesses. But we also provide the fix in the IDE. Consequently, all that should be necessary is to copy and paste code and re-run the scan, which only takes a matter of seconds. Then you’ll hopefully be able to see that weakness drop away. This illustrates that the combination of ease of integration, speed of scanning, and developers’ improved awareness of the tool constitutes best practice for securing code.
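The unsanitized-input case mentioned above can be made concrete with a small example, using Python's standard-library sqlite3 module and an invented table; the remediation a SAST tool would suggest is to replace string concatenation with a parameterized query.

```python
# Illustrative SQL injection and its remediation, using an in-memory
# sqlite3 database and invented data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern a SAST tool would flag: input concatenated into SQL.
# query = "SELECT role FROM users WHERE name = '" + user_input + "'"

# Remediated pattern: the parameterized query treats the input as data only.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload no longer matches any row
```

With the fix delivered in the IDE, the swap from concatenation to the parameterized form is exactly the kind of copy-paste-and-rescan remediation described above.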

Step 5: Prioritize vulnerabilities

Historically, most organizations have prioritized vulnerabilities by severity, using CVSS scores. They look at metrics to establish factors such as whether privilege elevation is required and what type of network attack it is. Logically, it sounds correct to focus on critical vulnerabilities, but a more important question is whether the vulnerability in question is exploitable. Effective prioritization analyzes the code and identifies not only when there’s a function or procedure that presents a vulnerability, but also whether you’ve imported the library, whether it’s being used, and therefore whether each vulnerability is really a priority. Those that aren’t imported, or that you don’t use, may be a threat in other contexts, but not yours, so they can be deprioritized. As a result, you won’t waste time and resources finding and fixing vulnerabilities that pose little or no threat to your code base.
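The reachability idea reduces to a simple filter: partition findings by whether the affected component is actually imported by the application. The CVE identifiers and component names below are invented; real effective-usage analysis also checks whether the vulnerable function itself is called.

```python
# Sketch of reachability-based prioritization. CVE IDs and component
# names are invented for illustration.

def prioritize(vulns, imported_modules):
    """Split findings into exploitable (component imported) and deprioritized."""
    exploitable, deprioritized = [], []
    for v in vulns:
        bucket = exploitable if v["component"] in imported_modules else deprioritized
        bucket.append(v)
    return exploitable, deprioritized

vulns = [
    {"id": "CVE-0000-0001", "component": "imgcodec", "severity": "critical"},
    {"id": "CVE-0000-0002", "component": "xmlutils", "severity": "high"},
]
exploitable, deprioritized = prioritize(vulns, imported_modules={"xmlutils"})
print([v["id"] for v in exploitable])  # ['CVE-0000-0002']
```

Note how the "critical" finding is deprioritized while the "high" one surfaces first: severity alone would have ordered the queue the other way around.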

Some common security exploits in technology with embedded software

TCP/IP data transfer protocol

Technology with embedded software poses its own challenges. A good example of an attack vector in this context is the TCP/IP data transfer protocol. It’s the most widespread data transfer protocol because of its high compatibility. However, there are many TCP/IP vulnerabilities known to hackers because the protocol uses open channels for data transfer. So, attackers could basically abuse this to access, listen to, and modify traffic.

There are other well-known open source projects, such as Linux distributions that are widespread in embedded systems. Probably the most famous Linux exploit is with the Nest thermostat in 2016. A group of UK researchers demonstrated that they could take control of the device by using ransomware to exploit a vulnerability in open source components.


Another famous example is Stuxnet, when a malicious computer worm targeted supervisory control and data acquisition (SCADA) systems and caused damage to a governmental nuclear program by targeting programmable logic controllers (PLCs) which were used to control gas centrifuges that separated nuclear material. The worm combined multiple zero-day exploits with a Windows rootkit, the first known PLC rootkit, antivirus evasion techniques, peer-to-peer updates, and certificates stolen from trusted CAs.


A more recently disclosed vulnerability affecting the Polkit component was present in several Linux distributions for over 12 years. The vulnerability is easily exploited and allows non-privileged users to gain full root privileges on a vulnerable host. Polkit is commonly used to control operating system privileges in Linux distributions. Much like Log4j, it’s a good example of a long-standing but unknown vulnerability that can trigger significant issues once a CVE is published. We don’t know how many times it has been exploited or used, but there has been up to 12 years’ worth of exposure, which is significant.

The Yocto Project

The Yocto Project is closely tied to Linux, and Yocto is core to a lot of embedded products. A good example is a customer that has developed directional microphones, some of which are used by the military to detect where snipers are located based on sound. Yocto builds can pull in many open source components, and the challenge is to ensure that any vulnerabilities are detected and fixed.

So, how can organizations identify and block these threats as early as possible in the SDLC? How can you be confident that these kinds of software are secure?

Step 6: Deploy SBOMs

The best way to mitigate these types of attacks is by having a software bill of materials (SBOM) — essentially a list of all the software components within these products. This is important because once you know what’s being used, you know the versions and the security status of each component. With this knowledge, you can identify whether a project has any zero-day exploits or vulnerabilities. Then, you can upgrade the components and mitigate any risk as quickly as possible.

Deploying an SCA solution enables you to scan containers and Docker registries and create an SBOM for the application. You’re not limited to a repository or a pipeline. Lots of vendors and embedded technologies will package the application and might use an embedded distribution of Linux or Unix, and that container can be scanned as well to produce the SBOM. Once you have that bill of materials, you can understand exactly what the threat is and what security weaknesses are in the project.
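An SBOM audit can be sketched as parsing the component inventory and checking each name and version against advisory data. The SBOM below uses a simplified CycloneDX-like shape, and the vulnerable-version set is invented for illustration.

```python
# Sketch of an SBOM audit. The SBOM uses a simplified CycloneDX-like
# shape, and the advisory data is invented for illustration.
import json

sbom_json = """
{
  "components": [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "zlib", "version": "1.2.11"}
  ]
}
"""

KNOWN_VULNERABLE = {("openssl", "1.1.1k")}  # hypothetical advisory set

def audit_sbom(sbom_text):
    """Return the components in the SBOM that match a known advisory."""
    sbom = json.loads(sbom_text)
    return [
        (c["name"], c["version"])
        for c in sbom["components"]
        if (c["name"], c["version"]) in KNOWN_VULNERABLE
    ]

print(audit_sbom(sbom_json))  # [('openssl', '1.1.1k')]
```

Because the SBOM enumerates every component and version, the same audit can be re-run the moment a new advisory lands, immediately answering "are we exposed, and where?"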

Mend will defend you

With Mend Supply Chain Defender, you can implement a robust zero-trust approach to your security. It enables you to detect and block malicious open source software from entering your code base, and to swiftly remove malicious packages from registries, protecting users from accidentally installing malicious code and from falling victim to software supply chain attacks.

Mend Supply Chain Defender can be deployed by individual developers via a plugin to their package managers. Alternatively, enterprises using JFrog Artifactory and Mend SCA Enterprise can activate Mend Supply Chain Defender in a centralized fashion to protect all projects linked to their JFrog Artifactory registries.

Ask the Experts: How to Achieve Zero Trust in Application Security? Watch our webinar.

Meet The Author

Luke Brogan

Luke has been working in the IT industry for 11 years and has developed wide experience delivering operational and security excellence. Luke has worked across a variety of industries, ranging from application management and infrastructure to IT management. He has worked with industry-leading brands such as BMW Group, Nominet, Arrival, and BeyondTrust.
