Compromised: Proactive to Reactive

Chris Lindsey September 3, 2024

In this episode, discover strategies for automating security responses, managing vulnerabilities, and building a robust security team.

This episode of Secrets of AppSec Champions explores the transition from reactive to proactive application security. The discussion covers managing vulnerabilities, the importance of access logging practices, and the need for automation and skilled security professionals.

Guest: Phil Guimond - Principal Information Security Architect at Paramount

Host: Chris Lindsey

Key takeaways from this episode:

  • Proactive Security: Shift from reactive to proactive security by automating responses, staying current on dependencies, and actively identifying vulnerabilities.
  • Skilled Security Teams: Build a strong security team with programming expertise to close gaps and customize solutions.
  • Access Logging: Implement comprehensive access logging to detect compromises and understand security incidents.
  • Vulnerability Management: Prioritize active projects, address outdated libraries and container images, and manage technical debt.

This episode also touches on important considerations for:

  • Password Hygiene: Encourage strong, unique passwords to prevent account takeovers.
  • Third-Party Libraries: Regularly update and manage third-party libraries to minimize vulnerabilities.
  • Legacy Code: Address and manage legacy code to reduce security risks.
  • Internal Network Security: Implement proper segmentation and access controls to secure internal networks.
  • Cloud Security: Secure cloud environments by controlling access and regularly updating configurations.
  • Incident Response: Develop and practice incident response plans to effectively handle security breaches.

Intro:

Access logging is a really good way of detecting when we have a compromise. You could have a whole bunch of, say, 400 failed logins that suddenly become 200. That shows that these people were eventually able to log in.

And it shows that there was probably a brute force attempt, and you can find all kinds of problems that way. But you need logging. You need visibility before you can have a proactive information security program. Without visibility, you’re going to be constantly fighting fires.

You’re not going to be able to see where it’s coming from. You might be able to treat this cause or that cause, but you won’t be able to treat the root cause. It could be that somebody’s computer was hacked. A developer was hacked.

It could be that you included a third-party library that is pushing out your environment variables to a third-party server from an AJAX post request or something like that. There’s lots of different ways.

If you have no logging, you cannot find the root of the problem, and you’re just going to be reacting, reacting, reacting, and you’re going to be stuck.

Conversation:

Hello and welcome to Secrets of AppSec Champions. My name is Chris Lindsey, and today we’re going to be speaking with Phil Guimond.

Today’s conversation is going to be around being compromised and moving from a reactive to proactive application security program. Phil is the principal information security architect at Paramount.

Phil, please introduce yourself.

Hey. Thanks. So, yeah, I’m Phil Guimond. I’m a principal information security architect at Paramount, like you just said.

So, that’s the fancy way of saying I do a little bit of everything. So I work on application security, cloud security, penetration testing, digital forensics, and incident response. And, literally, I’ve started to dabble in artificial intelligence, machine learning, and stuff like that. I also do a lot of programming.

That’s exciting. AI is definitely something that’s big in today’s world, and it’s only going to get bigger. So welcome to the show. So let’s talk about being compromised.

When we were talking previously, kind of leading up to today’s show, we were talking about being compromised. What does that look like? When you’re in the middle of the compromise, what kind of steps can we look at, or what kind of things should we be thinking about trying to get out of it? Because your story is really going to be just absolutely amazing.

And it’s kind of a chicken and an egg thing. The more you try, the harder it becomes. And so, let’s start with that. Phil, want to share your story, go from there?

Yeah. Sure. So that’s a great question.

So when it comes to being compromised, there are different ways.

Sometimes it doesn’t affect your company directly. It could just affect your customer. Because a lot of times, customers everywhere in the industry, no matter where you are, whether you’re in media entertainment or Facebook, social media, or something like that, everybody is doing the same thing across the whole industry. Reusing passwords.

So, when people reuse their passwords, all it takes is somebody to go to Google and search for a breach compilation or a breach dump or something like that, and they download a list of all the usernames and passwords that people are using for a given account.

And what they do is basically attempt to log in to accounts that exist using that username and password combination.

In many cases, your brute force protection and stuff like that is not going to work against it because they’re just trying one set of credentials or maybe a few sets of credentials. So you can fly under the radar when they’re doing that. Of course, if a lot of different accounts are being attempted by a single IP address or a single range of IP addresses, that could be an indication that somebody is trying to compromise. But it could also be that you have a big building.

Right? And it could be a building full of people, employees. Like, maybe you have 30,000 employees in one building. There’s bound to be a lot of accounts trying to be logged into from that particular building.

They could be logged into their Netflix account, their personal emails, and stuff like that. So you really have to have context for what’s going on.

The Reactive Approach to Security Management

So, Phil, a reactive program, right, it feels like you’re just fighting fires, just one fire after another.

That’s actually pretty common in a lot of organizations. It’s pretty common across the industry where you’re basically just constantly fighting fires. You have all of these problems coming in, and you’re dealing with them in a very manual way. I think one of the worst examples of a reactive program that I’ve seen was where we spent the entire day fighting account takeovers.

So we would go and use some SQL queries, and then we would look for evidence that a user had their account taken over very recently. However, account takeovers are very complicated. They’re complicated because the problem is on the end-user side. The users are reusing passwords across all kinds of different websites.

So one of the ways we were detecting this was we had an overview in Grafana. It would show us how many logins we had: if we had a spike of logins, a spike of registration attempts, a spike of checks against the active email endpoint.

So let’s say you want to create a user account on a website. If you try to register an account that already exists, you hit that API endpoint, and during registration it would tell you this account is already taken.
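That "already taken" response is what makes enumeration possible. A minimal sketch of an enumeration-resistant registration endpoint, which responds identically whether or not the account exists (the `queue_email` helper here is hypothetical, standing in for a real mail queue):

```python
OUTBOX = []

def queue_email(address, message):
    # Hypothetical stand-in for a real mail queue; just records what would be sent.
    OUTBOX.append((address, message))

def register(email, existing_accounts):
    """Enumeration-resistant registration response (illustrative sketch only).

    Instead of answering "this account is already taken", return the same
    response either way and move the difference out-of-band, into email.
    """
    if email in existing_accounts:
        queue_email(email, "Someone tried to register with your address.")
    else:
        queue_email(email, "Confirm your new account.")
    # Identical response in both branches: nothing for an attacker to measure.
    return {"status": 200, "message": "Check your email to continue."}
```

The key property is that the HTTP-visible behavior carries no signal; only the legitimate owner of the mailbox learns which branch was taken.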

So attackers can actually abuse that to see which accounts are actually in use. And then they can attack all of the different accounts and attempt a password, a login attempt, password spraying, and stuff like that. So this is actually very common in the industry. People can just go to Google and download a breach compilation.

You search for breach compilation, paste dump, and you can get people’s usernames and passwords that they have been reusing for a long period of time. So these people are reusing passwords, and we would spend all day just finding the account takeovers, which were happening every day. So the users, the hackers, the attackers would just log in and take over these accounts, change the passwords, and drain the funds.

And they would just keep doing this over and over again.

It was pointless. So we were going through this phase where we had to manually look everything up in the SQL server. So we would open, like, a JetBrains product that allows you to query databases very easily.

We would run all these SQL queries on the database, and we would get a list of people who were potentially taken over, and it wasn’t necessarily 100 percent correct. So we would spend all day, sometimes from morning to night, solving this problem. And that would happen again the next day, and the next day.

And the next day. For 3 months.

And there were times when it was quiet and we were just fighting this. And after the first few times when this happened, I’m like, we need to find a way to automate some solutions to avoid having this happen constantly.

Strategies for Automating Security Responses

So a few of the ideas I came up with were using the have I been pwned API to see if they are known compromised credentials.
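The Have I Been Pwned check mentioned here could look roughly like the following sketch against the Pwned Passwords range API, which by design only ever sees the first five characters of the password’s SHA-1 hash:

```python
import hashlib
import urllib.request

def hibp_range_parts(password):
    """Split a password's SHA-1 hash for HIBP's k-anonymity range API.

    Only the 5-character prefix is sent to the service; the full hash
    (and the password itself) never leave your side.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def is_pwned(password):
    """Query the Pwned Passwords endpoint; response lines are 'SUFFIX:COUNT'.

    A matching suffix means this exact password has appeared in a breach.
    """
    prefix, suffix = hibp_range_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    return any(line.split(":")[0] == suffix for line in body.splitlines())
```

In practice you would call this during login or password change and force a reset when the credential is known-compromised, rather than waiting for the takeover.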

And another one, and this is the simplest one.

We just put CAPTCHA.

We put a Google CAPTCHA on the login. So they would actually have to go in and select all these pictures. And this is actually before Google came out with the really easy reCAPTCHA where you just click, I’m not a bot.

And you have to select all these squares. It would tire them out. But people were actually leaving the site when we tried implementing that.

They were leaving the product. They were not using it because they were tired of having to mess with the CAPTCHA stuff. The business made a decision to keep fighting fires constantly. So at the time, that was the best we could do.

But then shortly after that, Google came up with the reCAPTCHA that prevented that from being a problem. All you had to do was check a button. Until then, we were basically fighting fires 24/7 in a very manual way, so we tried building a lot of scripts and stuff to combat this.

And I basically put my foot down. I said, hey. Look. We are spending up to 8 hours a day trying to fix the customers’ own problems, problems they are causing themselves by reusing passwords.

So we’re resetting the customer’s password, and we’re sending them a reset password link in the email. But more than likely, the customer’s email was also compromised.

With the same password?

Yeah. We’re resetting it because people are reusing passwords across all services.

So that was really annoying. But, that’s one of the biggest ways to know that you’re in a very reactive instead of a proactive information security program.

Other ways would be let’s say you have no way to find out what you have across your app sec stack. Like, what third-party libraries do you have? If you don’t have a way to automatically find your third-party libraries, then when something like Log4j 2 comes out, you have to dig through all of your repositories in GitHub or whatever source control you use across everything, and you have to search for these specific files just to make sure that it’s not being used.

So you’ve got to focus on that.

And it’s very reactive. So in some ways, even some proactive information security programs can be a little bit reactive, and that’s okay.

But if everything is reactive, you’ve got big problems.

Well, and then I want to dig just a few minutes on your repository thing. So if I go in and I look at Log4j, I could have the files there. It just may not actually be tied to my application.

Now when I do a search on GitHub or Bitbucket or whatever source code management system you’re using, and I’m going through looking for Log4j, it’s possible that it may be showing up because somebody just uploaded it, but it’s not actually being utilized. And so now all of a sudden, you’re wasting time trying to open up the project. And again, the whole thing about being reactive is you’re spinning your wheels. You’re wasting cycles with no real benefit to your program.

I agree. I agree. So it could just be included, like you said, in some Java file or something, and it’s not being utilized. It’s not being called. But that doesn’t mean later down the road, a few minutes later, a few hours later, the developer can’t actually enable it, because they might just start using it. When they’re coding in Java, they’ll just write an import for some package name, and they may actually end up importing Log4j 2.

So that’s definitely a problem. I would definitely like to make sure that that is either excluded or upgraded. But like you said, you’d be wasting a lot of time chasing details on a lot of different products. So it’s important to focus on things that are actually running in production, because there were a lot of test projects in there.

Managing Legacy Code and Technical Debt

If it’s only running locally on your own system, it’s not really accessible to the outside world. So you don’t really need to worry about that too much unless you’re being attacked on some hotel network or something.

You’re right. Because with source code management systems, people upload all kinds of stuff in it. It could be a proof of concept program. It could be something someone just wants to try.

It could be something production. It could be something that has been deprecated and no longer being used for years. And so it’s one of those things where people in source code management systems don’t do a good job at cleaning up after themselves. Think about applications where people are writing things and they’re progressing forward.

You may have removed the usage or the need for a method or some aspect of an application, but you still left the code in there. And so with that dead code sitting there, do you know whether the code’s really being used or not? That goes against your security tech debt, and it makes it harder for your reactive program to become proactive, because you just have all this lingering stuff.

There’s a lot of lingering stuff at pretty much every organization out there. And I think it’s a big problem when they’re using those source code management tools that you’re talking about. You import pretty much everything. You import every single repository you have out there, but maybe only 10 percent of that or 15 percent is actually going to be in use and running in production.

The rest is just stuff people built a long time ago. I found an interesting way to deal with that is by building a tagging system. So the GitHub GraphQL API actually allows you to search for a list of repositories within an organization. And if you get a list of all the repositories, then you want to take a look at the commit history.

So one of the things you can do is check to see if anybody has actually committed to that repository in the last X number of days. For example, it could be 90. It could be 120. It could be 360 days, 365.

However you want to do it. But, in general, things that have not been committed to in a year or more are probably not active, so you can ignore that stuff. So what I did was basically I worked on building a tagging system that would basically mark a project as inactive if it hasn’t been updated in a long time.

It would still be there on the source control system. But you would not have to worry about the vulnerabilities right away. You would focus on the stuff that’s being actively developed.

Have you committed to this stuff in the last 60 or 90 days? Great. We’ll mark that as active. Has it been X number of days since it’s been committed to?

We’ll mark that as inactive. That way, you don’t have to focus on the vulnerabilities in all of those other packages, a list maybe a thousand times larger than your actual list of running projects in production. So that helps a lot too.
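The tagging system Phil describes could be sketched roughly like this; the query shape assumes GitHub’s standard GraphQL `organization.repositories` connection and its `pushedAt` field, and the classification itself is just a date comparison:

```python
from datetime import datetime, timedelta, timezone

# Shape of the GraphQL query mentioned above: list an organization's
# repositories 100 at a time, keeping only the name and last-push date.
REPOS_QUERY = """
query($org: String!, $cursor: String) {
  organization(login: $org) {
    repositories(first: 100, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes { name pushedAt }
    }
  }
}
"""

def tag_repo(pushed_at, threshold_days=90, now=None):
    """Mark a repo 'active' or 'inactive' from its last-push timestamp.

    pushed_at is an ISO-8601 string like GitHub's pushedAt field; the
    threshold (90, 120, 365 days...) is a policy choice, as in the episode.
    """
    now = now or datetime.now(timezone.utc)
    last_push = datetime.fromisoformat(pushed_at.replace("Z", "+00:00"))
    age = now - last_push
    return "active" if age <= timedelta(days=threshold_days) else "inactive"
```

When a Log4j-style event hits, you export only the "active" list and hand that to the developers, instead of triaging every repository ever created.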

It does, because knowing what’s active is important. Again, if you have a breach or you have something going on, you’re spinning your wheels. Where do I focus? Where are my priorities going to go? Say I have a Spring4Shell show up.

A vulnerability. Hey. We need to go identify these. And part of the problem is if you’re looking, as you were mentioning, through GitHub or your repository system, they’re going to show up.

And knowing what’s important, what’s active, what’s not active is absolutely key to knowing the prioritization of your problems.

Exactly. That’s a good point. In fact, I think it’s really important to focus only on the stuff that’s being actively run. So let’s say something like Log4j 2 happens again, and systems are being exploited everywhere. Well, if you have this tagging system to focus only on the active project, all you have to do is export that list and work with the developers to try to fix that. Or, depending on what kind of product you’re using, you may be able to fix that automatically.

However, sometimes there is no fix available.

But there’s something being actively exploited. So in that case, it’s important to have something like a web application firewall in place. I don’t really trust web application firewalls, to be honest with you. They’re like a less-than-ideal effort during zero-day events, especially active exploitation campaigns.

They can help a little bit against some of the script kiddies.

But in some cases, there have been a lot of cases, actually, where even though you have a web application firewall in place, stuff is getting through.

Yeah. Even simple SQL injection scans are getting through. So it’s important to stay as proactive as possible. Sometimes, though, you have to be reactive. It just shouldn’t be everything.

And we have got to choose our battles and figure out where to focus. I was thinking about your account takeover story. And the one thing that just kept reverberating in my mind: when you’re using the same password over and over for all your systems, the problem becomes, if your password’s compromised on your email as well, guess what? I go in and I can’t log in.

Okay. Well, I’ll go change my password. Guess what? That password is being sent to an account that is not you.

And so the problem becomes the attacker or the hacker is, hey. They’re on to me. They’re trying to reset their passwords, but they’ll never be able to get through.

And so it’s just kind of a cyclical thing. The other thing that you brought up that I really liked is your labeling idea of the repository because one of the things that could be extremely helpful is not just active or inactive applications, but you could also tag it as is this an internal application or an external-facing application.

You could also tag it as, is this a console-type application where something like a command injection or command execution doesn’t matter because you’re already on the box. Having those labels is extremely valuable and helpful, and I think that’s an excellent idea.

Internal Network Security Challenges

It’s not just the web application. It could be reactive internal network security.

You don’t really see that that much anymore because most people are, well, actually, if you have an office, there’s a lot of companies that have a flat network.

So if you’re in the office, there’s a flat network on there, and everybody’s sitting on the same network, and everybody is reachable.

And the reality is once you’re in that network, you can go east and west.

Not only can you go east or west, but there’s a lot of companies in today’s world that are using the cloud. They’re using AWS. They’re using Google Cloud, and that’s tied directly into the network. Pretty flat.

It’s still outside your four walls, but it’s still actually on your network. And so once you’ve compromised a box, or a password, or gained access for any reason, you can get much further. This did happen to a couple of companies, and I don’t want to share names: their credentials were compromised, the GitHub repository with all the source code was accessible, and attackers were able to pull it down. And guess what was part of that data, hardcoded in the files? The access credentials to get into their cloud environment.

Indeed. Indeed.

Yeah. So I’ve actually seen some pretty interesting incidents where sometimes you have some network engineers who they were under deadlines. They were under tight deadlines to get the network set up as soon as possible.

So they were just doing everything as quickly as they could. They weren’t thinking about segmentation, VLANs, client isolation, and stuff like that. So the problems in the network can actually affect your company, especially when you have an office.

Let me give you an example. Let’s say you have a guest Wi-Fi network. So you set up this guest Wi-Fi, and you give the guest a password to log in.

But that guest network happens to be part of your corporate network. You’ve got a problem, because that means somebody could log in through Wi-Fi and gain access to your networks.

They could probably gain access to your AWS accounts depending on whether they knew about the resources you have in the cloud or not. Because in a lot of these places, you’re going to be whitelisting the IP address used to access those resources.

And I have seen some pretty interesting things where attackers were actually able to gain access to some Hadoop clusters in the cloud simply by having their IP address whitelisted. Even if there was client isolation on that network, and, of course, you have to know where that stuff is, but all you have to do is an nmap scan. If it’s bridged onto your network, good game.

So even with client isolation, the outbound IP address is still the same as the whitelisted IP address for that particular company. You have the company users and the Wi-Fi users all presenting the same external-facing IP address, so they’re whitelisted.

And these folks were able to hit some Hadoop clusters. Way back in the day, there was an unauthenticated RCE endpoint.

RCE, remote code execution. So you could actually do command injection on an undocumented API endpoint to access the clusters. And then you can just spin up a bunch of nodes with crypto miners on them and all kinds of stuff, and it was a mess. And we found ourselves constantly responding to stuff like that.

And nobody knew where it was coming from, because there was no logging. That’s another thing. If you don’t have any logging, you probably have a very reactive information security program. You have no way to tell where these attacks are coming from. As a pen tester, I can tell you a million ways to get in through what I know about that network, but we don’t have any proof of it because there’s no logging.

So it’s really important to make sure that you log stuff. You log access attempts, access logs and stuff like that. And access logging is a really good way of detecting when you have a compromise. You can have a whole bunch of, say, 400 failed logins that suddenly become 200, and that shows that these people were eventually able to log in.

And it shows that there was probably a brute force attempt, and you can find all kinds of problems that way. But you need logging. You need visibility before you can have a proactive information security program. Without visibility, you’re going to be constantly fighting fires.

You’re not going to be able to see where it’s coming from. You might be able to treat this cause or that cause, but you won’t be able to treat the root cause. It could be that somebody’s computer was hacked. A developer was hacked.

It could be that you included a third-party library that is pushing out your environment variables to a third-party server from an AJAX post request or something like that. There’s lots of different ways. So if you have no logging, you cannot find the root of the problem, and you’re just going to be reacting, reacting, reacting, and you’re going to be stuck.
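The 400-to-200 login pattern described here is exactly the kind of shift a simple anomaly check over log counts can surface. A crude sketch, using a trailing z-score in place of a real monitoring stack like the Grafana dashboards mentioned earlier:

```python
import statistics

def login_anomalies(hourly_counts, window=24, z_threshold=3.0):
    """Flag hours whose login count deviates sharply from the trailing mean.

    hourly_counts is a list of (failed) login counts per hour. Each point
    is compared against the mean and spread of the previous `window` hours;
    a large |z| marks either a spike (brute force starting) or a sudden
    drop (attempts succeeding and turning into real logins).
    """
    flagged = []
    for i in range(window, len(hourly_counts)):
        recent = hourly_counts[i - window:i]
        mean = statistics.mean(recent)
        spread = statistics.pstdev(recent) or 1.0  # avoid divide-by-zero on flat data
        z = (hourly_counts[i] - mean) / spread
        if abs(z) >= z_threshold:
            flagged.append(i)
    return flagged
```

Real alerting would run continuously on live metrics with seasonality handling; the point is only that the signal Phil describes is mechanically detectable, not something you should be finding by hand in SQL.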

Proactive vs Reactive Security Measures

What’s important is trying to become as proactive as possible.

Well, and something I want to throw in there too about the logs is it’s absolutely vital that your logs are in a position where they can’t be tampered with. Because if I have access to your network, the first thing I’m going to do is go have a little fun, do a little sightseeing, see what’s going on on your network, and then I’m going to clear my path.

Hey. Look. I found the logs. I found where they’re being stored. I might just wipe them altogether. And if you’re just doing basic authentication and basic stuff, that tells me as an attacker or a hacker that everything else within your environment is probably set to default or basic.

The knowledge level of whoever set it up was just basic. And when you have someone that’s just basic, most likely, the logs can be tampered with. They can be edited. They can be modified. They could be just simply deleted.

Yeah. I’m happy you brought that up. I’ve actually done that many times. And, yeah, I’ve done it and I’ve seen it happen.

One of the more common ways you see that is on some of the old virtual machines and servers that have their access logs stored under /var/log, www, HTTP, whatever. If you have the right permissions, if you were able to escalate to root, and in many cases that was very easy back in the day because everybody would just set up passwordless sudo on the system, you can just do sudo echo test to see if you actually have the permissions without entering a password.

And then you can actually just go in and edit the logs. Let’s say all of the logs are going to show the internal IP address that’s hitting the server and doing all this stuff. Well, you can just omit those entries in their entirety through grep or something, redirect that, pipe it out, and replace the log.

As long as it’s not locked.
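The log-editing trick described above amounts to a single filter pass, which is exactly why on-host logs alone can’t be trusted. A sketch of what the attacker’s grep-and-replace does:

```python
def scrub_log(lines, attacker_ip):
    """Drop every log line mentioning one address.

    This is the Python equivalent of the `grep -v` pipeline described
    above: filter out the incriminating entries and overwrite the file.
    The defense is shipping logs off-host as they are written (syslog
    forwarding, append-only storage), so the copy on the compromised
    box is never the only copy.
    """
    return [line for line in lines if attacker_ip not in line]
```

Anyone with root and write access to the log directory can do this in seconds, leaving no gap that a casual review would notice.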

Well, and a lot of people will run these as local admin on a Windows box or run them with escalated privileges on a Linux box. And people don’t think about least privilege. It’s one of those things going back. It’s default.

I don’t know how to get this to run because it fails. Oh, I’ll just escalate it to a local admin or escalate the privileges on Linux. Hey. Guess what?

It’s working. I’m running as root, but guess what? I’m in. It’s working.

Yeah. That’s cool. That happens all the time. People like to run things as administrator.

I have seen some pretty crazy things out there. Windows has, like, address space layout randomization, data execution prevention, and stuff like that. Now if you’ve ever played certain video games on Steam, there have been occasions where the games just keep crashing.

They just keep crashing over and over again. It’s just a problem with the game’s code. So on the forums, people are telling you to disable data execution prevention and ASLR and stuff like that. They tell you to go in and disable all of this stuff, and people do that.

And they forget that they’ve left that off. So over the years, the machine just becomes increasingly more vulnerable as new vulnerabilities are discovered.

Maybe there’s a vulnerability in the game itself, but they’re basically telling people to disable security protection just so they can play a game.

And that’s still kind of scary.

Well, think about it this way. What if somebody actually compromises or becomes a developer within that game and puts malicious code in there? And what if we know that this is a game that a lot of network admins love to play?

They may be playing it on their work laptop, sitting in the office on the IP address that can access their clusters. I’ve seen it happen. So the other thing too is you had brought up dependency management. And with dependency management, think about what’s going on right now with the NVD.

Dependency Management in Security

The NVD is kind of in a paused state. And even though it’s in a paused state, vulnerabilities are still being discovered. They’re still being sent into the NVD.

Hey. There’s this vulnerability.

Due diligence is still working. And so when people are identifying vulnerabilities in third-party tools or libraries, they’re doing the due diligence. They’re sharing the information with the developers. The developers are still fixing it.

The problem is that some of the free tools you’re using for vulnerability detection are just looking at the NVD. And so you’re going to have what’s called a false negative, where you actually have the problem, but your tool doesn’t report it because of the missing data. The nice thing about some of the tools out there is that the better vendors use multiple sources, so the NVD problem is no longer an issue.

But there’s actually a better way to address it, and that is just update your dependencies. Because if you’re sitting there and you’re focused only on, hey. I’ve got this dependency that needs to be updated because of something, you’re already in a reactive program. If you’re proactive, you’re actually focused on keeping everything up to date.

If you’re always staying current on your dependencies, what’s going to happen when the NVD does say, guess what? There’s a vulnerability. It’s on a version that you’ve already passed on. You’re already running the fix.
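Staying current, as described above, comes down to continuously comparing installed versions against the latest releases. A naive sketch; real tools like pip, Dependabot, or Renovate handle pre-releases, epochs, and version ranges properly:

```python
def outdated(deps):
    """List dependencies whose installed version trails the latest release.

    deps maps a package name to (installed_version, latest_version).
    Comparison is a naive dot-separated numeric tuple compare, enough
    to illustrate the idea but not a real version-spec parser.
    """
    def key(version):
        return tuple(int(part) for part in version.split("."))
    return {name: versions for name, versions in deps.items()
            if key(versions[0]) < key(versions[1])}
```

Run on a schedule and fed into the build, a report like this turns "the NVD just published an advisory" into "we already upgraded past that version months ago".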

One of the things I have found works very well for vulnerability management is this. Like you said, we should always be focused on keeping the dependencies as up to date as possible, as often as we can. And if we can’t update them because the project was abandoned years ago, and there are a lot of projects like that, maybe we need to find a different one.

But that might not be feasible for a lot of different projects out there. So what you would do is basically fork one of those dependencies and make your own changes to it. That’s something you have to do occasionally. I don’t see it happening too often.

You brought up a good point about the vulnerabilities. Now let’s imagine that you have a project that has 4,000 vulnerabilities. That can quickly overwhelm you if you have to fix all of them.

But take a look at the classes of vulnerabilities that exist there: outdated libraries, outdated container images, and stuff like that. Or you have some problems with your infrastructure as code, Terraform modules, or whatever. In particular, those container images and third-party libraries are where you’re going to find almost all of the vulnerabilities across your infrastructure.

And if you’ve got 4,000 vulnerabilities in one container image, how do you fix that? Well, there’s probably a newer container image where, if you upgrade to either the latest version or a slim variant, you basically eliminate several thousand vulnerabilities in one go. And for a lot of these products, it’s the open-source libraries. Let’s say your open-source libraries are maybe one or two years out of date. You haven’t updated them in quite a while. That’s very common.

You might find that there’s maybe two or three different packages that you’ve included that are vulnerable, and they have hundreds of vulnerabilities on their own. So simply upgrading that to the latest version gets rid of all of those, and you don’t have to focus on them.

Unfortunately, sometimes you can’t upgrade right away because some of those libraries have changes that will break your build. So you have to do regression testing, QA testing, and all of that just to see if it works. Sometimes those projects have to be rolled back, and that’s when things get a little bit more reactive. But in general, these days, a lot of third-party library developers are good at avoiding breaking changes. There might be some functions that are no longer supported, and you have to move to the new set of functions included in the library. That takes time away from the developers. But in general, when you upgrade those libraries, you knock off thousands of vulnerabilities.

Same with the container images, you knock off thousands of them.

So that makes those vulnerabilities a lot easier to manage.

And it gets even easier with the tagging system that I was mentioning earlier: if a project hasn’t been updated in, say, a year, it’s probably not something we’re actively running, or it’s a long-running piece of code that’s basically considered perfect. I’ve never seen perfect code, so let me know if you find any.
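That tagging idea can be sketched in a few lines. The one-year threshold, project names, and dates below are illustrative, not from any particular tool:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=365)  # illustrative threshold

def tag_stale_projects(projects, now=None):
    """Tag each project as 'stale' or 'active' by its last update.

    `projects` maps a project name to its last-commit timestamp; stale
    projects can then be deprioritized in vulnerability triage.
    """
    now = now or datetime.now(timezone.utc)
    return {name: ("stale" if now - last_update > STALE_AFTER else "active")
            for name, last_update in projects.items()}

now = datetime(2024, 9, 3, tzinfo=timezone.utc)
projects = {
    "checkout-api": datetime(2024, 8, 30, tzinfo=timezone.utc),
    "legacy-reports": datetime(2021, 2, 1, tzinfo=timezone.utc),
}
print(tag_stale_projects(projects, now=now))
```

In practice the timestamps would come from your source-control API, and the resulting tags would feed whatever system prioritizes findings.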

Well, I do have stored procedures and some code that I wrote back in the early 2000s still running in production today. I would be afraid to see what it looks like.

I have grown a little bit in my abilities since those days. You bring up a really good point when you’re talking about the images.

And one of the things that we had mentioned earlier was Kubernetes and pods and the ability to spin up. One of the beautiful things about running Kubernetes or Docker is that when you have a good build process in your environment, where you’re pulling a good image and you’re actually able to upgrade, when you’ve created a good foundation to work from, a good structure, you can just say, look, those pods, those images are old, and switch them out in real time with updated versions without any outage, any downtime. It’s almost like changing a flat tire while going down the highway.

And you just spin up new pods, new images, new versions, and you just destroy the old. And if you put a time to live that’s reasonable, then all of a sudden you’re always rotating your images. So if you do have an image that does get compromised for any reason, it’s short-lived. And depending on what you’re doing, maybe that’s half an hour, an hour.

It could just be a day. Having pods or environments that run for days, weeks, months is just bad design. That’s absolutely great.

The other thing you’re dealing with in a reactive program is gaps.

Because when I think about the different leaders I’ve talked to over the years and their programs, if you’re reactive, you don’t know what your gaps are.

Good luck.

But as you start putting it together and you start putting your program in a position to be successful, now all of a sudden, your gaps are becoming more and more known. We have this environment. We’re not doing anything on it. We’re not monitoring it.

We’re not doing anything. We should probably put some effort into that. And from a development standpoint, as we’re looking at our dependency management, as we’re starting to roll more into a proactive approach where we’re keeping things current. We’re looking at the images.

The Role of Tools in Development

We’re using tools. Tools are great as an assistant to the developer. A tool is not going to solve your problems. A tool will bring your problems to light.

And it’s the developers, and the training you give those developers, that really makes the difference. If you have developers who are not security-minded, the problem becomes that those developers are going to keep writing insecure code. And the tools are going to show, hey, you went from 12 issues to 20 issues to 30 issues to 40. You may be focused on, hey, fix this one or fix that one. But if they’re duplicating the code on their side, they’re only duplicating and extending the problem.

And going back to the compromise that you were talking about earlier where you may have a library that’s compromising you. You may have removed the code to use that library, but somebody grabbed old dead code that’s still in the system and brought it back to life.

That’s a good point. I like the mention of gaps because when you’re in a reactive program, you will find gaps everywhere. You’ll be finding gaps constantly. It’s important to close as many gaps as you can within the budget you have. Of course, sometimes you’ll have to make some sacrifices because no information security program has a perfect budget. So you have to make do with what you’ve got.

But I have found that it pays to focus on eliminating as many gaps as you can, automating things, and tying them into a single dashboard if you can. That single pane of glass that everybody likes to talk about is actually a pain in the ass.

Yeah.

But as long as the security products that you’re using have a good API, you can get all of the information you see in the dashboard from the API as well. Then you can build your own dashboard and dump it all into one place so that everybody can see it. And there are some tools coming up now that do that for you. I won’t discuss which ones they are, but they seem to be doing a pretty good job.
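A sketch of what "build your own dashboard from the APIs" usually amounts to: pulling findings from each product and normalizing them into one shape. The response formats below are invented stand-ins for whatever each vendor’s API actually returns; the normalization layer is the point.

```python
# Normalize findings from two hypothetical security products into one list.
def normalize(source, raw_findings, get_severity, get_title):
    """Map a product-specific finding shape onto a common schema."""
    return [{"source": source,
             "severity": get_severity(f),
             "title": get_title(f)}
            for f in raw_findings]

# Pretend these came back from each product's REST API.
sca_response = [{"cvss": 9.8, "advisory": "CVE-2021-44228 in log4j-core"}]
sast_response = [{"level": "high", "rule": "SQL injection in OrderDao"}]

dashboard = (
    normalize("sca", sca_response,
              lambda f: "critical" if f["cvss"] >= 9 else "high",
              lambda f: f["advisory"])
    + normalize("sast", sast_response,
                lambda f: f["level"],
                lambda f: f["rule"])
)
for finding in dashboard:
    print(finding)
```

In a real setup, `sca_response` and `sast_response` would be the parsed JSON from each vendor’s API, and `dashboard` would feed whatever view everybody looks at.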

One thing you’ll find in an environment with tons of gaps, tons of technical debt, tons of security debt: you’re going to have high turnover on your staff.

And there are a lot of leaders out there. Here’s the thing that’s really hit home for me this week: a lot of leadership talk about having only two or three security people running the whole program.

And when you’re in a reactive fight, you can only go so long. If you’re stuck on something like your compromise story from earlier, spending one week, two weeks, three weeks, four weeks on the same problem, and you’re on that treadmill that never ends, at some point you’re just like, I’m done. I’m leaving. Somebody else can come clean up this mess.

Balancing Team Size and Effectiveness

Yeah. And if you have a smaller team and you have a team of go-getters, people who want to solve problems, it’s much easier to develop automated solutions to get around that kind of thing.

And that can help you build a less reactive program. But, of course, you have to have a balance. I have seen information security teams that just get too big, and nobody knows what the other person is doing. Everybody runs around in circles, and nothing gets done. I have seen that happen in the past. And I have also seen much smaller teams contribute a lot and get a lot more done.

The interesting thing about the programs that I ran and that you run: as a developer, the beautiful thing is, if I have a gap, if I have a need, I’ll just code it.

I’ll make it work, because knowing APIs, knowing development, you can tie things together and just make it work. Security vendors know what they think customers want, and they go after that. They feel like they’ve got a pulse on what’s going on. But, again, there are a lot of edge cases that you’re going to run into in your program.

And with those edge cases, you can code around them, hit APIs, and pull it all together. For me, one of the things that I did was send an email to every developer on commit, so they saw the security findings for what they had just worked on right in their inbox. There’s no excuse whatsoever.
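An email-on-commit hook like the one described can be sketched with the standard library. The addresses, commit hash, and findings below are made up; the real findings would come from whatever scanner runs in the commit hook or CI pipeline.

```python
from email.message import EmailMessage

def commit_report_email(author_email, commit_sha, findings):
    """Build the per-commit security report email.

    `findings` is a list of (severity, description) tuples produced by
    the scanner that ran against the commit.
    """
    msg = EmailMessage()
    msg["To"] = author_email
    msg["From"] = "appsec-bot@example.com"  # hypothetical sender
    msg["Subject"] = f"Security results for commit {commit_sha[:8]}"
    lines = [f"[{sev.upper()}] {desc}" for sev, desc in findings] \
        or ["No findings. Nice work."]
    msg.set_content("\n".join(lines))
    return msg

msg = commit_report_email("dev@example.com", "a1b2c3d4e5f6",
                          [("high", "Hardcoded AWS key in config.py")])
print(msg["Subject"])
# In CI you would hand `msg` to smtplib.SMTP(...).send_message(msg).
```

The payoff is the one described above: the developer sees the security state of exactly what they just committed, with no dashboard to remember to visit.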

Some of the tools out there do that; some don’t.

And for bringing tools together into, gosh, I hate to say the phrase, a single pane of glass.

It’s one of those things where it’s nice to see, at a high level on a dashboard, what your security posture looks like. Because then you can pinpoint and say, I have all these critical findings on this application, and this application is external facing.

This may be an API that your mobile device is talking to all the time.

Those are the things you have got to work on. But if an application is internal, even though a finding is critical, you can use some of these tools to cut down the chasing by saying: this is an internal tool, this is something that’s only used on the command line. A lot of the vulnerabilities being detected are real, but they’re not going to matter in my context.

The Need for Programming Skills in Security Teams

Yeah. I’m glad you brought that up. I think it’s really important for information security teams to have actual programmers on the team, security software engineers.

Not everybody will work in software, but when you have a smaller team and a smaller budget, hiring security professionals who can actually write code and build those solutions is game-changing. Because otherwise, you have these tools that you buy, and once a tool can’t do what the team needs, they get stuck with it, probably in a bunch of meetings with the vendor about, oh, you don’t have this feature, we need this feature.

We need that feature. But if the vendor has a good API, you can just call that yourself.

Exactly. I ran into that multiple times with my program. And having the API endpoints is life-changing because, to your point, if something doesn’t exist within the tool, that’s fine. It does in the API. I can do something about it.

That’s a good point. I’m still doing that quite often. I’m running into a bunch of use cases that are just not covered by some of the security products that I use. And it’s funny, because the security product is supposed to make everybody’s lives easier. But in some cases, it makes things harder, a lot harder, not just for the developers but for the information security team, because it doesn’t have the features that you need. So you find yourself constantly building those features yourself.

It’s just the way it works in this industry most of the time.

No, it’s true. And finding a good application security director, a leader who understands application security, is hard to do, and it’s vital. So companies that have the right people, the good people, on their staff need to do what they can to keep them on board.

Because the board of directors may be constantly pushing back, or the directors you’re underneath may be pushing you all the time. There are not many good people out there, and when you find somebody that’s good, you have got to hold on to them.

Well, Phil, we’ve talked a lot about a lot of things, and we’ve got a lot of content for our show.

I think what I’m going to do if it’s okay, unless there’s a real big pressing topic you want to hit, I’d like to go ahead and conclude us here simply because you and I could probably talk for hours. And our listeners, as much as I bet they’re enjoying this conversation, are probably looking at the clock going, it’s time for lunch, or I’ve got to hop off this thing because I need to go eat.

Yeah.

Oh, no. No. No. So, Phil, I appreciate your time today. I appreciate you talking about this.

It’s such a deep topic, and there’s so much to it, from the account takeovers to running in circles, trying to close the gaps, trying to get things to work well and to just have a good program. And, again, good programs have developers on them. They have people who care, who are passionate, and you’re one of those guys. Absolutely.

I can see the passion and the fact that you care. And so, Phil, thank you for coming on today’s show.

Thank you so much for having me. I had a great time.

Thank you so much for joining us today on Secrets of AppSec Champions. If you found today’s information valuable, hit that subscribe button on Apple Podcasts, Spotify, or wherever you’re listening to today’s episode.

Ratings and reviews are like gold for us. So if you’re feeling generous, please leave us a kind word as it helps others find our show. Until next time. Take care.