Right now, vulnerability scanners are the automated technology of choice for organizations of all sizes looking to detect security flaws and misconfigurations. In theory, these tools provide quick, cost-effective and scalable ways for the good guys to keep pace with the bad guys intent on compromising your software and systems.
The reality, however, is that many CIOs and CTOs are now drowning in an ocean of scanner findings. In fact, they’re so swamped it’s affecting their capacity to actually get vulnerabilities fixed. Put simply, they are suffering from vulnerability fatigue, according to Qualitest Senior Vice President and Global Head of Cyber Security Uri Bar-El.
Overwhelmed by the sheer volume of vulnerability data their scanners are pumping out, CTOs and CIOs are struggling to prioritize, manage and mitigate their cyber security issues.
“Today, a typical organization has around 10 to 15 technologies whose purpose is to find vulnerabilities. These technologies come up with hundreds or thousands of vulnerabilities, and organizations just don’t know what to do with all these vulnerabilities anymore,” says Uri.
Why? There are two problems.
“The problem today is that whoever owns the process of finding vulnerabilities does not own the process of mitigating them, which creates misalignment,” says Uri.
He cites vulnerabilities in the source code repository as an example and says, “If we are scanning the code repository, the security function in the organization will own the scanning, they will own the problem, but they will not own the solution.”
In other words, the security function will get a set of scanning results. But before they can work on any issue according to its level of risk, they’ll have to look around the organization, identify who should own each problem, and get that owner engaged in mitigating it.
The second problem is the lack of standardization between scanning tools, whereby each tool uses its own metrics. Some categorize issues as high, medium or low risk; some use a scale of 1-10; while yet others add weighting to prioritize certain metrics over others. This leads to a lack of data normalization.
“Each tool is an island,” says Uri. “So, when an organization gets a set of results, and again, these are hundreds or thousands of results, there is no way for the organization to compare apple with apple and know the real risk level and prioritize the mitigation process.”
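The normalization step Uri describes can be sketched in a few lines. This is a minimal, hypothetical example, not any vendor’s actual approach: the tool names, scale labels and mapping values are all illustrative assumptions, chosen only to show how findings on different scales might be brought onto one common 0-10 risk value so they can be compared apple with apple.

```python
# Hypothetical normalization of risk scores from different scanners onto a
# common 0-10 scale. Tool names, scale labels and mapping values are
# illustrative assumptions, not drawn from any real product.

SEVERITY_MAP = {"low": 2.5, "medium": 5.0, "high": 7.5, "critical": 10.0}

def normalize(finding: dict) -> float:
    """Convert a raw finding's score into a common 0-10 risk value."""
    scale = finding["scale"]
    score = finding["score"]
    if scale == "severity-label":   # tools that report "high", "medium", "low"
        return SEVERITY_MAP[score.lower()]
    if scale == "1-10":             # tools already scoring on a 1-10 scale
        return float(score)
    if scale == "0-100":            # tools using a percentage-style score
        return score / 10.0
    raise ValueError(f"unknown scale: {scale}")

findings = [
    {"tool": "scanner_a", "scale": "severity-label", "score": "High"},
    {"tool": "scanner_b", "scale": "1-10", "score": 9},
    {"tool": "scanner_c", "scale": "0-100", "score": 62},
]

# Once normalized, results from all tools sort into one prioritized list.
normalized = sorted((normalize(f) for f in findings), reverse=True)
print(normalized)  # highest risk first
```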
Each tool being distinct creates further difficulties around duplication of vulnerabilities and tools not talking to one another. “Imagine you have 100 servers owned by different teams where the problem is that the servers haven’t been patched,” says Uri. “Your scanning tools will most likely report upwards of 100 vulnerabilities across different functions in the organization. But the solution is the same solution: patch the servers.”
Using current methods, most organizations would need to contact each team individually and tell them what they need to do in this scenario. “If there was a way to aggregate all of these things and correlate between findings, we could have said, ‘Right, let’s fix this problem once, and it will fix it across the board’,” says Uri.
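Uri’s 100-servers scenario boils down to correlating findings by their shared root cause. The sketch below assumes a simplified finding shape (host, issue, team) purely for illustration; the point is that grouping by root cause collapses a hundred scanner findings into one remediation action.

```python
from collections import defaultdict

# Hypothetical findings: 100 servers, owned by different teams, all flagged
# for the same missing patch. The data shape is an illustrative assumption.
findings = [
    {"host": f"server-{i:03d}", "issue": "missing-patch:openssl-update",
     "team": f"team-{i % 4}"}
    for i in range(100)
]

# Correlate by root cause so one fix covers every affected host.
by_root_cause = defaultdict(list)
for f in findings:
    by_root_cause[f["issue"]].append(f["host"])

for issue, hosts in by_root_cause.items():
    print(f"{issue}: 1 fix covers {len(hosts)} findings")
```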
These problems have been weighing on Uri’s mind for some time. But good news! There’s a solution and he’s building it right now.
This new method takes an orchestrated three-pronged approach embracing people, processes and technology. Better still, it’s customized to your individual set-up and security policies.
At its core is an automated and intelligent central engine, or brain. One side of this brain inputs the findings from your security tools and collates and normalizes this data to get rid of duplicates, determine vulnerabilities and assign a risk level.
On the other side, the brain then works out a fix and identifies who owns a particular problem, before spitting out the remediation activities required. This is performed using your existing ticketing system – for example, Jira – thereby sending the problem and its solution directly to the right owner.
“The engine is also able to keep monitoring the mitigation process, so we know if the developer has fixed it or not. And we know that without scanning the code again, so we know continuously the risk level of each part of the organization,” says Uri.
And because this central engine understands how to dispatch, there’s no need for a dedicated team to manage or control it.
Fixing the problem of vulnerability fatigue and finding a solution is not just about technology. It also requires people and processes.
“The people part is getting problems to the right owners and them having their counterparts in cyber security help them where needed,” says Uri.
And the processes? “So, for example, a service-level agreement between the developer teams and the cyber function that says any critical vulnerability will be mitigated within 10 hours, and three high vulnerabilities will be fixed within each sprint. That way teams know how much time to allocate to security in each sprint,” he says.
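The SLA Uri describes is concrete enough to express as simple checks. The thresholds below come straight from his example (criticals mitigated within 10 hours, three highs per sprint); the function names and sample timestamps are illustrative.

```python
from datetime import datetime, timedelta

# SLA thresholds taken from the example policy in the text; the rest of
# this sketch (names, sample data) is illustrative.
CRITICAL_SLA = timedelta(hours=10)
HIGHS_PER_SPRINT = 3

def critical_within_sla(opened: datetime, closed: datetime) -> bool:
    """Was a critical vulnerability mitigated within the agreed window?"""
    return closed - opened <= CRITICAL_SLA

def sprint_meets_high_quota(fixed_highs: int) -> bool:
    """Did the sprint fix the agreed number of high vulnerabilities?"""
    return fixed_highs >= HIGHS_PER_SPRINT

opened = datetime(2024, 1, 1, 9, 0)
print(critical_within_sla(opened, opened + timedelta(hours=8)))  # True
print(sprint_meets_high_quota(2))                                # False
```

Checks like these give developer teams a predictable security budget per sprint, which is exactly the point of agreeing the SLA up front.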
This novel approach will see a shake-up in people’s roles, with cyber security defining the what and different developer teams and business functions the how.
“It’s cyber security who says we need to mitigate every critical finding within a day, or we need to scan every piece of code before it goes live,” says Uri. “Then it’s up to the developers to say how this should be done, and which tools or what processes should be used.”
This elevates security testing from a standalone activity that is done manually, as an afterthought or not at all. “It’s decentralizing and democratizing security so that it becomes the role of each developer and each function in the organization,” says Uri.
Pushing responsibility and accountability for security to every developer in your organization goes back to the basics of writing good code and preventing problems before they occur.
“We are telling everyone in the organization that in order to do their job correctly, good code needs to be secure code as well,” says Uri. “It’s not something that we will test as an afterthought. It’s not even something that we will test very quickly. It is something that needs to be done inherently.”
Indeed, this makes this new approach to cyber security one step beyond shifting left. “What we’re doing here is the extreme of shift left where we don’t even need to shift anything, we’re just embedding it,” says Uri.
In short, there’s work to be done. “I think organizations will need to have some time to educate themselves on this and to implement it slowly but surely. It’s not a swift change the way I see it,” says Uri.
This incremental change will involve structural reorganization and a cultural shift by those dealing with security day to day. The changes will also need visible backing from top management.
One thing is clear though, rather than let organizations sink under the weight of their scanned vulnerabilities, cyber security needs to steer people in a new direction. “Helping our clients not only discover vulnerabilities, but also actually deal with them, is our next step forward in this evolution,” says Uri.