

The Three Legged Stool Of Vulnerability Management

Don't Fall Off The Stool

When I developed the course "Advanced Vulnerability Scanning Techniques Using Nessus", I wanted to cover some of the trade-offs we make when we perform vulnerability scans using different configurations. Nessus creator Renaud Deraison helped point out that it comes down to three factors: speed, intrusiveness and comprehensiveness. These three factors proved extremely important throughout the class, and I realized that they must be taken into consideration for both vulnerability scanning and vulnerability management.

"Vulnerability scanning is a balance between speed, intrusiveness and comprehensiveness."

I think of these factors as a three-legged stool (and yes, I am borrowing from PaulDotCom's Larry Pesce, who uses this analogy to describe security in general). Each of the three legs supports the stool, so if you take one leg away, the stool (and perhaps even the person sitting on it) will fall. For example, without speed, your scan may take days or even weeks to complete. Without intrusiveness, you may miss critical information about the severity of a vulnerability, and without comprehensiveness you may miss a vulnerability altogether.

Bringing Balance to the Force, er, Scan

Let’s dive into each of these three factors and talk about the pros and cons of each one:

  • Speed - We live in a fast-paced world, and while many may proclaim that we need to "slow down", there is merit to speed, especially for vulnerability scanning. For example, if you perform a regular vulnerability scan of your network each week and it takes two weeks to complete, you will miss vulnerabilities present in a large portion of your network. Another good case for speed is incident response: if systems in your environment have been compromised and analysis has determined which vulnerability was exploited, the next logical step is to search for that vulnerability across the entire enterprise. Here speed is critical, because attackers may exploit the vulnerability elsewhere before you are able to remediate the problem. That said, speed isn't always all it's cracked up to be. To gain performance, you trade away the other two factors: a fast scan is often not as thorough and will skip certain areas of testing to save time. Your scan may finish quickly, but it may not be as complete as you think. Knowing that your scan might be missing vulnerabilities is still valuable, because you can go back with subsequent scans (or other methods, such as passive vulnerability scanning) to fill in the gaps. (A sketch of running a fast scan and a thorough scan side by side follows this list.)

  • Intrusiveness - When performing vulnerability analysis with automated tools, I often wonder, "just how far will it go to determine if a host is vulnerable?" Obviously, as a tester, I want it to go to great lengths and try as hard as possible to verify that a vulnerability is present on a system. However, that approach can have disastrous effects. In the case of a traditional remote buffer overflow, an aggressive check may cause the service or the host to become unresponsive or crash. Many people become frustrated with automated tools when a finding even hints at a false positive. What you have to keep in mind is that if an automated check crashes a service 80% of the time, and you run it against an enterprise with 100,000 or more hosts where 80% of those hosts are vulnerable, you have a self-inflicted problem on your hands that could have easily been avoided (a quick back-of-the-envelope calculation of those numbers follows this list). On the flip side, you don't want the report to be riddled with false positives against any one host because the scanner was not thorough enough. Intrusiveness is itself a balancing act: vulnerability checks must walk the line between availability and accuracy. I believe this is something Nessus has continually improved over time. In my own experience, I have no problem letting Nessus run with "safe checks" enabled against a network I am scanning for the first time; the results almost always strike a balance between not crashing anything and being accurate. I say "almost always" because no matter how unintrusive the scanner is, there is always a case where a service could crash simply because it received an unexpected packet. In that situation, passive vulnerability scanning can really help, as it identifies vulnerabilities without sending any packets to the target systems and is therefore as unintrusive as possible.
  • Comprehensiveness - When teaching vulnerability scanning, I like to ask my students the following question: "What is the difference between false positives and false negatives?" I usually get many great answers, but the one I am looking for is this: false positives are something you see, and false negatives are something you don't. Being comprehensive means finding as many potential risk areas as you can, and the biggest price you pay for comprehensiveness is speed. For example, in Nessus you can enable "Thorough tests (slow)", which causes Nessus to try harder at the cost of a longer scan. You can also tell Nessus to fuzz all discovered web applications, again at the cost of speed. Nessus will even port scan all 65,535 ports over both TCP and UDP if you configure it to do so; this is very comprehensive, but speed is greatly impacted. One more area worth mentioning under comprehensiveness is the location of a particular vulnerability. Most software ships with a default configuration and is installed in a default location, but there are cases (as with web applications) where the software can be installed somewhere else. Having your vulnerability scanner check all possible locations for a given vulnerability, such as in the registry, takes longer, but the end result is that vulnerabilities you might never have known about get reported.
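To make the intrusiveness example concrete, here is a quick back-of-the-envelope calculation using the same numbers from that bullet. The host count, vulnerable fraction and crash rate are illustrative figures, not measurements:

```python
# Rough estimate of self-inflicted outages from an unsafe vulnerability check,
# using the illustrative numbers from the "Intrusiveness" bullet above.

total_hosts = 100_000        # hosts in the enterprise
vulnerable_fraction = 0.80   # share of hosts running the vulnerable service
crash_probability = 0.80     # chance an aggressive check crashes that service

expected_crashes = total_hosts * vulnerable_fraction * crash_probability
print(f"Expected crashed services: {expected_crashes:,.0f}")  # roughly 64,000
```

Roughly 64,000 crashed services from a single scan is exactly the kind of self-inflicted outage that running with "safe checks" enabled is meant to prevent.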
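And here is a minimal sketch of what balancing speed against comprehensiveness can look like in practice: one scan tuned for speed and one for thoroughness, launched against the same targets through the Nessus REST API. The scanner URL, API keys, template UUID, policy names and policy IDs are all placeholders, and request and response fields can differ between Nessus versions, so treat this as an outline rather than a copy-and-paste script:

```python
# A sketch of launching a "fast" and a "thorough" scan of the same targets via
# the Nessus REST API. All URLs, keys, UUIDs and IDs below are placeholders.
import requests

NESSUS_URL = "https://nessus.example.com:8834"   # placeholder scanner address
HEADERS = {
    # API keys are generated per user in the Nessus web interface
    "X-ApiKeys": "accessKey=YOUR_ACCESS_KEY; secretKey=YOUR_SECRET_KEY",
    "Content-Type": "application/json",
}
TARGETS = "192.168.0.0/24"                       # placeholder target range

# Hypothetical pre-built policies: a quick triage policy (safe checks, default
# port range) and a thorough one (all TCP/UDP ports, "Thorough tests" enabled).
POLICIES = {
    "weekly-fast-triage": 101,      # placeholder policy ID
    "monthly-thorough-audit": 102,  # placeholder policy ID
}

def create_and_launch(name: str, policy_id: int) -> int:
    """Create a scan bound to an existing policy, then launch it."""
    resp = requests.post(
        f"{NESSUS_URL}/scans",
        headers=HEADERS,
        json={
            "uuid": "SCAN_TEMPLATE_UUID",   # placeholder scan template UUID
            "settings": {
                "name": name,
                "policy_id": policy_id,
                "text_targets": TARGETS,
            },
        },
        verify=False,  # lab scanners commonly use self-signed certificates
    )
    resp.raise_for_status()
    scan_id = resp.json()["scan"]["id"]  # response shape assumed; check your API docs
    requests.post(
        f"{NESSUS_URL}/scans/{scan_id}/launch", headers=HEADERS, verify=False
    ).raise_for_status()
    return scan_id

if __name__ == "__main__":
    for scan_name, policy in POLICIES.items():
        print(f"Launched {scan_name!r} as scan id {create_and_launch(scan_name, policy)}")
```

The fast policy gives you a quick weekly snapshot; the thorough policy fills in the gaps on a slower cadence.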

Conclusion

Given the three major factors for vulnerability scanning, the burning question is: "What do I do to be the most effective?" The answer is, of course, "It depends". Each environment is unique, and here are some things to think about that will guide you along the way:

  • Consider running multiple types of scans with some configured for speed and some configured for intrusiveness/comprehensiveness
  • Use passive vulnerability monitoring in conjunction with active vulnerability scanning and correlate the results (a simple correlation sketch follows this list)
  • Run different scans with different policies against specific technologies such as web applications, embedded systems, etc.
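As a starting point for the second item, here is a minimal sketch of correlating active and passive results. It assumes both tools can export CSV files with at least "host" and "cve" columns; the filenames and column names are assumptions for illustration, not an actual export format:

```python
# A sketch of correlating active-scan and passive-monitor findings by
# (host, CVE) pair. Filenames and column names below are assumptions.
import csv

def load_findings(path: str) -> set[tuple[str, str]]:
    """Return the set of (host, CVE) pairs found in a CSV export."""
    with open(path, newline="") as f:
        return {
            (row["host"], row["cve"])
            for row in csv.DictReader(f)
            if row.get("cve")  # skip informational rows with no CVE
        }

active = load_findings("active_scan.csv")        # placeholder filename
passive = load_findings("passive_monitor.csv")   # placeholder filename

confirmed = active & passive     # seen by both sources: highest confidence
active_only = active - passive   # possibly dormant services or passive blind spots
passive_only = passive - active  # gaps the active scan missed (e.g., hosts offline during the scan)

print(f"Confirmed by both sources: {len(confirmed)}")
print(f"Active-only findings     : {len(active_only)}")
print(f"Passive-only findings    : {len(passive_only)}")
```

Findings confirmed by both sources are good candidates for immediate remediation, while the passive-only set shows you where your active scans are falling short on comprehensiveness.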

With these factors in mind you will be well on your way to creating an effective vulnerability management strategy.
