29 May 2013

Hannover/Tripwire metrics part 1

I mentioned the Hannover Research/Tripwire CISO Pulse/Insight Survey recently on the blog.  Now it's time to take a closer look at the 11 security metrics noted in section 5 of the report.  


The report doesn't explain the origin of these 11 metrics.  How and why were they singled out for the study, from the vast population of possible security metrics?  To be precise, it doesn't actually say whether survey respondents were presented with this specific choice of 11 metrics, nor how many were on the list they saw, leaving us guessing about the survey methods.

Furthermore, the report neglects to explain what the succinctly-named metrics really mean.  If survey respondents were given the same limited information, I guess they each made their own interpretations of the metrics and/or picked the ones that looked vaguely similar to metrics they liked or disliked.  

Anyway, for the purposes of this blog, I'll make an educated guess at what the metrics mean and apply the PRAGMATIC method against each one in turn to gain further insight. 

Metric 1: "Vulnerability scan coverage"

Repeatedly scanning IT systems and networks with automated tools is a common way for large organizations to identify known technical vulnerabilities - old or unpatched software, for example, or unexpectedly active network ports.  The metric refers to 'coverage', which I take to mean the proportion of the organization's IT systems and/or network segments that are being regularly scanned for known technical security vulnerabilities.  
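To make that definition concrete, here's a minimal sketch of how the coverage figure might be calculated, assuming you can export a list of scanned assets from the scanner console and maintain an asset inventory separately.  The function and asset names are purely illustrative - nothing in the survey report specifies how the calculation should be done.

```python
# Illustrative sketch: vulnerability scan coverage as a percentage.
# Assumes two inputs most organizations could produce in some form:
#   - an inventory of every in-scope IT system/network segment
#   - an export from the scanner console of what was actually scanned

def scan_coverage(inventory: set, scanned: set) -> float:
    """Proportion of in-scope assets that have been scanned, as a percentage."""
    if not inventory:
        return 0.0
    return 100.0 * len(inventory & scanned) / len(inventory)

inventory = {f"host-{n}" for n in range(200)}   # 200 systems in scope (made-up figure)
scanned = {f"host-{n}" for n in range(130)}     # 130 of them actually scanned
print(f"Coverage: {scan_coverage(inventory, scanned):.0f}%")   # -> Coverage: 65%
```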

Why would this be the most popular of the 11 metrics in the survey report, apparently used by up to two-thirds of the respondents?  Being naturally cynical, I'd say the fact that the survey was sponsored by Tripwire, a well-known supplier of vulnerability scanners, is a massive clue!

Anyway, let's lift the covers off the metric using the PRAGMATIC approach:
  • Predictiveness: an organization that scores low on this metric is probably unaware of technical vulnerabilities that it really ought to know about, betraying an immature approach to information security, whereas one firmly on top of its technical security vulnerabilities demonstrates a more mature approach ... to that one aspect of IT security anyway.  However, scan coverage per se doesn't tell us much about system/network security - it merely tells us what proportion of our IT systems/networks are being scanned.  The scans themselves might reveal absolutely terrible news, an enormous mountain of serious vulnerabilities that need to be addressed, while the coverage metric looks fabulous, or indeed the converse ("We only scan a small proportion of our systems/networks because the scans invariably come up clean!").  At best, this metric gives an indication of the organization's information security management capabilities, and a vague pointer towards its probable status.
  • Relevance: the metric's relevance to information security is limited in the sense that known technical system/network security issues are only one type of information security vulnerability.  Patching systems and securing network configurations is a valuable security control, but there are many others.  This metric, like most technical or IT security measures, is fairly narrow in scope.
  • Actionability: on this criterion, the metric scores quite well.  If scan coverage is too low (whatever that means), the obvious response is to increase the coverage by scanning a greater proportion of the types of systems/networks already being scanned, and/or by expanding the range of types of systems/networks being scanned.  There will be diminishing returns and, at some point, little if anything to be gained by expanding the coverage any further, but the metric should at least encourage the organization to reach that point.
  • Genuineness: if someone (such as the CIO or CISO) wanted to manipulate the metric for some ulterior purpose (such as to earn an annual bonus or grab a bigger security budget), how could they do so?  Since the metric is presumably reported as a proportion or percentage, one possibility for mischief would be to manipulate the apparent size of the total population of IT systems/networks being scanned, for instance by consciously excluding or including certain categories.  "We don't scan the systems in storage because they are not operational" might seem fair enough, but what about "Development or test systems don't count because they are not in production"?  It's a slippery slope unless some authority figure steps in, ideally by considering and formally defining factors like this when the metric is designed, assuming there is such a process in place.
  • Meaningfulness: aside from the issues I have just raised, the metric is reasonably self-evident and scores well on this point, provided the audience has some appreciation of what vulnerability scanning is about - which is likely if this is an operational security metric, intended for IT security professionals.  Otherwise, it could be explained easily enough to make sense of the numbers at least.  It's quite straightforward as metrics go.
  • Accuracy: a centralized vulnerability scanning management system can probably be trusted to count the number of systems/networks it is scanning, although that is not the whole story.  It probably cannot determine the total population of systems/networks that ought to be scanned, a figure that is essential to calculate the coverage proportion.  Furthermore, we casually mentioned earlier that vulnerability scans should be repeated regularly in order to stay on top of changes.  'Regularly' is another one of those parameters that ought to be formally defined, both as a policy matter and in connection with the metric (one way of making the scope and recency rules explicit is sketched after this list).  At one ridiculous extreme, scanning a given IT system just once might conceivably be sufficient for it to qualify as "scanned" forevermore.  At the opposite extreme, mothballed IT systems might have to be dragged out of storage every month, week, day or whatever and turned on purely in order to scan them, pointlessly.
  • Timeliness: automated scan counts, calculations and presentation should be almost instantaneous.  Figuring out the total number of systems/networks may involve manual effort and would take a bit longer, but this is probably not a time-consuming burden.  With regard to the risk management process, the metric relates to vulnerabilities rather than incidents, hence the information is available in good time for the organization to respond and hopefully avert incidents that exploit known technical vulnerabilities.
  • Independence and integrity: technical metrics are most likely to be measured, calculated and reported by technical people who often have a stake in them.  In this case, an independent assessor (such as an IT auditor) could confirm the scan counts easily enough by querying the scanner management console directly, and with more effort they could have a robust discussion with whoever calculated the 'total number of systems/networks' figure.  Someone might conceivably have meddled with the console to manipulate the scan counts, but we're heading into the realm of paranoia there.  It seems unlikely to be a serious issue in practice.  The fact that the figures could be independently verified is itself a deterrent to fraud.
  • Cost-effectiveness: the number of systems/networks that are being vulnerability scanned would most likely be available on the management console as a built-in report from the program.  Determining the total number of systems/networks that could or should be scanned would require some manual effort: although the management console may be able to generate an estimate from the active IP addresses that it discovers, offline systems (such as portables) and isolated network segments (such as the DMZ) would presumably be invisible to the console.  In short, the metric can be collected without much expense but what about the other part of the equation, the benefits?  Concerns about its predictiveness and relevance don't bode well.  There's no escaping the fact that vulnerability scanning is a very narrow slice of information security risk management.
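Pulling together the points raised under Genuineness and Accuracy, here is a hypothetical sketch of how the scope (what counts in the denominator) and the recency rule (what counts as 'regularly scanned') might be pinned down when the metric is designed.  The categories, field names and 30-day window are assumptions for illustration only, not anything specified in the survey report.

```python
from datetime import datetime, timedelta

# Hypothetical asset records: the point is that each asset's category and the date of its
# last successful scan are recorded explicitly, so the denominator and the 'regularly
# scanned' rule are fixed by design rather than left open to creative interpretation.
assets = [
    {"name": "web-01",  "category": "production", "last_scanned": datetime(2013, 5, 20)},
    {"name": "test-07", "category": "test",       "last_scanned": datetime(2013, 2, 1)},
    {"name": "old-03",  "category": "mothballed", "last_scanned": None},
]

IN_SCOPE = {"production", "test"}   # formally agreed scope: mothballed systems excluded
SCAN_WINDOW = timedelta(days=30)    # 'regularly' defined as scanned within the last 30 days

def coverage(assets, as_of):
    in_scope = [a for a in assets if a["category"] in IN_SCOPE]
    if not in_scope:
        return 0.0
    current = [a for a in in_scope
               if a["last_scanned"] and as_of - a["last_scanned"] <= SCAN_WINDOW]
    return 100.0 * len(current) / len(in_scope)

print(f"Coverage: {coverage(assets, datetime(2013, 5, 29)):.0f}%")   # -> Coverage: 50%
```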
On that basis, and making some contextual assumptions about the kind of organization that might be considering the vulnerability scanning metric, I calculate the PRAGMATIC score for this metric at about 64% - hardly a resounding hit, but it has some merit.
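For anyone unfamiliar with the PRAGMATIC method, the overall score is, broadly speaking, an average of the percentage ratings awarded against each of the nine criteria.  The figures below are purely illustrative numbers that happen to average 64% - they are not the actual ratings behind the score quoted above.

```python
# Purely illustrative criterion ratings (percentages), chosen to average roughly 64%.
ratings = {
    "Predictiveness": 60, "Relevance": 55, "Actionability": 80,
    "Genuineness": 60, "Meaningfulness": 75, "Accuracy": 55,
    "Timeliness": 70, "Independence": 65, "Cost-effectiveness": 56,
}
score = sum(ratings.values()) / len(ratings)
print(f"PRAGMATIC score: {score:.0f}%")   # -> PRAGMATIC score: 64%
```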

This narrow-scope operational metric would of course be perfect if the organization just happened to need to measure vulnerability scanning coverage, for instance if the auditors had raised concerns about this particular issue.  It doesn't hold much promise as a general-purpose organization-wide information security management or strategic metric, however. 

So, that's our take on the first of the 11 metrics.  More to follow: if you missed it, see the introduction and parts two, three, four and five of this series.
