26 December 2012

SMotW #37: unaccounted software licenses

Security Metric of the Week #37: proportion of software licenses purchased but not accounted for in the repository

We are not entirely sure of the origin or purpose of this metric, but it's typical of those that pop out of the woodwork every so often for no obvious reason, sometimes taking on a curious aura of respectability depending on who raised or proposed them.  

Unfortunately, as it stands, we lack any context or explanation for the metric.  We don't have access to whoever proposed it, we can't find their reasoning or justification, and hence we find it hard to fathom the thinking that led them to propose it.

Perhaps someone had been checking, validating or auditing software licenses and used something along these lines as a measure in their report.  Maybe it was suggested by a colleague at an information security meeting or online forum, or proposed by a naive but well-meaning manager in such a way that it simply had to be considered.  Who knows - perhaps it came up in idle conversation, mystically appeared out of the mist in a dream, turned up as a worked example in a security metrics book, or featured in some metrics catalog or database.  

It may well have been someone's pet metric: something they invented, discovered or borrowed one day for a specific purpose, found useful in that context, and then presumed that its success there makes it a brilliant security metric for everyone else, in other, unspecified contexts.*  

To be frank, we are not terribly bothered about where it came from or why it appeared on our shortlist.  We do care about its utility and value as a security metric for ACME Enterprises Inc, relative to the plethora of others under consideration.

Maybe for some it really is a wonderful metric ... but evidently not for ACME.  The PRAGMATIC score says it all:

P     R     A     G     M     A     T     I     C     Score
1     1     90    84    1     70    50    81    30    45%

It scores abysmally on Relevance (to ACME's information security), on its ability to Predict or be used to direct ACME's information security status, and on its Meaning to ACME's information security people and managers.  On the other hand, it is highly Actionable, in the sense that a low value self-evidently implies the need to account for more of the purchased software licenses.  It's also pretty Genuine: it would be hard to falsify unless someone had the motivation, skill and time to fabricate a stack of 'evidence' from which the numbers could be reconstructed, and ACME's people have better things to do.
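
As a quick sanity-check on the arithmetic, here's a minimal Python sketch assuming (as the figures above suggest) that the overall Score is simply the unweighted mean of the nine criterion ratings:

```python
# Minimal sketch: reproducing ACME's overall PRAGMATIC score, assuming the
# Score column is the simple, unweighted mean of the nine criterion ratings.
ratings = {
    "Predictability":     1,
    "Relevance":          1,
    "Actionability":      90,
    "Genuineness":        84,
    "Meaningfulness":     1,
    "Accuracy":           70,
    "Timeliness":         50,
    "Independence":       81,
    "Cost-effectiveness": 30,
}

score = sum(ratings.values()) / len(ratings)
print(f"Overall PRAGMATIC score: {score:.0f}%")  # -> 45%
```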

OK, so it's not ideal for information security, but maybe it would have more value to, say, Finance or IT?  Perhaps they too could be persuaded to PRAGMATIC-rate the metric and compare it to those they are using or considering ... no promises, mind you.

Anyway, its poor score clearly takes it out of contention as an information security metric for ACME, and right now we have a date with a mince pie and a small glass of vintage port ...

Merry Christmas, readers.

* Note that we are not immune from this kind of generalization, nor from a bias towards the metrics that we personally find valuable.  The metrics in the book, including the 'security metrics of the week' on this blog, come from a variety of sources.  Some are metrics that we have used in anger ourselves, including a few of our own pet metrics, of course.  Some have been suggested, recommended even, by various other security metrics authors.  Some made an appearance in security surveys, management reports, blogs, discussion groups and standards such as ISO/IEC 27004.  Some we invented on-the-fly while writing the book, deliberately trying to illustrate and demonstrate the power of the PRAGMATIC approach in helping to differentiate the good from the bad and the ugly.  

Please remember, above all else, that whatever we or others may say or imply, we are NOT telling you what security metrics to use in your situation.  We are not clairvoyants.  We have ABSOLUTELY NO IDEA what your specific security information needs might be, except in the most general hand-waving sense of being infosec greybeards ourselves.  Much as we would love to just give you "the best security metrics" or a set of "recommended" or "valuable" or "worthwhile" metrics, we honestly can't do that.

What we are offering is a straightforward method for you to find your own security metrics.

In the unlikely event that you are short of inspiration, the book includes a stack of advice on where to find candidate security metrics - places to go looking - and hints on how to invent new ones, either from scratch or by modifying, customizing or adapting existing or proposed metrics.  The PRAGMATIC method is a great way to sift through a giant haystack of candidate security metrics to find the very needles you've been hunting for.

20 December 2012

SMotW #36: business continuity spend

Security Metric of the Week #36: business continuity expenditure

At first glance, this looks like a must-have metric: surely expenditure on business continuity is information that management can't possibly do without?  As far as ACME Enterprises is concerned, the metric earns a fairly high PRAGMATIC score of 71%, making it a strong candidate for inclusion in ACME's information security measurement system.

It has its drawbacks, however.  Determining BC expenditure accurately would be a serious challenge, but thankfully great precision is probably unnecessary in this context: estimates and assumptions may suffice.  Still, it would be handy if the accounting systems could be persuaded to regurgitate a sufficiently credible and reliable number on demand.  Furthermore, it is not entirely obvious what management is expected to do as a result of the metric, at least not unless the business benefits of business continuity are also reported.  The net value of business continuity, then, could be an even better metric, as the sketch below illustrates.
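
For illustration only, here's a hypothetical sketch of such a net-value calculation.  The scenario names and all figures are invented, and benefits are assumed to be estimable as annualized losses avoided:

```python
# Hypothetical sketch: net value of business continuity, assuming benefits
# can be estimated as annualized losses avoided across a few disruption
# scenarios.  All names and figures are invented for the example.
scenarios = {
    # scenario: (annualized loss without BC, annualized loss with BC)
    "data centre outage":   (500_000, 150_000),
    "key supplier failure": (200_000, 120_000),
    "pandemic absence":     (300_000, 180_000),
}

bc_expenditure = 250_000  # annual BC spend, however crudely estimated

losses_avoided = sum(without - with_bc for without, with_bc in scenarios.values())
net_value = losses_avoided - bc_expenditure
print(f"Losses avoided: ${losses_avoided:,}")  # -> $550,000
print(f"BC net value:   ${net_value:,}")       # -> $300,000
```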

04 December 2012

SMotW #35: compliance maturity

Security Metric of the Week #35: information security compliance management maturity

Compliance with information security-related laws and regulations is undoubtedly of concern to management, since non-compliance can lead to substantial penalties both for the organization and, in some cases, for its officers personally.  Legal and regulatory compliance is generally asserted by the organization, but confirmed (and in a sense measured) by independent reviews, inspections and audits.  

But important though they are, laws and regulations are just part of the compliance landscape.  Employees are also expected to comply with obligations imposed by management (in formal policies mostly) and by other third parties (in contracts mostly).  Compliance in these areas is also confirmed/measured by various reviews, inspections and audits.

In order to measure the organization's compliance practices, then, we probably ought to take all these aspects into account. 

P     R     A     G     M     A     T     I     C     Score
90    95    70    80    90    85    90    85    90    86%

This week's security metric is another maturity measure.  Maturity metrics (as we have described before) are very flexible and extensible, so it's no problem to take account of all the issues above, and more besides.

We have been quite harsh on the Actionability rating for this metric, giving it "just" 70%, in anticipation of the practical issues that would crop up if ACME's management deemed it necessary to improve the organization's security compliance.  On the other hand, breaking down and analyzing security compliance in some detail makes this an information-rich metric.  Aside from the overall maturity score, management would be able to see quite easily where the biggest improvement opportunities lie, as the sketch below suggests.
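
To illustrate why such a breakdown is information-rich, here's a purely hypothetical sketch assuming maturity is scored per compliance domain, following the breakdown above (laws and regulations, internal policies, contracts).  The scores and the simple unweighted roll-up are invented for the example:

```python
# Hypothetical sketch: rolling up a compliance management maturity metric
# from per-domain maturity scores (0-100).  The domains follow the breakdown
# in the text; the scores and the unweighted mean are invented.
domains = {
    "laws & regulations": 75,
    "internal policies":  55,
    "contracts":          40,
}

overall = sum(domains.values()) / len(domains)
print(f"Overall compliance maturity: {overall:.0f}%")

print("Improvement opportunities, biggest first:")
for domain, score in sorted(domains.items(), key=lambda kv: kv[1]):
    print(f"  {domain}: {score}")
```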

PRAGMATIC security metrics for competitive advantage

Blogging recently about Newton's three laws of motion, we mentioned that organizations using PRAGMATIC metrics have competitive advantages over those that don't.  Today, we'll expand further on that notion.

Writing in IT Audit back in 2003, Will Ozier discussed disparities in the way information security and other risks are measured and assessed.  Not much seems to have changed in the nine years since.  Ozier suggested a "central repository of threat-experience (actuarial) data on which to base information-security risk analysis and assessment".  Today, privacy breaches are being collated and reported fairly systematically, thanks largely to the privacy breach disclosure laws, but those are (probably) a tiny proportion of all information security incidents: in my experience, incidents such as information loss, data corruption, IP theft and fraud are far more prevalent and can be extremely damaging.  Since these are not necessarily reportable incidents, most never become public knowledge, hence we lack reliable base data from which to calculate the associated risks with any certainty. 

"In my experience" is patently not a scientific basis however.  I doubt that adding "Trust me" would help much either.

Talking of non-scientific, there is no shortage of surveys, blogs and other sources of anecdotal information about security incidents.  However, the statistics are of limited value for making decisions about information security risks.  The key issue is bias: entire classes of information security incident may not even be recognized as such.  Take human errors, for instance.  Errors that lead to privacy breaches may be reported, but for all sorts of reasons there is a tendency not to blame anyone, hence the cause is often unstated or ascribed to something else.  Most such incidents probably remain undetected, although some errors are noticed and quietly corrected.

However, while we lack publicly-available data about most information security incidents, organizations potentially have access to a wealth of internal information, provided that information security incidents are reported routinely to the Help Desk or wherever.  Information security reviews, audits and surveys within the organization can provide yet more data, especially on relatively serious incidents, and especially in large, mature organizations.

OK, so where is this rambling assessment leading us in relation to information security metrics?  Well, in case you missed it, that "wealth of internal information" was of course a reference to security metrics.

And what have security metrics, PRAGMATIC security metrics specifically, got to do with competitive advantage?  Let me explain.

Aside from selecting or designing information security metrics carefully from the outset, management should review the organization's metrics from time to time to confirm and, where necessary, improve, supplement or retire them.  This should ideally be a systematic process, using metametrics (information about metrics) to compare each metric's value rationally against the information requirements it is meant to satisfy.  Fair enough, but why should they use PRAGMATIC metametrics?  Won't SMART metrics do?

The Accuracy, Independence and Genuineness of measurements are important concerns, especially if there might be systematic biases in the way the base data are collected or analyzed, or even deliberate manipulation by someone with a hidden agenda and an ax to grind.  This hints at the possibility of analyzing the base data or measurement values for patterns that might indicate bias or manipulation (Benford's law springs immediately to mind), as well as for genuine relationships that may have Predictive value.  It also hints at the need to check the quality and reliability of individual data sources: the variance or standard deviation of a source's measurements, for instance, is a guide to its variability and, perhaps, its integrity or trustworthiness.  Do you routinely review and reassess your security metrics?  Do you actually go through the process of determining which ones worked well, and which didn't?  Which ones were trustworthy guides to reality, and which ones lied?  Do you think through whether there are issues with the way the measurement data are gathered, analyzed, presented, interpreted and used - or do you simply discard hapless metrics that haven't earned their keep, without truly understanding why?
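
By way of illustration (this is not from the book, and the figures are made up), here's a minimal sketch of a Benford's-law first-digit check using a chi-squared statistic:

```python
import math
from collections import Counter

def benford_check(values):
    """Compare first-digit frequencies of `values` against Benford's law
    using a chi-squared statistic.  A large statistic hints at bias or
    manipulation; it is a prompt for investigation, not proof."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford's expected proportion
        observed = counts.get(d, 0)
        chi2 += (observed - expected) ** 2 / expected
    return chi2  # compare against the chi-squared critical value, 8 d.f.

# Invented invoice-like figures; 15.5 is roughly the 5% critical value, 8 d.f.
figures = [1243, 1871, 2950, 1102, 4620, 1333, 8125, 1999, 2404, 3141]
print(f"chi-squared = {benford_check(figures):.1f}  (5% critical value ~ 15.5)")
```

With only a handful of values the test has little statistical power, of course: treat a large statistic as a prompt to dig deeper, not as proof of fiddling.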

Relevance and Timeliness are both vital considerations for all metrics when you think about it.  How many security situations have been missed because some droplet of useful information was submerged in a tsunami of junk?  How many times have things been neglected because the information arrived too late to make the necessary decisions?  To put that another way, how much more efficiently could you direct and control information security if you had a handle on the organization's real security risks and opportunities, right now?  

In respect of competitive advantage, Cost-effectiveness pretty much speaks for itself.  It's all very well 'investing' in a metrics dashboard gizmo with all manner of fancy dials and glittery indicators, but have you truly thought through the full costs, not just of generating the displays but of using them?  Are the measurements merely nice to know, in a coffee-table National Geographic kind of way, or would you be stuffed without them?  What about the opportunity cost of being unable to use, or simply discounting, other perfectly valid and useful metrics that, for some reason, don't look particularly sexy in the dashboard format?  Notice that we're not railing against expensive dashboards per se, provided they more than compensate for their costs in terms of the value they generate for the organization - more so than other metrics options might have achieved.  Spreadsheets, rulers and pencils have a lot going for them, particularly if they help focus attention on the information content rather than its form.

In contrast to the others, Meaningfulness is a fairly subtle metametric.  We interpret it specifically as a measure of the extent to which a given information security metric 'just makes sense' to its intended audience.  Is the metric self-evident, smack-the-forehead blindingly obvious even, or does it need to be painstakingly described, at length, by a bearded bloke in a white lab coat with frizzy hair, attention-deficit-disorder and wild, staring eyes?  A metric's inherent Meaningfulness is a key factor in its perceived value, relevance and importance to the recipient, which in turn affects the influence that the numbers truly have over what happens next.  A Meaningful metric is more likely to be believed, trusted and hence actually used as a basis for decisions than one which is essentially meaningless.  Let the competitors struggle valiantly on with their voluminous management reports, tedious analysis and, frankly, dull appendices stuffed with numbers that nobody values.  We'll settle for the Security Metrics That Truly Matter, thanks.

The Timeliness criterion is also quite subtle.  In the book we explain how the concept of feedback and hysteresis applies to all forms of control, although we have not seen it described before in this context.  A typical manifestation of hysteresis involves temperature controls using relatively crude electromechanical or electronic sensors and actuators.  As the temperature reaches a set-point, the sensor triggers an actuator such as a valve or heating element to change state (opening, closing, heating or cooling as appropriate).  The temperature then gradually changes until it reaches another set-point, whereupon the sensor triggers the actuator to revert to its original state.  The temperature therefore cycles constantly between the two set-points, which can be markedly different in badly designed or implemented control systems.  Hysteresis loops apply to information security management as well as to temperature regulation: keeping a firewall's settings between "too secure" and "too insecure", for instance, is far easier if the metrics on firewall traffic and security exceptions are available and used in near-real-time, rather than via, say, a monthly firewall report, especially if the report takes a week or three to compile and present!  The point is that network security incidents may exploit that gap or delay, so Timeliness can have genuine security and business consequences.
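
To make the hysteresis idea concrete, here's a minimal sketch of a two-set-point ('bang-bang') controller like the thermostat described above; the set-points and readings are invented:

```python
# Minimal sketch of hysteresis (bang-bang) control, as in the thermostat
# example above.  The actuator only changes state at the set-points, so the
# controlled value cycles within the band between them; anything going wrong
# inside that band is invisible until the next set-point is reached.
LOW_SET_POINT = 18.0   # heater switches ON at or below this (invented)
HIGH_SET_POINT = 22.0  # heater switches OFF at or above this (invented)

def step(temperature: float, heating: bool) -> bool:
    """Return the heater's next state given the current reading."""
    if temperature <= LOW_SET_POINT:
        return True        # too cold: start heating
    if temperature >= HIGH_SET_POINT:
        return False       # too warm: stop heating
    return heating         # inside the band: no change (the hysteresis)

heating = False
for reading in [21.0, 19.5, 17.8, 19.0, 21.5, 22.3, 20.0]:
    heating = step(reading, heating)
    print(f"{reading:5.1f} C -> heater {'ON' if heating else 'OFF'}")
```

The wider the band between the set-points, or the staler the readings, the longer the system can sit in an undesirable state unnoticed - which is exactly the Timeliness point in security terms.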

Finally for today, spurious precision is a factor relating to several of the PRAGMATIC criteria (particularly Accuracy, Predictability, Relevance, Meaningfulness, Genuineness and Cost-effectiveness).  We're talking about situations where the precision of reporting exceeds the precision of measurement and/or the precision needed to make decisions.  Have your competitors even considered this when designing their security metrics?  Do they obsess over marginal and irrelevant differences between numbers derived from inherently noisy measurement processes, or appreciate that "good enough for government work" can indeed be good enough, much less distracting and eminently sensible under many real-world circumstances?  A firm grasp of statistics helps here, but not everyone needs to be a mathematics guru, so long as someone who knows their medians from their chi-squareds can be trusted to spot when assumptions, especially implicit ones, no longer hold true.  
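
As a tiny worked example of the point (the readings are invented), compare a spuriously precise report of a noisy metric against one rounded to the precision the data actually support:

```python
# Tiny sketch of spurious precision: repeated measurements of the same
# security metric (invented values).  Reporting the mean to four decimal
# places implies precision that the noisy process simply does not have.
from statistics import mean, stdev

readings = [42.0, 47.0, 39.0, 51.0, 44.0]  # e.g. weekly phishing-click rates, %
m, s = mean(readings), stdev(readings)
sem = s / len(readings) ** 0.5             # standard error of the mean

print(f"Spuriously precise: {m:.4f}%")            # -> 44.6000% (false certainty)
print(f"Honest:             {m:.0f} +/- {sem:.0f}%")  # -> 45 +/- 2%
```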

We'll leave you with a parting thought.  Picture yourself presenting and discussing a set of PRAGMATIC security metrics to, say, your executive directors.  Imagine the confidence you will gain from knowing that the metrics you are discussing have been carefully selected and honed for that audience because they are Predictive, Relevant, Actionable ... and all that.  Imagine the feeling of freedom to concentrate on the knowledge and meaning, and thus the business decisions about security, rather than on the numbers themselves.   Does that not give you a clear advantage over your unfortunate colleagues at a competitor across town, struggling to explain let alone derive any meaning from some near-random assortment of pretty graphs and tables, glossing over the gaps and inconsistencies as if they don't matter?