31 January 2013

SMotW #42: Consequences of noncompliance

Security Metric of the Week #42: Historic consequences of noncompliance

The title or name of this metric is not exactly crystal clear.  We presume it involves totting-up the fines and penalties resulting from noncompliance with (information security relevant) legal, regulatory and contractual obligations.


Knowing how much security noncompliance is costing the organization might conceivably help management determine whether sufficient resources are being applied to compliance activities ... but the metric has a few issues.  

It is historical, for starters, and we're not certain that projected trends would be meaningful in this context.  Noncompliance incidents tend to occur sporadically or in clusters (since investigating one incident may dig up others) rather than regularly and predictably.  

The time taken to identify, calculate and sum compliance costs is another concern.

Fines and penalties are not the only costs of noncompliance.  Bad publicity resulting from enforcement notices can cause brand damage, but how much is anyone's guess.
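For the quantifiable part, at least, the totting-up is simple arithmetic.  Here is a minimal sketch (the figures and categories are entirely hypothetical) of how fines and penalties might be summed per year:

    from collections import defaultdict

    # Hypothetical noncompliance consequences: (year, category, cost in $)
    incidents = [
        (2011, "regulatory fine",     250_000),
        (2011, "legal fees",           80_000),
        (2012, "contractual penalty", 120_000),
        (2012, "regulatory fine",      40_000),
    ]

    totals = defaultdict(int)
    for year, _category, cost in incidents:
        totals[year] += cost

    for year in sorted(totals):
        print(f"{year}: ${totals[year]:,} in noncompliance costs")

The intangible costs, brand damage included, necessarily stay outside any such tally.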

Running the metric through the PRAGMATIC process resulted in a score of 68%:

P     R     A     G     M     A     T     I     C     Score
70    80    72    82    80    80    20    67    65    68%

As compliance metrics go, this is not one of the best.





30 January 2013

PRAGMATIC Security Metrics sample chapter

The publisher has released chapter 2 as a sample of the book.   Chapter 2 concerns the reasons or purposes for using security metrics.

Over on CISSPforum, we've just been chatting about the validity of security certifications such as SSAE16 (formerly SAS70), PCI-DSS and ISO/IEC 27001.  These schemes use formal compliance audits to determine whether organizations deserve to be certified.  Formal compliance audits are based around highly specific and explicit requirements, leaving the auditor and auditee little leeway to "interpret" the obligations.  It is implied that such "interpretation" would be a bad thing, although there is a strong argument to the contrary.

Unfortunately for this approach, information security refuses to be put into a neatly labelled box.  It simply is not possible to specify security control requirements in sufficient detail for the classic compliance audits, in a generic or universal one-size-fits-all specification.  There are far too many variables.  A control that might be entirely suitable for one organization might be over the top for another, hopelessly inadequate for a third, and totally inappropriate for a fourth.  Yes, context matters.  Nobody has succeeded, thus far, in writing a formal security controls specification that takes full account of all the different circumstances in which it needs to be applied, at least not without resorting to "recommendations" and "implementation guidance" such as in ISO/IEC 27002.  Aside from anything else, a comprehensive requirements specification would be so complex and wide-ranging as to be virtually unmaintainable, which is exactly the situation with '27002 right now.

Some industry sectors have done their best to create generic requirements specifications tailored to organizations in those industries: SSAE16 and ISO/IEC TR 27015 in financial services, for instance, and PCI-DSS in the payment card industry.  It is generally acknowledged that these are not very effective security standards, since the controls that are commonly applicable enough to be formally defined as requirements are merely a subset of the controls each organization actually needs in order to be considered secure.  They are the lowest common denominators of security.  Adopting the required controls falls well short of actually being secure.

Worse still, compliance certification has become a game, a costly* diversion of scarce resources from the vital business of addressing information security risks.  The evolution of PCI-DSS demonstrates this, with successive releases being more and more carefully word-crafted to make it harder for organizations to wheedle out of their security compliance obligations "on a technicality".  That PCI-DSS allowed organizations to continue using WEP and WPA long after these were known to be pathetic controls illustrates the problem.

Let's look at this from a different perspective.  SSAE16, PCI-DSS and ISO27k are being used to assess organizations' security by auditing their compliance with generic security specifications.  They are security metrics.  Therefore, they can be evaluated and improved using the PRAGMATIC method.  What's more, there are many other potential metrics, some of which might prove even more valuable for the purposes of external stakeholders - primarily for assurance reasons but also for accountability etc. (see section 2.4 in chapter 2!).  I feel it's time to break away from the myopic formal-specification-compliance-audit-and-certification mindset: decent security metrics for external as well as internal stakeholders in the organization's information security status could be the key that unlocks much more creative and ultimately more effective approaches.

What do you think?

* Costly for organizations being certified, costly for their customers, employees and other parties who lose out due to inadequate information security, but highly profitable for the small army of certification bodies, certification auditors and training companies gathered around the trough.

23 January 2013

SMotW #41: insurance cover vs premiums

Security Metric of the Week #41: insurance coverage versus annual premiums



An organization's insurance arrangements potentially reveal quite a bit about management's stance on risk.  Residual risks that management feel they cannot mitigate further, cannot avoid and will not accept, are typically transferred to third parties through various business arrangements, including insurance.  The insurance premiums management is willing to pay give an indication of the value they place on those risks, as well as the insurers' assessment of the same risks from their perspectives.

Insurance coverage versus annual premiums is someone's attempt to specify an insurance-based metric.  It sounds simple enough in theory: determine the insurance cover provided, divide it by the annual insurance premium, and plot it year-by-year to show ... errrr, what exactly?  At face value, it's just a measure of the value obtained from the insurers (e.g. $10m of cover for a premium of $10k p.a. is 'obviously' better value than $10m of cover for $20k p.a.), but digging a bit deeper, there are several unknown variables if we only look at the headline figure.  

What is meant by "$10m of cover", in fact?  Assuming it is the maximum possible payout, it raises questions about how actual payouts are determined, meaning we ought to look into the precise terms and conditions of cover.

"A premium of $10k" could also be misleading as the premium is often discounted in return for the customer accepting an 'excess', effectively a self-insured element.

The customer and insurer will normally have negotiated the rate, in the process exchanging vital information about the nature of the risks being insured and the nature of the insurance cover being offered.  As currently stated, the candidate metric doesn't even specify that we are talking about information security risks, let alone clarify which information security risks are being insured against.

Finally, on a practical note, it is seldom as easy as you might think to determine the total for a given type of expense, especially in large/diverse organizations.  You may be lucky: there may be a specific accounting code for insurance premiums, and it may have been used correctly, but the chances of finding a reliable code for 'information security insurance' are slim.

So, let's take a look at Acme's PRAGMATIC scoring calculation:

P     R     A     G     M     A     T     I     C     Score
64    46     5    25    20    16    10    82    94    40%


Evidently, they are not terribly impressed with this candidate metric.  Note its very low ratings for Actionability and Timeliness, and its very high rating for Cost-effectiveness.  An annual, historical metric may be quite cheap and easy to calculate but frankly (as currently worded) it is not very useful.

['As currently worded' suggests that Acme might consider re-framing or re-phrasing the metric, checking the PRAGMATIC scores of variants to find more promising, much sharper candidates.  Perhaps something can be done to improve those low ratings?  This is an entirely valid and sensible way to use the PRAGMATIC method, although to be honest we would rather start with a better candidate in the first place, carefully honing it until it glistens like a Samurai sword in the moonlight.  Any metric that initially scores below 50% is arguably too dull to justify the effort, unless there are no better alternatives on the rack.]

19 January 2013

SMotW #40: ROI

Security Metric of the Week #40: ROI (Return On Investment)

ROI is a commonplace management accounting measure, a means of assessing the net worth (benefits less costs) of an investment, common as muck you might say.  But is it a good security metric?

There are patently situations, in some organizations, in which ROI evidently works: it often turns up in business cases, proposals or budget requests to help justify corporate projects or initiatives.  And yet some prefer other financial metrics, such as IRR (Internal Rate of Return).

When Acme Enterprises Inc. puts ROI through the PRAGMATIC sausage-machine, it turns out to be a fairly mediocre security metric:

P     R     A     G     M     A     T     I     C     Score
65    72    25    25    88    50    44    60    90    58%


ROI is a Cost-effective metric (takes just an hour or two to calculate) that is presumably Meaningful to its intended audience i.e. management, but it doesn't score too well on the remaining criteria.  

It is not terribly Predictive of security outcomes.  'Throwing money at security' is no sure-fire way to become secure, while some penny-pinching organizations appear to survive with the bare minimum of security expenditure.  

The low Accuracy, Actionability and Genuineness ratings reflect concerns over its use as a decision-support tool.  ROI is normally used in isolation to justify individual projects that someone has already, in effect, chosen, rather than to compare a full suite of many possible investments, including various combinations and permutations (portfolio management) - meaning that a given security project may have a positive ROI, but various other security investments (not on the table) may have even better ROIs.  Even with the technical assistance of the Finance Department to get the arithmetic right, ROI analyses for security investments necessarily require numerous assumptions, particularly in respect of the projected savings through mitigating security risks, reducing the probability and/or impact of security incidents.
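A minimal sketch of that portfolio point (project names and figures entirely hypothetical): the project actually tabled may show a healthy ROI while a rival investment, never tabled, would have been better still:

    def roi(benefit, cost):
        """Classic ROI: net gain (benefits less costs) as a proportion of the cost."""
        return (benefit - cost) / cost

    # Hypothetical security investments: (name, projected benefit $, cost $)
    candidates = [
        ("DLP rollout",         450_000, 300_000),
        ("Awareness programme", 180_000,  60_000),
        ("SIEM upgrade",        220_000, 200_000),
    ]

    for name, benefit, cost in sorted(candidates, key=lambda c: roi(c[1], c[2]), reverse=True):
        print(f"{name}: ROI {roi(benefit, cost):.0%}")

Had only the DLP rollout been tabled, its 50% ROI would have looked perfectly respectable - yet the awareness programme, at a fraction of the cost, would have returned four times as much per dollar.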

Timeliness suffers because the metric is often measured and reported only before a project or initiative commences, months or years before it completes.  If Acme Enterprises were using ValIT (now subsumed within COBIT 5), it would be tracking actuals against projected costs and benefits, continually updating and refining the business case and ROI calculations.  But for the purposes of the book, we assumed Acme was just like most other organizations, i.e. the ROI calculation and business case are a one-off attempt to assess the financials.

13 January 2013

Critically appraising security surveys


A thought-provoking paper by Alexis Guillot and Sue Kennedy of Edith Cowan University in Perth, Australia, examined typical information security surveys, concluding that, despite questions over their scientific validity, they are a useful source of management information.  

From a scientific perspective, the following criticisms are commonly leveled:
  • Survey design/method
  • Sample selection, including demographics and self-selection 
  • Sample size
  • Bias, particularly where sponsors have a vested/commercial interest in the topic
The paper's authors acknowledge these as valid concerns.  They also comment that surveys do not appear to account for the limited knowledge/expertise of the individual respondents, a subtler issue.  The scope of the surveys can be quite broad (e.g. covering IT security, physical security, risk management, incident management and financial management), yet do respondents (normally IT security professionals, I guess) take the trouble to seek out answers from their professional colleagues who are more familiar with each of these aspects, or do they simply make up answers on their behalf?  This hints at a further concern: the tendency for busy respondents to complete survey questions carelessly or dismissively, probably tending towards risk-averse responses given the mind-set typical of security professionals.


The authors acknowledge that survey-derived information - even though it is often biased in various ways and may be of dubious scientific value - may still prove useful for persuading management to support and invest in information security [in the absence of anything better].  I can almost hear Douglas Hubbard (author of How to Measure Anything) applauding from the back of the room.  In a sense, the end justifies the means.

If we were to apply the PRAGMATIC method to these surveys, such criticisms would be reflected in depressed ratings for Genuineness and Independence, and perhaps also Accuracy.  The Timeliness of surveys is also of concern, since they are usually annual or bi-annual snapshots, and take some months to produce.  On the other hand, their Predictive value, Relevance and Meaningfulness would be quite high, along with Cost-effectiveness (given that many security survey reports are provided free of charge, at least to those who responded if not to the general public) and Actionability (they are evidently being used as awareness vehicles to prompt management into responding).

The paper did not discuss whether the criticisms can or ought to be addressed, and if so how.  Using PRAGMATIC, we see that improving the Accuracy, Genuineness and Independence by, for example, commissioning a 'proper' scientific study of information security by a professional survey team would depress the Cost-effectiveness and Timeliness ratings - in other words, the net result may not be a markedly different PRAGMATIC score.  That's not to say that improving survey methods is pointless, rather that there are clearly trade-offs to be made.

This gives us a very pragmatic bottom line: published security surveys are, on the whole, good enough to be worth using as security metrics.  While many of us take them at face value, they are even more valuable if you have the knowledge and interest to consider and ideally compensate for the underlying issues and biases, thinking about them in PRAGMATIC terms.  Whether you share your analysis with management (or, better still, undertake the analysis in conjunction with concerned managers) is a separate matter, but at least you will be well prepared to discuss their concerns if someone challenges the survey findings.  That's got to beat having the wind knocked out of your sails by a dismissive comment from an exec, surely?


POSTSCRIPT:  for a counter-view, check out this Microsoft academic research paper.  The authors examined the basis for wildly-inflated but widely circulated claims for the total value of cybercrime, for example.  Extreme extrapolation from relatively limited samples can result in one or two high-side outliers totally dominating and inflating the cost estimates.  Definitely food for thought. 

11 January 2013

"PRAGMATIC security metrics" available now




The long wait is over: PRAGMATIC Security Metrics has been published by Auerbach/CRC Press.

With the ink still drying, the wholesalers and distributors are busy stocking up the retailers as we speak.   

Those of you who had the foresight to pre-order it can expect your delivery soon.  If you haven't, you can order it online now from Amazon, Book Depository, CRC Press, Foyles, Booktopia, !ndigo, Bokus, Red Pepper, Powell’s, QBD and elsewhere.

If you would rather thumb through the book before parting with the readies, ask your local bookstore or library to stock it.

Alternatively, persuade management to get the book in for your corporate university, library or bookshelf ... or simply buy it yourself and reclaim the expenses, or put the charge down to experience and self-training!

As to the business case for buying the book, can you afford not to improve your security metrics?  If you struggle to justify the sixty-odd dollars it will take to revolutionize your understanding of the measurement and management of information security, metrics may be the least of your worries!

10 January 2013

PRAGMATIC Security Metric of the Quarter #3

PRAGMATIC Security Metric of the Third Quarter




These are the example information security metrics we have discussed and scored over the past three months, ranked in descending order of their PRAGMATIC scores: 





Example metric                       P   R   A   G   M   A   T   I   C  Score
Metametrics                         96  91  99  92  88  94  89  79  95    91%
Access alert message rate           87  88  94  93  93  94  97  89  79    90%
Asset management maturity           90  95  70  80  90  85  90  85  90    86%
Compliance maturity                 90  95  70  80  90  85  90  85  90    86%
Physical security maturity          90  95  70  80  90  85  90  85  90    86%
Thud factor                         82  80  60  60  70  45  85  86  84    72%
Business continuity spend           75  92  20  82  95  70  70  70  70    72%
Benford's law                       84  30  53  95  11  98  62  98  23    62%
Controls coverage                   87  89  65  40  74  35  46  40  30    56%
Homogeneity                         67  70  40  59  67  50  33  65  45    55%
Access control matrix status        70  50  60  60  88  25  40  20  40    50%
Unaccounted software licenses        1   1  90  84   1  70  50  81  30    45%
Unauthorized/invalid access count   61  78  33  16  33   0  44  35  33    37%
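Mechanically, producing such a ranking is trivial once the nine ratings are on record - a quick sketch using three of the rows above:

    # (metric, [P, R, A, G, M, A, T, I, C] ratings from the table above)
    metrics = [
        ("Metametrics",                   [96, 91, 99, 92, 88, 94, 89, 79, 95]),
        ("Benford's law",                 [84, 30, 53, 95, 11, 98, 62, 98, 23]),
        ("Unaccounted software licenses", [ 1,  1, 90, 84,  1, 70, 50, 81, 30]),
    ]

    # Score each metric (mean of its nine ratings) and rank in descending order
    for name, ratings in sorted(metrics, key=lambda m: sum(m[1]) / 9, reverse=True):
        print(f"{name}: {sum(ratings) / 9:.0f}%")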

On that basis, we are happy to announce that the Information Security Metric of the Quarter is <cue drum roll> metametrics, meaning a systematic measure of the quality (utility, value, fitness for purpose etc.) of the organization's information security metrics.

Clearly we have in mind here the PRAGMATIC approach, but in fact other/variant approaches are possible, so long as there is a sensible, rational way of assessing the metrics that makes sense to management.  The key point of the metric, and the reason it scores so highly, is that measuring the quality of the organization's security metrics is an important step that enables management to improve the suite of metrics systematically, rather than in the much more common ad hoc fashion.  Better information security metrics, in turn, allow management to get a firm grip on the organization's information security arrangements, bring them under control, and improve them systematically too.  There is in fact a positive feedback loop at play: better, more reliable and suitable information security arrangements generate better, more reliable and suitable data concerning information security risks and controls - in other words, better metrics.

That said, we fully accept our own obvious bias in this matter.  Having invented and written about the PRAGMATIC approach, we inevitably see metametrics through rose-tinted glasses.  Things may not be quite so rosy from your perspective, and that's fair enough.  But when you have a moment to yourself, take another look at the 13 metrics on the summary table above, plus those covered in the previous 2 quarters (browse back through the blog, or visit the Security Metric of the Quarter #1 and Security Metric of the Quarter #2), and draw your own conclusions.  You probably disagree with us on the scoring of some of the metrics (even in the hypothetical context of an imaginary company).  But, overall, do you accept that this is a reasonably straightforward, sensible way to consider, compare and contrast metrics?  Would you agree that, on the whole, the metrics that score well on the PRAGMATIC scale are better than those that score badly?  Is this discussion about the pros and cons of security metrics, using metametrics, something that you might use back at the ranch?

Bottom line: if we have persuaded you that the PRAGMATIC approach has merit, perhaps even that it might be a valuable addition to your arsenal of security management techniques, read the book for the full nine yards.  This blog is just a taster of what's to come.

Gary & Krag