29 May 2013

Hannover/Tripwire metrics part 1

I mentioned the Hannover Research/Tripwire CISO Pulse/Insight Survey recently on the blog.  Now it's time to take a closer look at the 11 security metrics noted in section 5 of the report.  


The report doesn't explain the origin of these 11 metrics.  How and why were they singled out for the study from the vast population of possible security metrics?  To be precise, it doesn't actually say whether survey respondents were presented with this specific choice of 11 metrics, nor how many metrics were on the list they saw, leaving us guessing about the survey methods.

Furthermore, the report neglects to explain what the succinctly-named metrics really mean.  If survey respondents were given the same limited information, I guess they each made their own interpretations of the metrics and/or picked the ones that looked vaguely similar to metrics they liked or disliked.  

Anyway, for the purposes of this blog, I'll make an educated guess at what the metrics mean and apply the PRAGMATIC method against each one in turn to gain further insight. 

Metric 1: "Vulnerability scan coverage"

Repeatedly scanning the organization's IT systems and networks with automated tools is a common way for large organizations to identify known technical vulnerabilities - old/unpatched software, for example, or unexpectedly active network ports.  The metric refers to 'coverage', which I take to mean the proportion of the organization's IT systems and/or network segments that are being regularly scanned for known technical security vulnerabilities.  
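
To make that concrete, here is a minimal sketch of the calculation, assuming a simple asset inventory recording each in-scope system's last scan date, and (my assumption) treating 'regularly' as 'within the last 30 days' - the inventory and the window are entirely made up:

  from datetime import date, timedelta

  # Hypothetical inventory: every in-scope system with the date it was last scanned (None = never).
  inventory = {
      "web-01": date(2013, 5, 20),
      "db-01": date(2013, 3, 2),
      "hr-laptop-17": None,
  }

  def scan_coverage(inventory, window_days=30, today=None):
      # A system counts as covered only if it was scanned within the last window_days days.
      today = today or date.today()
      cutoff = today - timedelta(days=window_days)
      scanned = sum(1 for last_scan in inventory.values() if last_scan and last_scan >= cutoff)
      return scanned / len(inventory)

  print("{:.0%}".format(scan_coverage(inventory, today=date(2013, 5, 29))))   # -> 33%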

Why would this be the most popular of the 11 metrics in the survey report, apparently used by up to two-thirds of the respondents?  Being naturally cynical, I'd say the fact that the survey was sponsored by Tripwire, a well-known supplier of vulnerability scanners, is a massive clue!

Anyway, let's lift the covers off the metric using the PRAGMATIC approach:
  • Predictiveness: an organization that scores low on this metric is probably unaware of technical vulnerabilities that it really ought to know about, betraying an immature approach to information security, whereas one firmly on top of its technical security vulnerabilities demonstrates a more mature approach ... to that one aspect of IT security anyway.  However, scan coverage per se doesn't tell us much about system/network security - it merely tells us what proportion of our IT systems/networks are being scanned.  The scans themselves might reveal absolutely terrible news, an enormous mountain of serious vulnerabilities that need to be addressed, whereas the coverage metric looks fabulous, or indeed the converse ("We only scan a small proportion of our systems/networks because the scans invariably come up clean!").  At best, this metric gives an indication of the organization's information security management capabilities, and a vague pointer towards its probable status.
  • Relevance to information security is limited in the sense that known technical system/network security issues are only one type of information security vulnerability.  Patching systems and securing network configurations is a valuable security control, but there are many others.  This metric, like most technical or IT security measures, is fairly narrow in scope.
  • Actionability: on this criterion, the metric scores quite well.  If scan coverage is too low (whatever that means), the response, obviously enough, is to increase the coverage by scanning a greater proportion of the kinds of systems/networks already being scanned, and/or by expanding the range of types of systems/networks being scanned.  There will be diminishing returns and, at some point, little if anything to be gained by expanding the coverage any further, but the metric should at least encourage the organization to reach that point.
  • Genuineness: if someone (such as the CIO or CISO) wanted to manipulate the metric for some ulterior purpose (such as to earn an annual bonus or grab a bigger security budget), how could they do so?  Since the metric is presumably reported as a proportion or percentage, one possibility for mischief would be to manipulate the apparent size of the total population of IT systems/networks being scanned, for instance by consciously excluding or including certain categories.  "We don't scan the systems in storage because they are not operational" might seem fair enough, but what about "Development or test systems don't count because they are not in production"?  It's a slippery slope unless some authority figure steps in, ideally by considering and formally defining factors like this when the metric is designed, assuming there is such a process in place.
  • Meaningfulness: aside from the issues I have just raised, the metric is reasonably self-evident and scores well on this point, provided the audience has some appreciation of what vulnerability scanning is about - which is likely if this is an operational security metric, intended for IT security professionals.  Otherwise, it could be explained easily enough to make sense of the numbers at least.  It's quite straightforward as metrics go.
  • Accuracy: in all probability, a centralized vulnerability scanning management system can be trusted to count the number of systems/networks it is scanning, although that is not the whole story.  It probably cannot determine the total population of systems/networks that ought to be scanned, a figure that is essential to calculate the coverage proportion.  Furthermore, we casually mentioned earlier that vulnerability scans should be repeated regularly in order to stay on top of changes.  'Regularly' is another one of those parameters that ought to be formally defined, both as a policy matter and in connection with the metric.  At one ridiculous extreme, scanning a given IT system just once might conceivably be sufficient for it to qualify as "scanned" forevermore.  At the opposite extreme, mothballed IT systems might have to be dragged out of storage every month, week, day or whatever and turned on purely in order to scan them, pointlessly.
  • Timeliness: automated scan counts, calculations and presentation should be almost instantaneous.  Figuring out the total number of systems/networks may involve manual effort and would take a bit longer, but this is probably not a time-consuming burden.  With regard to the risk management process, the metric is related to vulnerabilities rather than incidents, hence the information is available in good time for the organization to respond and hopefully avert incidents caused by known technical attacks.
  • Independence and integrity: technical metrics are most likely to be measured, calculated and reported by technical people who often have a stake in them.  In this case, an independent assessor (such as an IT auditor) could confirm the scan counts easily enough by querying the scanner management console directly, and with more effort they could have a robust discussion with whoever calculated the 'total number of systems/networks' figure.  Someone might conceivably have meddled with the console to manipulate the scan counts, but we're heading into the realm of paranoia there.  It seems unlikely to be a serious issue in practice.  The fact that the figures could be independently verified is itself a deterrent to fraud.
  • Cost-effectiveness: the number of systems/networks that are being vulnerability scanned would most likely be available on the management console as a built-in report from the program.  Determining the total number of systems/networks that could or should be scanned would require some manual effort: although the management console may be able to generate an estimate from the active IP addresses that it discovers, offline systems (such as portables) and isolated network segments (such as the DMZ) would presumably be invisible to the console.  In short, the metric can be collected without much expense but what about the other part of the equation, the benefits?  Concerns about its predictiveness and relevance don't bode well.  There's no escaping the fact that vulnerability scanning is a very narrow slice of information security risk management.
On that basis, and making some contextual assumptions about the kind of organization that might perhaps be considering the vulnerability scanning metric, I calculate the PRAGMATIC score for this metric at about 64% - hardly a resounding hit but it has some merit.
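
For anyone curious about the arithmetic: the overall score is simply the average of the nine criterion ratings, as in this minimal sketch (the ratings shown are illustrative, not the ones behind my 64%):

  def pragmatic_score(ratings):
      # Overall PRAGMATIC score as the plain (unweighted) mean of the nine criterion ratings.
      assert len(ratings) == 9, "one rating per criterion: P, R, A, G, M, A, T, I, C"
      return sum(ratings) / len(ratings)

  # Illustrative ratings only.
  print(round(pragmatic_score([60, 55, 80, 70, 75, 65, 70, 60, 45])))   # -> 64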

This narrow-scope operational metric would of course be perfect if the organization just happened to need to measure vulnerability scanning coverage, for instance if the auditors had raised concerns about this particular issue.  It doesn't hold much promise as a general-purpose organization-wide information security management or strategic metric, however. 

So, that's our take on the first of the 11 metrics.  More to follow: if you missed it, see the introduction and parts two, three, four and five of this series.

SMotW #59: residual risk liability

Security Metric of the Week #59: total liability value of residual/untreated information security risks

This sounds like a metric for the CFO: tot-up and report all the downside potential losses if untreated or residual information security risks were to materialize.  Easy peasy, right?

Err, not so quick, kimo sabe.

In order to report risk-related liabilities in dollar terms, we would presumably have to multiply the impacts of information security incidents by the probabilities of their occurrence.  However, both parameters can only be roughly estimated, hence the metric is subjective and error-prone, which naturally cuts down its Accuracy rating. 
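
In other words, something akin to an annualised loss expectancy calculation.  A minimal sketch, with entirely invented risks and figures, might look like this:

  # Invented residual risks: (estimated impact in dollars, estimated annual probability).
  residual_risks = {
      "customer database breach": (2000000, 0.05),
      "prolonged website outage": (400000, 0.20),
      "unencrypted laptop theft": (150000, 0.30),
  }

  total_liability = sum(impact * probability for impact, probability in residual_risks.values())
  print("Estimated liability of residual risks: ${:,.0f}".format(total_liability))
  # -> $225,000 ... a precise-looking figure built entirely on rough estimates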

The skills and effort needed to calculate the liabilities, especially with the care needed to address that subjectivity, make this a relatively Costly security metric too, although arguably there are substantial benefits in doing the analysis, aside from the metric.  

The Actionability rating is depressed since it is unclear what management would be expected to do in response to the metric.  If the value is high, are they supposed to pump more money into information security?  And what if the value is low: is it safe to cut back on the security budget?  Either way, the metric alone does not indicate the extent or scale of the response.  There is no comparator or criterion, except perhaps for prior values, but unless you went to extraordinary lengths to control the measurement process, random variations arising from the subjectivity would generate a lot of noise masking the puny signal.
  
On a more positive note, the liabilities arising from residual risks are patently Relevant to information security and, expressed as large dollar figures, are likely to be highly Meaningful to management, given the common, if crude, view among managers that "In the end, it all comes down to money".  Making the effort to express information security risks in dollar terms does at least help position security as a business issue, although there are better ways.

Acme managers gave the metric a disappointing overall PRAGMATIC score of 59%, which effectively put it out of the running in its present form given that there were several similar but higher-scoring candidate metrics on the table.  

It's not entirely obvious how the inherent weaknesses of this metric might be addressed to improve its PRAGMATIC score.  What, if anything, would you suggest?  Have you actually used a metric similar to this, and if so how did it work out?  We'd love to hear from you.

28 May 2013

Hannover/Tripwire security survey emphasizes culture

"Building a culture of security within the organization as well as compliance with regulations, standards, and policies are the most important security capabilities for executives and non-executives: the surveyed information security managers were most likely to give these capabilities the highest overall importance ranking."
So says Hannover Research's CISO Pulse Survey aka CISO Insight Survey*, a small-scale study on behalf of Tripwire.  Whether you consider the 100 or so mostly North American respondents a valid sample of the population is your decision, but let's just say that their conclusions are "unsurprising".

Unfortunately the report does not explain what 'building a culture of security' actually involves.  It's a shame that the security culture is so often mentioned glibly in such vacuous, throwaway statements.  The concept may get heads nodding sagely but, in my experience, with a few exceptions, information security professionals, managers and executives rarely have much of a clue about how to do it.  It's the elephant in the room.  Everyone agrees that something must be done, but presumably expects someone else to do it!

An information security awareness program is a vital part of establishing and maintaining the security culture provided it is done well - and by that I'm getting at things such as:
  • Being overtly supported by all levels of management, top-to-bottom;
  • Addressing the entire organization, not just "end-users" (a horribly demeaning term, and an IT-centric one at that);
  • Being creative, appealing and motivational;
  • Being topical and current, keeping up with what's hot in this dynamic area;
  • Presenting useful, interesting, well-written content in forms and styles that suit the intended audiences (note the plural: we each have our own communications needs and preferences, so carve up the population into distinct segments rather than trying to paint them all with the same broad brush);
  • Being broadly-based, taking in a wide variety of topics, some of which are tangential but still important in this sphere (compliance being a classic example: compliance with information security and privacy laws is but a small part of the compliance imperative);
  • Being relevant and applicable, promoting information security as a business issue with genuine business value rather than for its own sake.
When I get the chance, I'll be critiquing and scoring the specific metrics mentioned in the report using the PRAGMATIC method, here on the security metrics blog.  Meanwhile, read more on how to build a security culture (including why that is not the ultimate goal), how to measure it, and about interpreting survey statistics.
Regards,

PS As if that's not enough, we've just published a complete security awareness module on social engineering, social networking and human factors which includes a paper on security metrics in this area.

PPS  I did have time to continue the bloggings after this introduction.  By all means take a look at parts one, two, three, four and five of this series.

* The survey is, of course, part of Tripwire's marketing, hence they squeeze us for our contact details prior to releasing the report.  Let's hope they are responsible marketers with an appreciation of our privacy rights.

27 May 2013

Unusual information security metric: number of train passengers

An information security metrics piece in our local newspaper caught my eye recently.  To be honest, it didn't actually use the word "metric" as such, nor "information security" for that matter, but that's what it was.

Like many others, the train company in Wellington NZ has a problem with fare dodgers.  Some bright spark in their internal audit team, I guess, realized that comparing the number of people who use individual trains with the number of tickets sold would give them a huge clue about which trains and stations should be at the top of the ticket inspectors' hit list.

Counting passengers would be a tedious and error-prone job for a person, but an infra-red beam across the carriage doors would do nicely - particularly as the hardware may well already be installed as part of the door control and safety system.

The automated count will inevitably have errors (e.g. passengers who alight at the wrong stations then rejoin the same train), but provided the counting system is correctly configured and calibrated, the errors should be within known bounds and good enough for the purpose.  Likewise the number of tickets will have genuine errors, for example passengers with season tickets who neglect to swipe them.  The absolute number of passengers traveling is less important than the relative numbers of passengers and tickets: the further apart they are, the more likely something untoward is going on.

I imagine the statistics will be presented graphically, showing a breakdown of the number of passengers and corresponding number of tickets for various journeys.  Those with the greatest discrepancies would naturally be targeted by the inspectors.
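
Something along these lines, perhaps - a minimal sketch with invented services and counts, ranking services by the gap between passengers counted and tickets sold:

  # Invented per-service figures: automated passenger count versus tickets recorded.
  services = [
      ("07:15 Upper Hutt - Wellington", 412, 371),
      ("08:05 Johnsonville - Wellington", 230, 224),
      ("17:30 Wellington - Porirua", 388, 300),
  ]

  # Rank services by the shortfall, putting the prime candidates for inspection first.
  for name, passengers, tickets in sorted(services, key=lambda s: s[1] - s[2], reverse=True):
      print("{}: {} passengers, {} tickets, shortfall {}".format(name, passengers, tickets, passengers - tickets))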

I imagine also that the graphs will have a few empty slots where the ancient rolling stock breaks down - which hints at another important metric for the railway: service reliability.  Conceivably, more passengers would be prepared to pay their way if the trains were modern, comfortable, fast and reliable.  But perhaps I'm being overly cynical.  The fare-dodgers aren't helping, since the fares they dodge would help fund the upgrades needed.

Regards,
Gary Hinson

24 May 2013

Security metric #58: emergency changes

Security Metric of the Week #58: rate of change of emergency change requests

Graphical example


The premise for this week's candidate security metric is that organizations with a firm grip on changes to their ICT systems, applications, infrastructure, business processes, relationships etc. are more likely to be secure than those that frequently find the need for unplanned - and probably incompletely specified, developed, tested and/or documented - emergency changes.  

Emergency change requests are those that get forced through the normal change review, approval and implementation steps to satisfy some urgent change requirement, short-cutting or even totally bypassing some of the steps in the conventional change management process.  Often the paperwork and management authorization are completed retroactively for the most desperate of emergency changes.  

Being naturally pragmatic, we appreciate that some emergency changes will almost inevitably be required even in a highly secure organization, for instance when a vendor releases an urgent security patch for a web-exposed system, addressing a serious vulnerability that is being actively exploited.  Emergency changes are a necessary evil, particularly when the conventional change management process lumbers along.  However, the clue is in the name: emergency changes should not be happening routinely!

Looking at the specific wording of the proposed metric, there are some subtleties worth expanding on.  

First of all, it would be simpler to track and report the number of emergency changes during the reporting period, in other words the rate of emergency changes.  Let's say for the sake of argument that the rate is reported as "12 emergency changes last month": is that good or bad news for management?  Is 12 a high, medium or low value?  What's the scale?  Without additional context, it's impossible to say for sure.  A line graph plotting the metric's value over time (vaguely similar to the one above) would give some of that context, in particular demonstrating the trend.  If instead we measure and report the rate of change of emergency changes, it would be even easier for management to identify when the security situation is improving (i.e. when the rate of change is negative) or deteriorating (a positive rate of change).  For instance, the up-tick towards the right of the rate graph above may cause concern since the rate of emergency changes has clearly increased.  However, the rate of change actually flipped from negative to positive at the bottom of the dip some months earlier, and that would have been a better, earlier opportunity to figure out what was going on in the process.  In this kind of situation, rate of change is a more Timely metric than rate.
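
To make the distinction concrete, here is a minimal sketch with invented monthly counts, showing the rate of change flipping positive at the bottom of the dip, well before the raw count looks alarming:

  # Invented monthly counts of emergency change requests.
  monthly_requests = [30, 24, 19, 15, 12, 14, 18, 25]

  # Rate of change = month-on-month difference; the flip from negative to positive marks the turning point.
  rate_of_change = [later - earlier for earlier, later in zip(monthly_requests, monthly_requests[1:])]
  print(rate_of_change)   # -> [-6, -5, -4, -3, 2, 4, 7]: turns positive while the raw counts still look low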

Next, note that the proposal is to measure not emergency changes made but emergency changes requested.  The idea is to emphasize that, by planning further ahead, fewer emergency changes need be requested.  Fewer requests, in turn, means less work for the change management committee and a greater opportunity to review the emergency changes that do come through.  Deliberately moving the focus upstream in the process from 'Make change' to 'Request change' again makes the metric more Timely.

Finally, consider what would happen if this metric were implemented without much thought and preparation, simply being used by management to bludgeon people into improving (i.e. reducing) the rate of change of emergency change requests.  The intended outcome, in theory, is obviously to improve advance planning and preparation such that fewer emergency changes are required: the unintended consequence may be that, in practice, roughly the same number of changes are put through the process but fewer of them are classed as emergencies.  Some might be termed urgent or obligatory if that would deflect management's wrath while still ensuring that the changes are pushed through, much as if they had in fact been called emergencies.  This is an example of the games people play when we start measuring their performance, especially if we use the numbers as a big stick to beat them.  In this case, the end result may be a worsening of information security since those urgent or obligatory changes may escape the intense, focused review that emergency changes endure.  There are things we could do to forestall the subversion of the metric, such as:
  • Using complementary metrics (e.g. the rate of all types of change);
  • Explicitly defining the classifications to be applied, along with compliance checks to make sure they are being used correctly;
  • Improving the efficiency and speed of the regular change management process (a spin-off benefit of doing something positive for emergency changes) ...
... and the best time to start all that is ahead of implementing the metric, hinting at the 'metric implementation process' (read more on that in the book).  

To close off this blog piece, let's take a quick look at Acme management's opinion of the metric:

P    R    A    G    M    A    T    I    C    Score
64   71   69   73   78   70   70   69   83   72%




They liked it: 72% is a pretty good score.  The PRAGMATIC ratings are fairly well balanced, although there is still some room for improvement.  Management were not entirely impressed by the metric's ability to Predict Acme's information security status, since there are clearly many other factors involved besides the way it handles emergency changes.  On the other hand, they thought the metric had Meaning (particularly having discussed the things we've mentioned here in the blog, in the course of applying the PRAGMATIC method) and was Cost-effective - a relatively cheap and simple way to get a grip on the change management process, with benefits extending beyond the realm of information security.  [That's a topic to discuss another time: PRAGMATIC security metrics are not just good for security!] 

The Timeliness rating was not quite as high as you might have thought, given the earlier discussion, for the simple reason that Acme was not handling a huge number of changes as a rule.  Therefore, the metric only made sense if measured over a period of at least one month, preferably every two or three months, inevitably imposing a time-lag and perhaps causing the hysteresis effect noted in the book (pages 91-93).

15 May 2013

Security metric #57: % of information assets classified

Security Metric of the Week #57: Proportion of information assets correctly classified


Patently, this metric relates to the classification of information, an important form of control.  

The assumption underlying classification is that the majority of an organization's information is neither critical nor sensitive.  It is therefore wasteful to secure all the information to the extent that is appropriate for the small amount that is highly critical or sensitive.  Likewise, the basic or baseline controls that are appropriate for most information are unlikely to be sufficient for the more critical or sensitive stuff.

The classification process can be as simple or as complicated as you like, according to the number of classes.  Taken to extremes:
  • A single classification level such as "Corporate Classified" could be defined, in which case everything would end up being protected to the same extent.
  • More likely, certain important items of information would be deemed "Corporate Classified" with the remainder being "Corporate Unclassified", meaning a two-level classification scheme (OK, three if you count the information assets that have yet to be classified!).
  • At the opposite end of the scale, the classification could be so granular that many classes contain just a single information asset, with a unique set of security controls for that specific asset.
  • Classification is essentially a pointless exercise at both extremes.  Its value increases in the middle ground where 'a reasonable number' of classes are defined, each containing 'a reasonable number' of information assets.  It's up to you to determine what's reasonable!
The driver for classification is also a variable.  Although we mentioned 'criticality' and 'sensitivity', those are not the only parameters.  For example, picture a 3x3x3 Rubik's cube with low-medium-high categories for confidentiality, integrity and availability, or a classification scheme that depends on the value of the information, howsoever defined.  

Military and government classification schemes appear quite simple in that they are largely or exclusively concerned with confidentiality (e.g. Secret, Top Secret, Ultra), but there are numerous wrinkles in practice such as subtly different definitions of the classes by different countries, and subsidiary markings identifying who is authorized to access the information. 

Corporate classification schemes commonly distinguish personal information, trade secrets, other internal-use information and public information, but again there are numerous variations.

Classifying information involves two key steps: 
  1. The information is assessed to determine the appropriate class using defined classification criteria.  
  2. Information security controls deemed appropriate for the particular classification level are applied.  
This week's example metric concerns step 1, and is only indicative of step 2 if we assume that a sound process is being followed religiously.   Step 2 could be measured independently using a suitable compliance metric.
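
A minimal sketch of those two steps, reusing the "Corporate Classified"/"Corporate Unclassified" labels from above with invented criteria and invented control sets:

  # Step 1: assess the asset against defined criteria to determine its class (a made-up two-criterion scheme).
  def classify(contains_personal_data, business_impact):
      if contains_personal_data or business_impact == "high":
          return "Corporate Classified"
      return "Corporate Unclassified"

  # Step 2: apply the controls deemed appropriate for that class (again, purely illustrative).
  baseline_controls = {
      "Corporate Classified": ["encrypt at rest", "need-to-know access", "access logging"],
      "Corporate Unclassified": ["standard access control"],
  }

  level = classify(contains_personal_data=True, business_impact="low")
  print(level, "->", baseline_controls[level])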

The illustrative graphic above shows an hypothetical organization systematically assessing and classifying its information assets, measuring and reporting the metric month-by-month.  The graph plots "Proportion of information assets correctly classified" by month.  The simple Red-Amber-Green color-coding makes it obvious that things have improved substantially since the start of the initiative, with two step-changes in the levels presumably representing discrete projects or stages that made significant progress.

Actually measuring this metric could be something of a mission if you insist on doing so accurately (more on that point below).  First, since you are reporting a proportion, you need to determine the size of the whole, in other words how many information assets are there to be classified, in total?  Answering that further requires clarity over what constitutes an information asset.  Leaving aside the question of whether the term includes ICT hardware and storage media, or just the information/data content, the unit of analysis is also unclear.  For instance, does a customer database containing 1,000 customer records each with 100 fields count as one information asset, or 100, or 1,000, or 100,000, or some other number?   The answer is not immediately obvious.

In the same vein, the metric explicitly refers to assets being 'correctly' classified implying that, strictly speaking, someone should check the veracity of the classifications - potentially a huge amount of work and additional cost just for the sake of the metric.  

On the other hand, clarity over 'information asset' and 'correctly classified' may have value to the organization's information security beyond the metric.

Anyway, let's pick up on that point about the accuracy requirement for this metric.  Since we are reporting a proportion, the absolute numbers are less important than their relative quantities.  Rather than accuracy, consistency of the measurement approach is the primary concern.  With that in mind, it doesn't particularly matter how we define 'information asset' or 'correctly classified' just so long as the definition remains the same from month to month.  For various other reasons, it may occasionally be necessary to alter the definitions, in which case we should probably re-base prior values in order to maintain consistency of the metric.

Another big advantage of reporting a proportion is that it is possible to select and measure a representative sample of the population - 'representative' being the crucial term.  We're not going to discuss sampling methods today, though.  If you need more, there are brief notes about sampling in PRAGMATIC Security Metrics, while any decent statistics text covers it in laborious detail.
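
By way of illustration only, here is a minimal sketch of a sample-based estimate with a rough confidence interval; asset_register and check_classification are hypothetical names for your asset list and classification-checking routine:

  import math
  import random

  def estimate_proportion(assets, correctly_classified, sample_size=200, z=1.96):
      # Estimate the proportion correctly classified from a simple random sample,
      # with an approximate 95% confidence interval (normal approximation).
      sample = random.sample(assets, min(sample_size, len(assets)))
      p = sum(1 for asset in sample if correctly_classified(asset)) / len(sample)
      margin = z * math.sqrt(p * (1 - p) / len(sample))
      return p, margin

  # Hypothetical usage:
  #   p, margin = estimate_proportion(asset_register, check_classification)
  #   print("{:.0%} +/- {:.0%} of sampled assets correctly classified".format(p, margin))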

The excellent PRAGMATIC ratings indicate this metric is a hit for Acme Enterprises Inc:

P    R    A    G    M    A    T    I    C    Score
75   75   97   85   90   80   80   80   80   82%





In discussing various candidate metrics, Acme's managers were particularly impressed with this one's Actionability and clarity of Meaning (notwithstanding the notes above - presumably they already had a clear picture in the areas mentioned).   Driving up the proportion of information assets correctly classified was seen as a valid and viable goal to improve information security - not so much a goal in itself but a means of achieving a general security improvement for Acme as a whole, on the reasonable assumption that, following classification, security resources would be applied more rationally to implement more appropriate security controls.

08 May 2013

Security metric #56: embarrassment factor

Security Metric of the Week #56: embarrassment factor

This naive metric involves counting the privacy breaches and other information security incidents that become public knowledge and so embarrass management and/or the organization.  The time period corresponds to the reporting frequency - for example it might be calculated and reported as a rolling count every 3-12 months, depending on the normal rate of embarrassing incidents.  
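
A minimal sketch of such a rolling count - the incident dates are invented and the roughly-six-month window is just one of the possibilities mentioned above:

  from datetime import date, timedelta

  # Invented dates of incidents flagged as publicly embarrassing.
  embarrassing_incidents = [date(2012, 10, 3), date(2013, 2, 14), date(2013, 4, 30)]

  def rolling_count(incidents, as_of, window_days=182):
      # Count the embarrassing incidents falling within the trailing window.
      cutoff = as_of - timedelta(days=window_days)
      return sum(1 for d in incidents if cutoff < d <= as_of)

  print(rolling_count(embarrassing_incidents, as_of=date(2013, 5, 31)))   # -> 2: the October incident has dropped out of the window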

In bureaucratic or highly formalized organizations, it would be a challenge even to define what constitutes 'embarrassing', although most of us can figure it out for ourselves without getting too anal about it.

The metric's purpose, of course, is to reduce the number of embarrassing breaches/incidents that occur, which may involve reducing the rate of breaches/incidents and/or reducing the extent to which they are embarrassing.    With that end in mind, the precise definition of 'embarrassing' doesn't actually matter much, just so long as the audience appreciates that the metric fairly indicates the underlying trend.  Annotating the graph to remind viewers about specific incidents should have the desired effect.

In PRAGMATIC terms, ACME management rated this metric at 54%, in other words it would be unlikely to make the cut in their Information Security Measurement System or Executive Security Dashboard.  However, this is such a simple, easy and cheap metric to generate that the CISO might like to keep an informal tally of embarrassing incidents for his/her own purposes.  So long as the trend remains favorable, the metric has little impact.  On the other hand, if ACME experiences a rash of embarrassing incidents, mentioning the metric's adverse trend could be an opportunity for the CISO to raise the matter with senior management.  

Sometimes, getting things on the agenda is half the battle.

01 May 2013

Security metric #55: policy coverage

Information Security Metric of the Week #55: information security policy coverage of frameworks such as ISO/IEC 27000

In much the same way that two-dimensional maps of three-dimensional landscapes are useful for hill-walkers, various frameworks, standards and methods such as ISO27k, SP800-53, COBIT and the Standard of Good Practice are useful guides for navigating the field of information security.  Just as cartographers must transform the literal land into graphic representations on the map, the standards bodies and assorted authors take somewhat arbitrary decisions about which elements of information security to cover and in what sequence.

For example, section 7 of ISO/IEC 27002:2005 covers two distinct but related issues in asset management: 7.1 Responsibilities for protecting information assets, and 7.2 Classification of information.  Those two aspects could have been scoped and titled differently and might have been placed in separate sections or incorporated into other sections of the standard but the ISO/IEC committee, in its wisdom, chose to cover them both together in section 7.  

Security responsibilities and information classification are relevant to various information security risks and control objectives, hence they (along with most other controls) could have been discussed from different perspectives in several parts of the standard.  However this would have created duplication and confusion.  Instead, the controls are each discussed once and, where necessary, cross-referenced elsewhere.

ISO/IEC 27002 provides a convenient map that is widely understood.  Aside from the structure - more importantly in fact - the standard lays out a reasonably comprehensive suite of information security controls that could be considered a basic or minimal set: with some exceptions, most organizations that take information security seriously are using most of the controls listed in the standard.  Therefore, comparing an organization's information security controls against those recommended in the standard to identify any gaps is one way to measure the comprehensiveness of its controls.

That said, ISO27k is imperfect.  Aside from issues with the wording and meaning of the standard when it was published, there is a further dynamic aspect.  ISO/IEC 27002:2005 has become outdated in various respects, for example it does not explicitly and comprehensively cover cloud computing since cloud computing was barely even conceived when the standard was drafted.  With some artistic license, several recommended controls in the standard can be interpreted in the cloud computing context, but other necessary controls are either completely missing from the standard or are of limited value as currently worded.  To fill in the gaps, we could wait for the standard to be updated and released (later this year, hopefully), or we could use various other security standards and frameworks in the meantime, supplementing them with advice from information security, risk, compliance, governance and related professionals, tailored to our specific circumstances.

Against that background, let's look at the value of a metric that measures the extent to which the organization's security policies cover the entire security landscape.

When they assessed this metric using the PRAGMATIC method, ACME management had in mind using their own information security coverage map which had been drawn up by the CISO to reflect the common ground across several security standards.  They envisaged the CISO systematically checking for discrepancies between the suite of policies and ACME's map and drawing up a simple color-coded coverage diagram similar to that shown above - red meaning "Inadequately covered", amber being "Partially covered" and green for "Fully covered".
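
If a single headline figure were wanted on top of the color-coded diagram, it might be rolled up along these lines - a minimal sketch with invented areas, statuses and an arbitrary weighting, not ACME's actual map:

  # Invented coverage map: each area of the CISO's map scored against the policy suite.
  coverage = {
      "Access control": "green",         # fully covered
      "Cryptography": "amber",           # partially covered
      "Cloud computing": "red",          # inadequately covered
      "Incident management": "green",
      "Physical security": "amber",
  }

  weights = {"green": 1.0, "amber": 0.5, "red": 0.0}   # arbitrary weighting, purely for illustration
  overall = sum(weights[status] for status in coverage.values()) / len(coverage)
  print("Policy coverage: {:.0%}".format(overall))     # -> 60%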

P    R    A    G    M    A    T    I    C    Score
70   75   90   69   85   76   72   65   85   76%


  

The managers recognized a potential bias in that the person assessing and measuring the policies also owns them.  The CISO might honestly believe that one or more given ACME policies entirely cover part of the coverage map, whereas another security professional might feel that the policies don't go far enough to address the associated risks.  They could get around this limitation by commissioning an independent consultant or auditor to assess and measure the policies, and perhaps by separately measuring the correspondence between their information security map and applicable standards.  They might even go as far as to adopt the excellent Unified Compliance Framework, a rigorous synthesis of information security-related recommendations and obligations drawn from practically all the standards and laws in this area.  On the other hand, all that extra work would markedly delay the production of the metric and increase the costs.  A more pragmatic approach might be to have someone from Internal Audit or Risk Management cast a cynical eye over the scoring and challenge the CISO to justify her decisions - a process known as normalization in the world of metrics.  The CISO would also be asked to make notes during the measurement which would be useful for planning updates both to the policies and to the coverage map ... and here we're already talking about using the metric to inform decisions, implying that it definitely has potential.  In summary, the metric's 76% PRAGMATIC score feels right.

This is just one of a few similar metrics discussed in the book, and it would not be hard to think up many more along these lines, including variants of the ones we have discussed, similar metrics proposed elsewhere, and novel metrics invented for this purpose.  The PRAGMATIC method enables us to analyze and compare the metrics in a rational and systematic way, forcing us to think through the pros and cons of each one before selecting "a few good information security metrics".  We don't mean to trivialize the effort required to complete the metrics design, specify any mathematical analysis and presentation, implement them and of course use them, but PRAGMATIC gets us over by far the biggest obstacle: selecting the right metrics.