26 December 2012

SMotW #37: unaccounted software licenses

Security Metric of the Week #37: proportion of software licenses purchased but not accounted for in the repository

We are not entirely sure of the origin or purpose of this metric, but it's typical of those that pop randomly out of the woodwork every so often for no obvious reason, sometimes taking on a curious aura of respectability depending on who raised or proposed them.

Unfortunately, as it stands, we lack any context or explanation for the metric.  We don't have access to whoever proposed it and can't find their reasoning or justification, so we find it hard to fathom the thinking that presumably led them to propose it.

Perhaps someone had been checking, validating or auditing software licenses and  used something along these lines as a measure in their report.  Maybe it was suggested by a colleague at an information security meeting or online forum, or proposed by a naive but well-meaning manager in such a way that it simply had to be considered.  Who knows, perhaps it came up in idle conversation, mystically appeared out of the mist in a dream, turned up as a worked example in a security metrics book, or featured in some metrics catalog or database.  

It may well have been someone's pet metric: something they invented, discovered or borrowed one day for a specific purpose and found useful in that context, leading them to presume that it must therefore be a brilliant security metric for everyone, in other, unspecified contexts.*

To be frank, we are not terribly bothered where it came from or why it appeared on our shortlist.  We do care about its utility and value as a security metric for ACME Enterprises Inc, relative to the plethora of others under consideration.

Maybe for some it really is a wonderful metric ... but evidently not for ACME.  The PRAGMATIC score says it all:

P     R     A     G     M     A     T     I     C     Score
1     1     90    84    1     70    50    81    30    45%
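(In case you are wondering how the overall score relates to the nine ratings: the scores in these tables are consistent with a simple rounded mean of the nine criteria ratings.  A minimal sketch in Python - the function name is ours, purely illustrative:)

    # Overall PRAGMATIC score as the rounded mean of the nine criteria
    # ratings (Predictive, Relevant, Actionable, Genuine, Meaningful,
    # Accurate, Timely, Independent, Cost-effective), each rated 0-100.
    def pragmatic_score(ratings):
        """Overall score (%) from nine PRAGMATIC criteria ratings."""
        assert len(ratings) == 9, "one rating per PRAGMATIC criterion"
        return round(sum(ratings) / 9)

    # The unaccounted-licenses metric from the table above:
    print(pragmatic_score([1, 1, 90, 84, 1, 70, 50, 81, 30]))  # -> 45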




It scores abysmally on Relevance (to ACME's information security), on its ability to Predict or be used to direct ACME's information security status, and on its Meaning to ACME's information security people and managers.  On the other hand, it is highly Actionable in the sense that a low score self-evidently  implies the need to account for more of the purchased software licenses.  It's also pretty Genuine and would be hard to falsify unless someone had the motivation, skill and time to fabricate a stack of 'evidence' from which the numbers could be reconstructed.  ACME's people have better things to do.

OK so it's not ideal for information security but maybe it would have more value to, say, Finance or IT?  Perhaps they too could be persuaded to PRAGMATIC rate the metric and compare it to those they are using or considering ... no promises, mind you.

Anyway, its poor score clearly takes it out of contention as an information security metric for ACME, and right now we have a date with a mince pie and a small glass of vintage port ...

Merry Christmas readers.

* Note that we are not immune to this kind of generalization, nor to a bias towards the metrics that we ourselves find valuable.  The metrics in the book, including the 'security metrics of the week' on this blog, come from a variety of sources.  Some are metrics that we have used in anger ourselves, including a few of our own pet metrics, of course.  Some have been suggested, recommended even, by various other security metrics authors.  Some made an appearance in security surveys, management reports, blogs, discussion groups and standards such as ISO/IEC 27004.  Some we invented on-the-fly while writing the book, deliberately trying to illustrate and demonstrate the power of the PRAGMATIC approach in helping to differentiate the good from the bad and the ugly.

Please remember, above all else, that whatever we or others may say or imply, we are NOT telling you what security metrics to use in your situation.  We are not clairvoyants.  We have ABSOLUTELY NO IDEA what your specific security information needs might be, except in the most general hand-waving sense of being infosec greybeards ourselves.  Much as we would love to just give you "the best security metrics" or a set of "recommended" or "valuable" or "worthwhile" metrics, we honestly can't do that.

What we are offering is a
straightforward method for you to
find your own security metrics.

In the unlikely event that you are short of inspiration, the book includes a stack of advice on where to find candidate security metrics - places to go looking - and hints on how to invent new ones either from scratch or by modifying and customizing or adapting existing or proposed metrics.  The PRAGMATIC method is a great way to sift through a giant haystack of candidate security metrics to find the very needles you've been hunting for.

20 December 2012

SMotW #36: business continuity spend

Security Metric of the Week #36: business continuity expenditure

At first glance, this looks like a must-have information metric: surely expenditure on business continuity is something that management can't possibly do without?  As far as ACME Enterprises is concerned, this metric warrants a fairly high PRAGMATIC score of 71%, making it a strong candidate for inclusion in ACME's information security measurement system.

It has its drawbacks, however.  Determining BC expenditure accurately would be a serious challenge, but thankfully great precision is probably not necessary in this context: estimations and assumptions may suffice.  Still, it would be handy if the accounting systems could be persuaded to regurgitate a sufficiently credible and reliable number on demand.  Furthermore, it is not entirely obvious what management is expected to do as a result of the metric, at least not unless the business benefits of business continuity are also reported.  The net value of business continuity, then, could be an even better metric.

04 December 2012

SMotW #35: compliance maturity

Security Metric of the Week #35: information security compliance management maturity

Compliance with information security-related laws and regulations is undoubtedly of concern for management, since non-compliance can lead to  substantial penalties both for the organization and, in some cases, for its officers personally.  Legal and regulatory compliance is generally asserted by the organization, but confirmed (and in a sense measured) by independent reviews, inspections and audits.  

But important though they are, laws and regulations are just part of the compliance landscape.  Employees are also expected to comply with obligations imposed by management (in formal policies mostly) and by other third parties (in contracts mostly).  Compliance in these areas is also confirmed/measured by various reviews, inspections and audits.

In order to measure the organization's compliance practices, then, we probably ought to take all these aspects into account. 

P     R     A     G     M     A     T     I     C     Score
90    95    70    80    90    85    90    85    90    86%



This week's security metric is another maturity measure.  Maturity metrics (as we have described before) are very flexible and extensible, so it's no problem to take account of all the issues above, and more besides.

We have been quite harsh on the Actionability rating for this metric, giving it "just" 70%, in anticipation of the practical issues that would crop up if Acme's management deemed it necessary to improve the organization's security compliance.  On the other hand, breaking down and analyzing security compliance in some detail makes this an information-rich metric.  Aside from the overall maturity score, management would be able to see quite easily where the biggest improvement opportunities lie.

PRAGMATIC security metrics for competitive advantage

Blogging recently about Newton's three laws of motion, we mentioned that organizations using PRAGMATIC metrics have competitive advantages over those that don't.  Today, we'll expand further on that notion.

Writing in IT Audit back in 2003, Will Ozier discussed disparities in the way information security and other risks are measured and assessed.  Not much seems to have changed in the nine years since it was published.  Ozier suggested a "central repository of threat-experience (actuarial) data on which to base information-security risk analysis and assessment": today, privacy breaches are being collated and reported fairly systematically, thanks largely to the privacy breach disclosure laws, but those are (probably) a tiny proportion of all information security incidents - at least, in my experience things such as information loss, data corruption, IP theft and fraud are far more prevalent and can be extremely damaging.  Since these are not necessarily reportable incidents, most don't  become public knowledge, hence we don't have reliable base data from which to calculate the associated risks with any certainty. 

"In my experience" is patently not a scientific basis however.  I doubt that adding "Trust me" would help much either.

Talking of non-scientific, there is no shortage of surveys, blogs and other sources of anecdotal information about security incidents.  However, the statistics are of limited value for making decisions about information security  risks.  The key issue is bias: entire classes of information security incident may not even be recognized as such.  Take human errors, for instance.  Human errors that lead to privacy breaches may be reported but for all sorts of reasons there is a tendency not to want to blame someone, hence often the cause is unstated or ascribed to something else.  Most such incidents probably remain undetected, although some errors are noticed and quietly corrected.

However, while we lack publicly-available data about most information security incidents, organizations potentially have access to a wealth of internal information, provided that information security incidents are reported routinely to the Help Desk or wherever.  Information security reviews, audits and surveys within the organization can provide yet more data, especially on relatively serious incidents, and especially in large, mature organizations.

OK, so where is this rambling assessment leading us in relation to information security metrics?  Well in case you missed it, that "wealth of internal information" was of course a reference to security metrics.

And what have security metrics, PRAGMATIC security metrics specifically, got to do with competitive advantage?  Let me explain.

Aside from selecting or designing information security metrics carefully from the outset, management should review the organization's metrics from time to time to confirm and where necessary improve, supplement or retire them.  This should ideally be a systematic process, using metametrics (information about metrics) to examine the metrics, comparing their value rationally against their information requirements.  Fair enough, but why should they use PRAGMATIC metametrics?  Won't SMART metrics do?

The Accuracy, Independence and Genuineness of measurements are important concerns, especially if there might be systematic biases in the way the base data are collected or analyzed, or even deliberate manipulation by someone with a hidden agenda and a blunt ax.  This hints at the possibility of analyzing the base data or measurement values for patterns that might indicate bias or manipulation (Benford's law springs immediately to mind) as well as for genuine relationships that may have Predictive value.  It also hints at the need to check the quality and reliability of individual data sources: for instance, the variance or standard deviation is a guide to their variability and, perhaps, their integrity or trustworthiness.  Do you routinely review and reassess your security metrics?  Do you actually go through the process of determining which ones worked well, and which didn't?  Which ones were trustworthy guides to reality, and which ones lied?  Do you think through whether there are issues with the way the measurement data are gathered, analyzed, presented and interpreted - or do you simply discard hapless metrics that haven't earned their keep, without truly understanding why?
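To make that Benford hint concrete, here is the kind of quick screening check we have in mind - a sketch only, in Python; the function is ours, and the data would be whatever base measurements you suspect:

    import math
    from collections import Counter

    def benford_chi2(values):
        """Chi-squared distance between observed first digits and Benford's
        law.  A large value hints at bias or manipulation - or merely at
        data that Benford's law doesn't fit - so treat it as a prompt for
        questions, not a verdict."""
        first = [next(ch for ch in str(abs(v)) if ch in "123456789")
                 for v in values if v]
        n = len(first)
        counts = Counter(first)
        chi2 = 0.0
        for d in "123456789":
            expected = n * math.log10(1 + 1 / int(d))  # Benford proportion
            chi2 += (counts[d] - expected) ** 2 / expected
        return chi2

    # e.g. benford_chi2(reported_incident_costs), tracked period to period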

Relevance and Timeliness are both vital considerations for all metrics when you think about it.  How many security situations have been missed because some droplet of useful information was submerged in a tsunami of junk?  How many times have things been neglected because the information arrived too late to make the necessary decisions?  To put that another way, how much more efficiently could you direct and control information security if you had a handle on the organization's real security risks and opportunities, right now?  

In respect of competitive advantage, Cost-effectiveness pretty much speaks for itself.  It's all very well 'investing' in a metrics dashboard gizmo with all manner of fancy dials and glittery indicators, but have you truly thought through the full costs, not just of generating the displays but of using them?   Are the measurements merely nice to know, in a coffee-table National Geographic kind of way, or would you be stuffed without them?  What about the opportunity cost of either being unable to use or discounting other, perfectly valid and useful metrics that, for some reason, don't look particularly sexy in the dashboard format?  Notice that we're not railing against expensive dashboards per se, provided they more than compensate for their costs in terms of the value they generate for the organization - more so than other metrics options might have achieved.  Spreadsheets, rulers and pencils have a lot going for them, particularly if they help focus attention on the information content rather than its form.

In contrast to the others, Meaningfulness is a fairly subtle metametric. We interpret it specifically as a measure of the extent to which a given information security metric 'just makes sense' to its intended audience.  Is the metric self-evident, smack-the-forehead blindingly obvious even, or does it need to be painstakingly described, at length, by a bearded bloke in a white lab coat with frizzy hair, attention-deficit-disorder and wild, staring eyes?  A metric's inherent Meaningfulness is a key factor in relation to its perceived value, relevance and importance to the recipient, which in turn affects the influence that the numbers truly have over what happens next.  A Meaningful metric is more likely to be believed, trusted and hence actually used as a basis for decisions, than one which is essentially meaningless.  Let the competitors struggle valiantly on with their voluminous management reports, tedious analysis and, frankly, dull appendices stuffed with numbers that nobody values.  We'll settle for the Security Metrics That Truly Matter, thanks.

The Timeliness criterion is also quite subtle.  In the book we explain how the concept of feedback and hysteresis applies to all forms of control, although we have not seen it described before in this context.  A typical  manifestation of hysteresis involves temperature controls using relatively crude electromechanical or electronic sensors and actuators.  As the temperature  reaches a set-point, the sensor triggers an actuator such as a valve or heating element to change state (opening, closing, heating or cooling as appropriate).  Consequently the temperature gradually changes until it reaches another set point, whereupon the sensor triggers the actuator to revert to its original state.  The temperature therefore cycles constantly between those set points, which can be markedly different in badly designed or implemented control systems.  Hysteresis loops apply to information security management as well as temperature regulation: for instance, adjusting the settings on a firewall between "too secure" and "too insecure" is better if the metrics relating to firewall traffic and security exceptions are available and used in near-real-time, rather than on the basis of, say, a monthly firewall report, especially if the report takes a week or three to compile and present!  The point is that network security incidents may exploit that gap or delay between "too secure" and "too insecure", so Timeliness can have genuine security and business consequences.
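For the curious, here is a toy simulation of that kind of two-set-point control (the numbers are arbitrary).  Widen the gap between the set points, or slow the sensor down, and the swings get bigger - which is exactly the point about the monthly firewall report:

    # Toy two-set-point ("bang-bang") control loop: the measured value
    # cycles endlessly between the set points rather than settling -
    # the hysteresis loop in action.
    LOW, HIGH = 18.0, 22.0                    # set points (arbitrary)
    temp, heating = 20.0, True
    for step in range(12):
        temp += 0.9 if heating else -1.1      # crude environmental drift
        if heating and temp >= HIGH:
            heating = False                   # sensor trips actuator off
        elif not heating and temp <= LOW:
            heating = True                    # ... and back on again
        print(f"step {step:2d}: temp={temp:4.1f} heating={heating}")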

Finally for today, spurious precision is a factor relating to several of the PRAGMATIC criteria (particularly Accuracy, Predictability, Relevance, Meaning, Genuineness and Cost-effectiveness).  We're talking about situations where the precision of reporting exceeds the precision of measurement and/or the precision needed to make decisions.  Have your competitors even considered this when designing their security metrics?  Do they obsess over marginal and irrelevant differences between numbers derived from inherently noisy measurement processes, or appreciate that "good enough for government work" can indeed be good enough, much less distracting and eminently sensible under many real-world circumstances?  A firm grasp of statistics can help here, but it's not necessary for everyone to be a mathematics guru, so long as someone who knows their medians from their Chi-squared can be trusted to spot when assumptions, especially implicit ones, no longer hold true.
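A simple discipline follows from that: report no more significant figures than the measurement process supports.  A throwaway helper, ours and purely illustrative:

    import math

    def round_sig(x, sig=2):
        """Round x to sig significant figures for reporting."""
        if x == 0:
            return 0
        return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))

    # Reporting "47.338%" from a noisy survey implies precision that
    # simply isn't there; "47%" is honest and easier to digest.
    print(round_sig(47.338))   # -> 47.0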

We'll leave you with a parting thought.  Picture yourself presenting and discussing a set of PRAGMATIC security metrics to, say, your executive directors.  Imagine the confidence you will gain from knowing that the metrics you are discussing have been carefully selected and honed for that audience because they are Predictive, Relevant, Actionable ... and all that.  Imagine the feeling of freedom to concentrate on the knowledge and meaning, and thus the business decisions about security, rather than on the numbers themselves.   Does that not give you a clear advantage over your unfortunate colleagues at a competitor across town, struggling to explain let alone derive any meaning from some near-random assortment of pretty graphs and tables, glossing over the gaps and inconsistencies as if they don't matter?

28 November 2012

SMotW #34: homogeneity

Security Metric of the Week #34: organizational and technical homogeneity

The degree of homogeneity (sameness) or heterogeneity (variation or variability) within the organization and its technologies affects its aggregated information security risks, in much the same way that monoculture and multiculture crops may face differing risks from natural predators, parasites, adverse environmental conditions etc.  A particular mold that successfully attacks a certain cultivar of wheat, for example, may decimate a field planted exclusively with that cultivar, whereas it may not take hold in a neighboring field planted with a mix of wheat cultivars that differ in their susceptibility or resistance to the mold.  On the other hand, under ideal conditions, the monoculture crop may do exceptionally well (perhaps well enough to counteract the effects of the mold) where the mixed crop does averagely.

Homogeneity of technologies, suppliers, contracts etc. increases an organization's exposure to  common threats - for example, serious security vulnerabilities in MS Windows may simultaneously impact the millions of organizations that rely on Microsoft's products.  On the other hand, homogeneity means standardization, lower complexity and ‘economies of scale’, generally generating substantial business benefits.  It is clearly in Microsoft's commercial interests to be seen to address serious security vulnerabilities in its products urgently, or risk mass defection of its customers (those who aren't entirely dependent, at least!).

The overall PRAGMATIC score for this candidate metric is mediocre:

P     R     A     G     M     A     T     I     C     Score
67    70    40    59    67    50    33    65    45    55%




The metric rates poorly on both Timeliness and Cost due to the difficulties of gathering and analyzing suitable data with any kind of precision.  However, a quick-and-dirty, low-Accuracy assessment might be sufficient to get this issue raised and discussed at the top table, which might actually be good enough (we're hinting at the measurement objective - an issue we have hardly mentioned in the blog but which is covered at length in the book).  The metric might be measured using the scoring scales we have discussed in several previous blog postings, for instance.

Sitting at 40%, the Actionability rating is also depressed for two distinct reasons: 
  1. It is not entirely clear what constitutes an 'ideal' amount of homogeneity, since, as we have just said, there are pros and cons to it;
  2. There are obvious practical constraints on management's ability to change the organization's homogeneity even if they wanted to do so.  Senior management might institute a supplier diversity policy, for instance, but there is likely to be considerable inertia due to the existing portfolio of suppliers currently contracted.  In many cases, there will be overriding commercial or technical reasons to retain the current suppliers, on top of the natural affinity that emerges through social interaction between individual employees and their supplier contacts.
Bottom line: this candidate metric is unlikely to make the grade for Acme Enterprises Inc., but it may be valuable elsewhere.

22 November 2012

Newton's take on security metrics

He may not have considered this at the time, but Sir Isaac Newton's three laws of motion are applicable to security metrics ... 


Law 1.  Every object in a state of uniform motion tends to remain in that state of motion unless an external force is applied to it. 

An organization lacking effective metrics has no real impetus to change its approach to information security.  Management doesn't know how secure or insecure it is, nor whether security is "sufficient", and has no rational basis for allocating resources to security, nor for spending the budget on security activities that generate the most value.  Hence, they carry on doing pretty much what they've always done.  They approve the security budget on the basis of "last year's figure, plus or minus a bit".  They do security compliance activities under sufferance, and at the last possible moment.  

The law of inertia is particularly obvious in the case of large bodies that continue to blunder through situations that smaller, more nimble and responsive ones avoid.  We're not going to name names here: simply check the blogosphere and news media for plenty of unfortunate examples of sizable, generally bureaucratic, often governmental organizations that continue to experience security incident after incident after incident.  Management shrugs off adverse audit reports, inquiries and court cases as if it's not their fault.  "Our hands are tied", they bleat, "don't blame us!" and Messrs Sarbanes and Oxley groan.

By the same token, the auditors, investigators, courts and other stakeholders lack the data to state, definitively, that "You are way behind on X, and totally inadequate on Y".  They know things are Not Quite Right, but they're not entirely sure what or why.  Furthermore, those who mandate various security laws, regulations and edicts have only the vaguest notion about what's truly important, and what would have the greatest effect.  Mostly they're guessing too.


Law 2.  The relationship between an object's mass m, its acceleration a, and the applied force F is F = ma

Applying a force to an object accelerates or decelerates it.  The amount of acceleration/deceleration is proportional to the force applied and the mass of the object.  Do we honestly need to spell out how eloquently this describes metrics?  For those of you who whispered "Yes!" we'll simply mention the concepts of proportional control and feedback.  Nuff said.
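If a picture helps, here is a minimal proportional-feedback loop in Python (the numbers are invented): the metric plays the sensor, and the corrective effort applied each period is proportional to the measured gap from the target.

    # Proportional feedback: corrective effort is proportional to the
    # measured gap between current state and target (F = ma, loosely).
    target, level, gain = 80.0, 40.0, 0.4   # arbitrary illustrative values
    for period in range(6):
        gap = target - level                # what the metric reports
        level += gain * gap                 # response proportional to gap
        print(f"period {period}: level={level:.1f}")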


Law 3.  For every action there is an equal and opposite reaction.

An interesting one, this.  

Once organizations are designing, developing, selecting, implementing, using, managing and improving their suites of PRAGMATIC information security metrics, they will inevitably start using the metrics to make changes that systematically and measurably improve their security.  That's the action part.  

Newton might predict a reaction: what would that be?  

Well, one reaction will involve the human threats such as hackers, malware authors, fraudsters, spies and so forth: they will up their game in order to continue successfully exploiting those victims who are more secure, or of course direct their evil attentions to less secure victims, including those who lack security metrics and hence presumably still manage, direct and resource security using guesswork, gut feel, magic incantations, lucky charms and astrology.   "I've heard on the golf course|read in the in-flight magazine|been told by a little bird that competitor X only spends 5% of its IT budget on security.  Clearly, we're spending far too much!"

Another reaction will involve other parts of the organization - other departments who notice that, for once, information security has management's ear.  They are successfully justifying the security budgets and investments that they themselves would love to have.  Some will react negatively, challenging and undermining the security metrics out of jealousy and a desire to go back to the good old days (law 1 in action), while others will seize the opportunity to reevaluate their own metrics, finding their own PRAGMATIC set.

Yet another reaction will come from the authorities, owners and other stakeholders who can't help but notice the marked contrast between PRAGMATIC and non-PRAGMATIC organizations.  The former give them fact-based, reliable and most of all useful information about their information security status and objectives, while the latter mysteriously hint at celestial bodies and rabbits' feet.  We confidently predict that security compliance obligations imposed on organizations will increasingly specify PRAGMATIC metrics, and indeed the PRAGMATIC approach, as part of the deal.

Let's be realistic about it: the change will undoubtedly be incremental and subtle at first, starting with the thought leaders and innovators who grasp PRAGMATIC and make it so.  Gradually, the language of security metrics will change as the early adopters enthuse about their new-found abilities to manage security more rationally and scientifically than has been possible before, and others come to appreciate that at last they can make sense of the metrics mumbo-jumbo spouted by the consultants and standards.  The laggards who cling to their existing approaches like a drowning man clings to a sodden log will face extinction through increasing security threats and incidents, and increasingly strident pressure from their stakeholders to "be honest about security".

21 November 2012

SMotW #33: thud factor

Security Metric of the Week #33: thud factor, policy verbosity index, waffle-o-meter

If you printed out all your security policies, standards, procedures and guidelines, piled them up in a heap on the table and gently nudged it off the edge, how much of a thud would it make?  

'Thud factor' is decidedly tongue-in-cheek but there is a point to it.  The premise for this metric is that an organization can have too much security policy material as well as too little.  Excessively lengthy, verbose, confusing and/or overlapping policies are less likely to be read, understood and complied with, while compliance and enforcement would also be of concern for excessively succinct, narrow and ambiguous policies.

A scientist might literally measure the thud using an audio sound level meter, dropping the materials (stacked/arranged in a standard way) from a standard height (such as one metre) onto a standard surface (such as the concrete slab of the laboratory floor), getting a sound pressure reading in decibels.  A diligent scientist would take numerous readings, including controls to account for the background noise levels in the lab (he/she might dream of having a soundproof anechoic chamber for this experiment, but might settle for a series of experimental runs in the dead of night), checking the variance to confirm whether everything was under control ...

... but that's not really what we had in mind.  We were thinking of something far more crude, such as a questionnaire/survey using a simple 5 point Likert scale:
  1. Silence, similar to a pin drop.
  2. A slight flutter of papers.
  3. A gentle jolt as the heap hits the floor.
  4. A distinct thud.
  5. A bang loud enough to make people turn and look.
More likely, we'd opt for a continuous percentage scoring scale using those five waypoints to orient respondents but allowing them to navigate (interpolate) between them if they wish.

At the high end of the scale, there is so much policy stuff that it has become a total nightmare in practice to manage, use and maintain.  Management keeps on issuing policies in a vain attempt to cover every conceivable situation, while employees appear to keep on taking advantage of situations that don't yet have explicit policies.  Worse still, issued policies are constantly violated due to lack of awareness or confusion over them, caused in part by inconsistencies and errors in the policy materials.  Some policies are probably so old they predate the abacus, while others use such stilted and archaic language that a high court judge would be flummoxed.  There are policies about policies, different policies covering the same areas (with conflicting requirements, of course) and probably turf wars over who should be writing, mandating, issuing and complying with the policies.  If anyone does anything remotely unacceptable, security-wise, there is probably a policy statement somewhere that covers it ... but unfortunately there is also probably another one that could be interpreted to sanction it.

For organizations right at the low end of the scale, security policies are conspicuous by their absence.  There may perhaps be some grand all-encompassing statement along the lines of a Vogon admonition:


"Information shall be secured." 

... but no explanation or supporting detail - no practical guidance for the poor sods who are supposed to be complying with it.  Consequently, people make it up as they go along, some of them naturally tending towards the "Do nothing" and "It's not my problem" approach, others believing that security requires absolutely anything and everything that is not explicitly required for legitimate, stated reasons to be blocked.  On the upside, periodic policy maintenance is a breeze since there is next to nothing to review and confirm, but what little material there is, is so ambiguous or vacuous that nobody is quite sure what it means, or what it is intending to achieve.  Compliance is a joke: there is no point trying to hold anyone to anything since there are policy gaps wide enough to steer an entire planetary system through.  Management resorts to trite phrases such as "We trust our people to do the right thing", as if that excuses their appalling lack of governance.

There is a happy medium between these extremes, although it would be tricky to set a hard and fast rule determining the sweet spot since it is context-dependent.  It usually makes sense to have the security policies match those covering other areas in the organization (such as finance, HR, operations, governance and compliance) in terms of quality (taking account of aspects such as depth, breadth, integrity, utility, readability etc.), but on the other hand if those other policies are generally accepted as being poor and ineffective, the security stuff should be better, good enough perhaps to show them The Way.

In metrics terms, the subjectivity of the measure is an issue: thud factor is in the eye of the beholder.  One person might think there are "far too many bloody policies!" while another might say "You can never have enough policies - and indeed we don't."  Nevertheless, explaining the issue and persuading several suitable people to rate thud factor on a common scale is one way to generate objective/scientific data from such a subjective matter.  Imagine, for instance, that you really did circulate a survey using the 5 point thud factor scale shown above, and collected responses from, say, 20 managers and 30 staff.  Imagine the mean score was 2.7: that is close to the middle of the scale, which indicates the subjective opinion that there are 'about enough' security policies etc., meaning there is probably no burning need to create or destroy policies.  However, if at the same time the variance was 1.8, that would indicate quite a wide diversity of opinions, some people believing there are too few policies and others believing there are too many - in other words, there is limited consensus on this issue, which might be worth pursuing (especially as there is probably some confusion about the measure!).  If you had the foresight to encourage people to submit written comments while completing the survey, you would have a wealth of additional information (non-numeric metrics, if there is such a beast) concerning the reasoning behind the scores and, perhaps, some specific improvement suggestions to work on.  
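To make that arithmetic concrete, here is a sketch using Python's statistics module, with invented responses contrived to land near the mean of 2.7 and variance of roughly 1.8 quoted above:

    import statistics

    # 50 invented thud-factor ratings on the 1-5 scale (standing in for
    # the 20 managers and 30 staff in the example above).
    scores = [1, 1, 1, 2, 3, 3, 3, 4, 4, 5] * 5

    print(statistics.mean(scores))       # -> 2.7: 'about enough' policy
    print(statistics.variance(scores))   # -> ~1.85: limited consensus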

Anyway, let's see how it scores as a PRAGMATIC metric in the imaginary situation of Acme Enterprises Inc.:

P     R     A     G     M     A     T     I     C     Score
82    80    60    60    70    45    85    86    84    72%




72% puts this metric surprisingly high on the list of candidates.  It turns out to be a potentially valuable security metric that might have been dismissed out of hand without the benefit of the PRAGMATIC analysis.

The PRAGMATIC method gives us more than just a crude GO/NO-GO decision and an overall score.  Looking at the specific ratings, we see that Accuracy is definitely of concern with a rating of 45%.  If Acme's management showed sufficient concern about the policy quality issue and was seriously considering adopting this metric, there are things we could do to improve its Accuracy - albeit without resorting to nocturnal scientists!  For example, we might revise the wording of the Likert scale/waypoints noted above to be more explicit and less ambiguous.  We could be more careful about the survey technique, such as the sample sizes and statistics needed to generate valid results, and perhaps look for differences between sub-populations (e.g. do managers and staff have the same or differing impressions of thud factor?  Do all departments and business units share more-or-less the same view?).
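Extending the earlier survey sketch, breaking the (again invented) responses down by sub-population is trivial, and divergent means or spreads between groups would itself be a finding worth chasing:

    import statistics

    # Hypothetical per-group thud-factor ratings, for illustration only.
    responses = {
        "managers": [2, 3, 3, 3, 3, 4, 2, 3, 3, 4],
        "staff":    [1, 1, 2, 4, 5, 1, 2, 5, 4, 2],
    }
    for group, scores in responses.items():
        print(f"{group:>8}: mean={statistics.mean(scores):.1f} "
              f"stdev={statistics.stdev(scores):.2f}")
    # Similar means with very different spreads would suggest the groups
    # are reading the scale (or the policies) quite differently.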

If we presume that a management-led team has been tasked with developing or reviewing Acme's security metrics, the PRAGMATIC approach would turn what is normally a rather vague and awkward argument over which metrics to use into a much more productive discussion about the merits of various candidate metrics, comparing their PRAGMATIC scores and using the individual ratings to propose improvements to their design.