21 November 2012

SMotW #33: thud factor

Security Metric of the Week #33: thud factor, policy verbosity index, waffle-o-meter

If you printed out all your security policies, standards, procedures and guidelines, piled them up in a heap on the table and gently nudged it off the edge, how much of a thud would it make?  

'Thud factor' is decidedly tongue-in-cheek but there is a point to it.  The premise for this metric is that an organization can have too much security policy material as well as too little.  Excessively lengthy, verbose, confusing and/or overlapping policies are less likely to be read, understood and complied with, while excessively succinct, narrow and ambiguous policies raise compliance and enforcement concerns of their own.

A scientist might literally measure the thud using a sound level meter, dropping the materials (stacked/arranged in a standard way) from a standard height (such as one metre) onto a standard surface (such as the concrete slab of the laboratory floor), getting a sound pressure reading in decibels.  A diligent scientist would take numerous readings, including controls to account for the background noise levels in the lab (he/she might dream of having a soundproof anechoic chamber for this experiment, but might settle for a series of experimental runs in the dead of night), checking the variance to confirm whether everything was under control ...

... but that's not really what we had in mind.  We were thinking of something far cruder, such as a questionnaire/survey using a simple five-point Likert scale:
  1. Silence, similar to a pin drop.
  2. A slight flutter of papers.
  3. A gentle jolt as the heap hits the floor.
  4. A distinct thud.
  5. A bang loud enough to make people turn and look.
More likely, we'd opt for a continuous percentage scoring scale using those five waypoints to orient respondents but allowing them to navigate (interpolate) between them if they wish.
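
If it helps, here is a minimal sketch in Python (purely illustrative - the function and variable names are ours, not part of any standard) of how a 1-5 waypoint rating might be converted to that continuous percentage scale:

# Illustrative sketch: convert a 'thud factor' rating on the 1-5 waypoint scale
# (fractional values allowed, so respondents can interpolate between waypoints)
# into a score on a continuous 0-100% scale.

def thud_to_percent(rating):
    """Map a 1-5 thud factor rating onto 0-100%."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    return (rating - 1) / 4 * 100

# A respondent who feels the pile lands somewhere between 'a gentle jolt' (3)
# and 'a distinct thud' (4) might answer 3.5, i.e. 62.5% on the continuous scale.
print(thud_to_percent(3.5))  # 62.5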

At the high end of the scale, there is so much policy stuff that it has become a total nightmare in practice to manage, use and maintain.  Management keeps on issuing policies in a vain attempt to cover every conceivable situation, while employees appear to keep on taking advantage of situations that don't yet have explicit policies.  Worse still, issued policies are constantly violated due to lack of awareness or confusion over them, caused in part by inconsistencies and errors in the policy materials.  Some policies are probably so old they predate the abacus, while others use such stilted and archaic language that a high court judge would be flummoxed.  There are policies about policies, different policies covering the same areas (with conflicting requirements, of course) and probably turf wars over who should be writing, mandating, issuing and complying with the policies.  If anyone does anything remotely unacceptable, security-wise, there is probably a policy statement somewhere that covers it ... but unfortunately there is also probably another one that could be interpreted to sanction it.

For organizations right at the low end of the scale, security policies are conspicuous by their absence.  There may be some grand, all-encompassing statement along the lines of a Vogon admonition:

"Information shall be secured." 

... but no explanation or supporting detail - no practical guidance for the poor sods who are supposed to be complying with it.  Consequently, people make it up as they go along, some naturally tending towards the "Do nothing" and "It's not my problem" approach, others believing that security means blocking absolutely anything and everything that is not explicitly required for legitimate, stated reasons.  On the upside, periodic policy maintenance is a breeze since there is next to nothing to review and confirm, but what little material exists is so ambiguous or vacuous that nobody is quite sure what it means, or what it is intended to achieve.  Compliance is a joke: there is no point trying to hold anyone to anything since there are policy gaps wide enough to steer an entire planetary system through.  Management resorts to trite phrases such as "We trust our people to do the right thing", as if that excuses their appalling lack of governance.

There is a happy medium between these extremes, although it would be tricky to set a hard and fast rule determining the sweet spot since it is context-dependent.  It usually makes sense to have the security policies match those covering other areas in the organization (such as finance, HR, operations, governance and compliance) in terms of quality (taking account of aspects such as depth, breadth, integrity, utility, readability etc.), but on the other hand if those other policies are generally accepted as being poor and ineffective, the security stuff should be better, good enough perhaps to show them The Way.

In metrics terms, the subjectivity of the measure is an issue: thud factor is in the eye of the beholder.  One person might think there are "far too many bloody policies!" while another might say "You can never have enough policies - and indeed we don't."  Nevertheless, explaining the issue and persuading several suitable people to rate thud factor on a common scale is one way to generate reasonably objective data from such a subjective matter.

Imagine, for instance, that you really did circulate a survey using the five-point thud factor scale shown above, and collected responses from, say, 20 managers and 30 staff.  Imagine the mean score was 2.7: that is close to the middle of the scale, indicating the collective opinion that there are 'about enough' security policies etc., meaning there is probably no burning need to create or cull policies.  However, if at the same time the variance was 1.8, that would indicate quite a wide diversity of opinions, some people believing there are too few policies and others believing there are too many - in other words, there is limited consensus on this issue, which might be worth pursuing (especially as there is probably some confusion about the measure!).  If you had the foresight to encourage people to submit written comments while completing the survey, you would have a wealth of additional information (non-numeric metrics, if there is such a beast) concerning the reasoning behind the scores and, perhaps, some specific improvement suggestions to work on.
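
By way of a hypothetical illustration, the summary statistics could be crunched with a few lines of Python (the ratings list below is a dummy purely to exercise the function; real data would come from the survey):

from statistics import mean, variance

def summarise(ratings):
    """Return the mean and sample variance of a list of 1-5 thud factor ratings."""
    return mean(ratings), variance(ratings)

# Dummy ratings for demonstration only - not real survey results.
demo_ratings = [1, 2, 2, 3, 3, 3, 4, 4, 5]
print(summarise(demo_ratings))  # roughly: mean 3, variance 1.5

# A mean near the middle of the scale combined with a large variance (2.7 and
# 1.8 in the example above) suggests 'about enough policy' on average, but
# limited consensus among respondents.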

Anyway, let's see how it scores as a PRAGMATIC metric in the imaginary situation of Acme Enterprises Inc.:

  P     R     A     G     M     A     T     I     C    Score
 82    80    60    60    70    45    85    86    84     72%

72% puts this metric surprisingly high on the list of candidates.  It turns out to be a potentially valuable security metric that might have been dismissed out of hand without the benefit of the PRAGMATIC analysis.
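
In case you are wondering where the 72% comes from, it appears to be simply the unweighted mean of the nine individual ratings - a quick sanity check in Python (assuming no weighting is applied):

from statistics import mean

# Acme's ratings for thud factor against the nine PRAGMATIC criteria,
# in the order P, R, A, G, M, A, T, I, C (copied from the table above).
ratings = [82, 80, 60, 60, 70, 45, 85, 86, 84]

print(round(mean(ratings)))  # 72 - matching the overall score shown above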

The PRAGMATIC method gives us more than just a crude GO/NO-GO decision and an overall score.  Looking at the specific ratings, we see that Accuracy is definitely of concern with a rating of 45%.  If Acme's management showed sufficient concern about the policy quality issue and was seriously considering adopting this metric, there are things we could do to improve its Accuracy - albeit without resorting to nocturnal scientists!  For example, we might revise the wording of the Likert scale/waypoints noted above to be more explicit and less ambiguous.  We could be more careful about the survey technique, such as the sample sizes and statistics needed to generate valid results, and perhaps look for differences between sub-populations (e.g. do managers and staff have the same or differing impressions about thud factor?  Do all departments and business units share more-or-less the same view?).
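
For instance, a crude screen for differences between sub-populations might look something like the sketch below.  Welch's two-sample t-test via SciPy is one option among many; treating ordinal Likert ratings this way is a rough-and-ready shortcut rather than rigorous statistics, and the function and variable names are purely illustrative:

from scipy import stats

def compare_groups(manager_ratings, staff_ratings, alpha=0.05):
    """Rough check for whether two groups rate thud factor differently.

    Applies Welch's two-sample t-test; with small samples of ordinal
    Likert data this is only an indicative screen, not a formal analysis.
    """
    t_stat, p_value = stats.ttest_ind(manager_ratings, staff_ratings, equal_var=False)
    if p_value < alpha:
        return "Managers and staff appear to see thud factor differently (p = %.3f)" % p_value
    return "No clear difference between the groups (p = %.3f)" % p_value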

If we presume that a management-led team has been tasked with developing or reviewing Acme's security metrics, the PRAGMATIC approach would turn what is normally a rather vague and awkward argument over which metrics to use into a much more productive discussion about the merits of various candidate metrics, comparing their PRAGMATIC scores and using the individual ratings to propose improvements to their design.
