24 September 2012

SMotW #25: critical systems compliance

Security Metric of the Week #25: proportion of critical information assets residing on fully compliant systems

In order to measure this metric, someone has to: 
  1. Identify the organization's critical information assets unambiguously;
  2. Determine or clarify the compliance obligations;
  3. Assess the compliance of systems containing critical information assets.
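Once those three steps have been taken, the calculation itself is a simple ratio.  Here is a minimal sketch in Python, assuming a hypothetical asset inventory in which each record notes whether the asset is classified as critical and whether the system holding it passed its most recent compliance assessment - the field names and sample data are purely illustrative:

# Minimal sketch of the metric calculation over a hypothetical asset inventory.
assets = [
    {"name": "customer database", "critical": True,  "system_compliant": True},
    {"name": "payroll records",   "critical": True,  "system_compliant": False},
    {"name": "marketing wiki",    "critical": False, "system_compliant": True},
]

critical = [a for a in assets if a["critical"]]
on_compliant_systems = [a for a in critical if a["system_compliant"]]

# Proportion of critical information assets residing on fully compliant systems
proportion = len(on_compliant_systems) / len(critical) if critical else 0.0
print(f"{proportion:.0%} of critical assets reside on fully compliant systems")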

All three activities are easier said than done.  In our experience, the concepts behind this metric tend to make the most sense in those military and governmental organizations that make extensive use of information classification, but even there the complexities involved in measuring compliance with a useful amount of accuracy would make the metric slow and expensive to produce.  Consequently, the low Accuracy, Cost and Timeliness scores all take their toll on the metric's PRAGMATIC score:

P     R     A     G     M     A     T     I     C     Score
48    26    36    41    56    13    19    46    12    33%


Thus far, we have considered and scored this and other example metrics from the perspective of management within the organization.  The situation is somewhat different from the perspective of the authorities that typically impose or mandate security compliance obligations on others.  We are not going to elaborate further ourselves, but leave it to you as an exercise to re-score the metric on behalf of, say, a government agency responsible for privacy.  Imagine yourself inside such a body, discussing information security metrics with management.  What would they make of its Predictability, Relevance to information security, Actionability, Genuineness, Meaningfulness to the intended audience, Accuracy, Timeliness, Independence or integrity, and Cost-effectiveness?  Go ahead, try out the PRAGMATIC method and tell us what you make of it ...

17 September 2012

SMotW #24: security traceability

Security Metric of the Week #24: Traceability of information security policies, control objectives, standards & procedures

This metric is based on the fundamental premise that all information security controls should be derived from and support control objectives, those being explicit business requirements for security.   Controls that cannot be traced to specific, documented requirements may not be justified, and may in fact be redundant and counterproductive: alternatively, the requirements may be valid but unstated, indicating a likely gap in the organization's policies etc.

The metric implies that there should be a way of tracing, referencing or linking controls with the corresponding security requirements, in both directions: it should be possible for management to determine which control/s satisfy a given control objective, and which control objective/s are satisfied by a given control.  There are various ways of achieving this in practice, such as a 2-dimensional table with control objectives along one axis and controls along the other.  The body of the table can simply contain ticks for the relevant intersections, or more detailed information concerning the implementation status of the controls.  In theory, every row in the table should contain at least one entry in the body, and so should every column: many will have more than one since there is a many-to-many relationship between control objectives and controls.
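To illustrate the idea, here is a rough Python sketch of the two-way check, holding the 'table' as a set of (control objective, control) pairs.  The identifiers are hypothetical, and the logic simply flags empty rows (objectives with no supporting controls) and empty columns (controls with no documented objectives):

# Sketch of the two-way traceability check, using hypothetical identifiers.
objectives = {"CO-1", "CO-2", "CO-3"}
controls   = {"CTL-A", "CTL-B", "CTL-C", "CTL-D"}

links = {                 # intersections ticked in the table
    ("CO-1", "CTL-A"),
    ("CO-1", "CTL-B"),
    ("CO-2", "CTL-B"),    # many-to-many: CTL-B supports two objectives
}

# Control objectives with no supporting control (empty rows)
unsatisfied_objectives = objectives - {o for o, _ in links}

# Controls that trace to no documented objective (empty columns)
unjustified_controls = controls - {c for _, c in links}

print("Objectives lacking controls:", sorted(unsatisfied_objectives))
print("Controls lacking objectives:", sorted(unjustified_controls))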

Turning now to the PRAGMATIC score:

P     R     A     G     M     A     T     I     C     Score
85    89    88    90    91    87    65    84    85    85%

That's a good score, let down just a bit on Timeliness since it will take a while to draw up the table and elaborate all the linkages to start with, and then to re-check them every time the metric is reported.  Furthermore, making changes in response to the metric will inevitably be a slow process, resulting in a substantial lag between measuring, reporting and responding to the metric.

By the way, a similar many-to-many relationship exists between control objectives and risks.  Conceptually, this adds a third dimension to the table, allowing us to trace information security risks to the corresponding control objectives and on to the related controls (or vice-versa).  Such multi-dimensional relationships are quite easily represented in a database but are harder to track, manage and measure manually.  
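For what it's worth, here is a rough sketch of that three-way traceability using SQLite (bundled with Python): two link tables capture the many-to-many relationships, and a join traces each risk through its control objectives to the supporting controls.  Table and column names are illustrative, not prescriptive:

# Rough sketch of three-way traceability (risks -> objectives -> controls).
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE risk_objective (risk TEXT, objective TEXT);
    CREATE TABLE objective_control (objective TEXT, control TEXT);
""")
db.executemany("INSERT INTO risk_objective VALUES (?, ?)",
               [("R-1", "CO-1"), ("R-1", "CO-2"), ("R-2", "CO-2")])
db.executemany("INSERT INTO objective_control VALUES (?, ?)",
               [("CO-1", "CTL-A"), ("CO-2", "CTL-B"), ("CO-2", "CTL-C")])

# Trace each risk through its control objectives to the supporting controls
for row in db.execute("""
        SELECT ro.risk, ro.objective, oc.control
        FROM risk_objective ro
        JOIN objective_control oc ON oc.objective = ro.objective
        ORDER BY ro.risk, oc.control"""):
    print(row)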

10 September 2012

SMotW #23: business continuity maturity

Security Metric of the Week #23: Business Continuity Management (BCM) Maturity

The high PRAGMATIC score for this week's metric shows that we consider it a valuable measure of an organization's business continuity management practices:

P     R     A     G     M     A     T     I     C     Score
90    95    70    80    90    85    90    87    90    86%

This metric is designed along exactly the same lines as the HR security maturity metric, SMotW #15, using a maturity scoring table that lays out predefined criteria indicating various levels of maturity for various aspects of business continuity management.

We are not going to give you the entire maturity scoring table now (you will have to continue waiting patiently for the book, I'm afraid) but here are two rows demonstrating the approach:

The four columns (maturity levels):
  1. No business continuity management
  2. Basic business continuity management
  3. Good business continuity management
  4. Excellent business continuity management

Row: business continuity policy
  1. Nothing even vaguely approximating a policy towards business continuity
  2. Something vaguely approximating a policy towards business continuity, though not very well documented, hard to locate and probably out of date
  3. A clear strategy towards business continuity, supported by a firm policy owned and authorized by management and actively maintained
  4. A coherent and comprehensive business continuity strategy, supported by suitable policies, procedures, guidelines and practices; strong coordination with other relevant parties

Row: business continuity requirements
  1. Business continuity requirements completely unknown
  2. Major business continuity requirements identified, but typically just those mandated on the organization by law; limited documentation
  3. Business impact analysis used systematically from time to time to identify, characterize and document business continuity requirements, both internal and external
  4. Business continuity requirements thoroughly analyzed, documented and constantly maintained through business impact analysis, compliance assessments, business analysis, disaster analysis etc.


The table's four columns correspond to maturity scores of 0%, 33%, 67% and 100% respectively.  Each row in the table considers a different aspect or element of the measured area, in this case business continuity management, laying out four markers or sets of criteria for the four scores.   

If your management decides to adopt security maturity metrics like this, you could either take the scoring tables directly from the book (when available!), or use them as a starting point for customization.  Adapt them according to your experience in each area, integrating good practices recommended by various standards such as ISO27k and NIST's SP800-series, and organizations such as ISACA and the Business Continuity Institute.  Adjust the wording of the criteria to be more objective if you wish.  Include specific criteria or conditions.  Reference your policies, legal and regulatory obligations, whatever.

You may for instance feel that certain aspects of business continuity management are far more important than others, in which case you could weight the scores from each row accordingly ... but doing so would further complicate the scoring process and might lead to interminable discussions about the weightings, rather than about the organization's business continuity management maturity.  

Similarly, you may prefer more or fewer columns, giving you more or less granularity in the criteria.  Knock yourself out.

The percentage scoring scale lets us score things "towards the lower edge of the category" if appropriate, and fine-tune the scores to represent a range of situations (e.g. if two businesses, departments or business units both qualify for the 3rd column on a certain criterion but one is a bit stronger than the other, its score might be a few percent higher).
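Pulling those ideas together, here is an illustrative Python sketch: each row of the maturity table yields a percentage score (starting from 0, 33, 67 or 100 and fine-tuned up or down as discussed), and the overall metric is their average, optionally weighted.  The row names, scores and weights below are invented for the example:

# Illustrative maturity scoring: per-row percentages rolled up into one metric.
row_scores = {
    "policy":       67,   # qualifies for the 3rd column
    "requirements": 70,   # 3rd column, but a little stronger than baseline
    "testing":      33,
}

weights = {"policy": 1.0, "requirements": 2.0, "testing": 1.0}  # optional

def maturity_score(scores, weights=None):
    """Weighted average of the per-row maturity percentages."""
    if not weights:
        return sum(scores.values()) / len(scores)
    total_weight = sum(weights[row] for row in scores)
    return sum(scores[row] * weights[row] for row in scores) / total_weight

print(f"Unweighted BCM maturity: {maturity_score(row_scores):.0f}%")
print(f"Weighted BCM maturity:   {maturity_score(row_scores, weights):.0f}%")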

The flexible design of this style of metric, coupled with its high PRAGMATIC score, is why we find it so useful in practice.  It is a particularly good way of measuring relatively subjective matters in a relatively objective and repeatable manner.

03 September 2012

SMotW #22: IRR

Security Metric of the Week #22: Internal Rate of Return

IRR is one of a number of financial metrics in our collection.  It measures the projected profitability of an investment, such as a proposed security implementation project.  If the IRR is greater than the organization's cost of capital, the project may be worth pursuing (unless funds are limited and other proposals offer even higher IRRs or more compelling intangible benefits).
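For readers who, like us, are not financiers: IRR is the discount rate at which the net present value (NPV) of the projected cash flows comes to zero.  The following Python sketch finds it by simple bisection, using made-up cash flows for a hypothetical security project (an initial outlay followed by annual net benefits); in practice Finance will have its own models:

# Minimal IRR sketch: find the discount rate at which NPV equals zero.
def npv(rate, cashflows):
    """Net present value of cash flows discounted at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Bisection search for the rate where NPV crosses zero (one sign change)."""
    if npv(lo, cashflows) * npv(hi, cashflows) > 0:
        raise ValueError("NPV does not change sign over the search range")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

cashflows = [-100_000, 35_000, 40_000, 45_000, 30_000]  # illustrative only
rate = irr(cashflows)
print(f"Projected IRR: {rate:.1%}")  # compare against the cost of capital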

Comparing IRR against other financial metrics is tricky.  For starters, we are not accountants, economists or financiers by training, and this stuff is hard!  Furthermore, different circumstances and different types of investment call for different metrics ... but arguably the most important factor is that organizations tend to rely on certain financial metrics to assess and monitor most of their projects.  Regardless of any technical arguments for or against using IRR as a metric, if management routinely uses it, there is undoubtedly going to be pressure on security projects to follow suit.

Being PRAGMATIC about it:

P     R     A     G     M     A     T     I     C     Score
70    72    25    30    82    50    44    60    88    58%

Notice the 88% score for Cost: if IRR is going to be required anyway for investment appraisal, the marginal cost of using it as a security metric is almost nil.  Finance probably has the requisite models/spreadsheets and expertise to calculate IRR for all proposed projects on an even footing ... but someone still has to provide the input parameters, so it is not totally free.

The low ratings for Accuracy and Genuineness reflect the underlying fact that virtually all investments are inherently uncertain.  The metric depends on projections and estimations, and these in turn are influenced by the assumptions of whoever provides the raw data.  Strong optimists and pessimists are likely to make unrealistic claims about the costs and benefits, and may not even appreciate their own bias (we all secretly believe we are the realists!).  'Calibrating' the people making the projections may help, and this tends to happen naturally with experience - in other words, IRR accuracy probably correlates with the number of years of experience at calculating investment returns.

Another way to improve the accuracy is to persuade several competent and interested people to provide the requisite numbers for the factors used to calculate IRR.  If their estimations cluster closely around the same values (i.e. low deviation from the mean, low variance), the numbers have more credibility than if they provide wildly differing estimates.  Exploring the reasons for any differences (for example, different assumptions or factors) can generate further insight and value from the metric, perhaps suggesting the need to control those factors more closely.
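As a crude illustration of that clustering test, the Python fragment below measures how tightly several independent estimates of one input factor agree, using the sample standard deviation relative to the mean.  The estimators, figures and threshold are entirely made up:

# Gauge how tightly independent estimates of one IRR input factor cluster.
from statistics import mean, stdev

estimates = {"analyst A": 38_000, "analyst B": 42_000, "analyst C": 35_000}

values = list(estimates.values())
spread = stdev(values) / mean(values)   # coefficient of variation

print(f"Mean estimate: {mean(values):,.0f}")
print(f"Relative spread: {spread:.0%}")
if spread > 0.25:                       # arbitrary illustrative threshold
    print("Estimates diverge widely - explore the differing assumptions")
else:
    print("Estimates cluster closely - the figures have more credibility")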