30 October 2012

SMotW #30: access control matrix status

Security Metric of the Week #30: status of logical access control matrices for computer applications

The idea behind this metric was to ask application development and support teams, application owners and/or other suitable people to assess the status of logical access control matrices for a range of application systems, perhaps comparing and ranking them.  

Right up-front, we're making the bold assumption that they understand the term "access control matrix".  In practice, we might need to explain it and help them work out the basis on which to judge how good or bad each one is.
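
To make the idea more concrete, here is a minimal sketch in Python of one way such assessments might be scored and ranked.  The yes/no criteria and application names are purely illustrative assumptions, not part of the metric as stated.

    # A minimal sketch of one way to turn the assessments into comparable scores.
    # The criteria and application names below are illustrative assumptions only.
    CRITERIA = ["documented", "complete", "up to date", "actively maintained"]

    def matrix_status(answers):
        """Score an application's access control matrix 0-100 from yes/no criteria."""
        return 100 * sum(bool(answers.get(c)) for c in CRITERIA) // len(CRITERIA)

    assessments = {
        "Payroll":    {"documented": True, "complete": True, "up to date": True, "actively maintained": True},
        "Intranet":   {"documented": True, "complete": False, "up to date": False, "actively maintained": False},
        "Legacy ERP": {},   # no access control matrix at all
    }

    # Rank applications worst-first to highlight where improvement effort is needed
    for app in sorted(assessments, key=lambda a: matrix_status(assessments[a])):
        print(f"{app}: {matrix_status(assessments[app])}%")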

In the hypothetical Acme Inc context, the PRAGMATIC score for this metric works out at 50%:

P   R   A   G   M   A   T   I   C   Score
70  50  60  60  88  25  40  20  40  50%

Although the access control matrix status is a reasonable Predictor of the quality of an application's access control, that is only one component of application  security, and a rather small part of information security as a whole, hence the Relevance score is not so hot.  

The metric is fairly Actionable in the sense that poor scores indicate a need to improve the way access control matrices are used.  However, it may not be clear how to go about making improvements purely on the strength of the metric.  One approach is to share good practices from high-scoring, secure applications with the low-scoring ones, which is fine so long as the systems are comparable and someone is able to identify which practices are good.  

The metric is fairly Genuine in that it is hard to justify a high measure for an application that patently lacks an effective access control matrix (e.g. it doesn't have one at all, or it is undocumented, out of date, incomplete or a total mess).  On the other hand, an assertive application/information asset owner may well be upset at his/her system being scored lower than his/her peers' and, instead of actually improving the access controls, may apply pressure to those who generate the data for the metric.  The potential conflicts of interest of the measurers also depress the Independence rating.

The surprisingly high rating for Meaning reflects the above-stated assumption that people are broadly familiar with the concept, plus the fact that 'status of the access control matrices' is much simpler and easier for people to understand than logical access controls in general or application security as a whole.  The access control matrix is clearly just one element, but in our experience, it is a reasonable indicator of application security.  To put that another way, few secure application systems lack one, and most highly secure application systems have well-developed access matrices that are actively maintained and used.

The ratings for Accuracy, Timeliness and Cost-effectiveness are low due to the amount of time and effort it would take to gather meaningful measures from the range of people envisaged.

23 October 2012

SMotW #29: controls coverage

Security Metric of the Week #29: security controls coverage



Andrew Jaquith suggests a related metric:

Baseline Defenses Coverage (Antivirus, Antispyware, Firewall, and so on)
This is a measurement of how well you are protecting your enterprise against the most basic information security threats. Your coverage of devices by these security tools should be in the range of 94 percent to 98 percent. Less than 90 percent coverage may be cause for concern. 
Whereas Jaquith's metric involves simply determining the proportion of IT systems that are running security software, we had in mind a more sophisticated metric that takes into account a wider range of security controls - perhaps a comprehensive review or audit of information security controls in use across the enterprise against a standard such as COBIT, ISO/IEC 27002 or the Information Security Forum's Standard of Good Practice.

Although we feel it would be quite Predictive and Relevant to information security, the overall PRAGMATIC score for our version of the metric is mediocre, let down by the ratings for Accuracy, Timeliness, Independence and Cost:

P   R   A   G   M   A   T   I   C   Score
87  89  65  40  74  35  46  40  30  56%

A common issue with crude 'coverage' metrics is that they generally gloss over important details and hence do not necessarily reflect the actual information security risks in different situations.  For example, a storeroom full of new PCs and servers waiting to be configured and installed would presumably depress the metric if they were not running firewalls, antivirus etc., yet the risk to the organization is negligible.  On the other hand, a single critical network server with something like an out-of-date antivirus package or a misconfigured firewall might legitimately be assessed as having full coverage, whereas in fact it represents a substantial risk.  

This issue (a metric risk) can be addressed if the people doing the measurement take such factors into account, but their interpretation increases the subjectivity of the process.  This in turn affects the Accuracy, Timeliness and Independence scores, while the Costs increase as a result of needing skilled people to assess coverage in more depth.
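
By way of illustration, here is a minimal Python sketch of a risk-weighted coverage calculation along the lines discussed above.  The assets, criticality weights and control counts are invented for the example, not Acme's actual data.

    # A minimal sketch of risk-weighted coverage: each asset contributes in proportion
    # to its criticality, so boxed spares barely matter while critical servers dominate.
    # All figures below are illustrative assumptions.
    assets = [
        # (name, criticality weight, controls in place, controls required)
        ("critical network server", 5, 8, 10),
        ("office laptop",           1, 4,  5),
        ("boxed PC in storeroom",   0, 0,  5),   # not yet deployed, negligible risk
    ]

    def weighted_coverage(assets):
        """Percentage of required controls in place, weighted by asset criticality."""
        covered = sum(w * in_place / required for _, w, in_place, required in assets if required)
        total = sum(w for _, w, _, _ in assets)
        return 100 * covered / total if total else 0.0

    print(f"Risk-weighted controls coverage: {weighted_coverage(assets):.0f}%")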

16 October 2012

SMotW #28: Benford's law

Security Metric of the Week #28: Benford's law

Benford's law is a fascinating theorem in number theory with applications in information security, accountancy, engineering, computer audit and other fields.  

Benford's law predicts the distribution of the initial digits of numbers in numeric data sets generated in an unbiased and unconstrained fashion.  In short, roughly a third of such multi-digit numbers start with a 1, whereas only about one in twenty starts with a 9.  If someone (such as a fraudster) or something (such as a rogue or buggy computer application) has been manipulating or fabricating data, the numbers tend not to have leading digits with the predicted frequencies.  Turning that on its head, if we compare the actual against the predicted distribution of leading digits in a data set, significant discrepancies probably indicate something strange, and possibly something untoward, going on: we would have to dig deeper to determine the real cause.
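
As a rough illustration of how that comparison might be automated, here is a minimal Python sketch.  The sample figures and the use of a simple chi-squared statistic are our assumptions for the example, not part of Benford's law itself.

    # A minimal sketch of a Benford's-law check: compare observed leading-digit
    # frequencies with the predicted frequencies log10(1 + 1/d).
    import math
    from collections import Counter

    def leading_digit(x):
        """Return the first significant digit of a non-zero number."""
        s = str(abs(x)).lstrip("0.")
        return int(s[0])

    def benford_deviation(values):
        """Chi-squared statistic comparing observed leading digits with Benford's law
        (larger = bigger discrepancy = more reason to dig deeper)."""
        observed = Counter(leading_digit(v) for v in values if v)
        n = sum(observed.values())
        chi2 = 0.0
        for d in range(1, 10):
            expected = n * math.log10(1 + 1 / d)   # Benford's predicted count for digit d
            chi2 += (observed.get(d, 0) - expected) ** 2 / expected
        return chi2

    # Example usage with made-up invoice amounts: flag the data set for
    # investigation if the statistic exceeds a chosen threshold
    amounts = [1043.20, 182.50, 97.10, 1333.00, 2410.75, 118.99, 9201.00]
    print(benford_deviation(amounts))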

The PRAGMATIC scores for this metric are as follows:


P   R   A   G   M   A   T   I   C   Score
84  30  53  95  11  98  62  98  23  62%

Benford's law is normally used to analyze data sets for fraud, and as such the metric has some merit as a fraud indicator.  However, a data set that complies with Benford's law may have been manipulated by a fraudster clever enough to ensure that his fictitious numbers have the predicted frequencies of initial digits.  This is not an altogether unrealistic scenario, since successful fraudsters are indeed clever and manipulative by nature.


The need to explain the mathematical basis for the metric to most audiences detracts from its Meaningfulness score.  The Timeliness and Cost-effectiveness scores are depressed by the practicalities of obtaining and analyzing sufficient volumes of raw data and exploring the real reasons for any skewed distributions.  As far as we know, there are limited applications of Benford's law to information security, hence the low Relevance score.  While Benford's law is highly Accurate (if applied correctly) and Independent, it is only Actionable if the reasons for skewed distributions are understood (for instance identify and fire the fraudster, or diagnose and debug the rogue program).

10 October 2012

SMotW #27: unauthorized/invalid access count

Security Metric of the Week #27: number of times that assets were accessed without authentication or validation

This candidate metric immediately begs several questions - would you know: 

  • When assets are accessed?  Certain accesses to some IT systems, databases, applications, data files etc. may well be monitored and logged routinely, but probably not all of them, and certainly not when it comes to non-IT information assets such as paperwork and intangible knowledge.
  • Who or what was accessing them?  If someone is able to access assets indirectly through a separate computer system, network connection or third party, how would you know this was taking place?  What if the access was entirely automated e.g. a scheduled backup process: does that count as an access event?
  • Whether the access attempts were successful or unsuccessful?  The metric is ambiguous on whether it counts access attempts and/or access events.
  • Whether they were 'authenticated'?  Often, people are presumed to have been authenticated previously purely by dint of being in a certain place (e.g. an employee on site in the office) but what if the presumption is false (e.g. an office intruder or visitor)?
  • Whether they were 'validated'?  'Validation' seems a curious term in this context.  Precisely what is being validated, and on what basis?
If we're being really picky, we might wonder whether this is truly meant to be a simple cumulative count of events, or in fact a rate of accesses (i.e. the count in a defined - but currently unstated - period of time, such as a month).  Going by the literal wording of the metric, we're not even entirely sure that it is measuring access to information assets, specifically!

Our concerns are naturally reflected in a poor PRAGMATIC score:

P   R   A   G   M   A   T   I   C   Score
61  78  33  16  33  0   44  35  33  37%

Notice the zero score for Accuracy.  It is difficult to identify, let alone measure, when someone attempts unauthorized and inappropriate access to an asset.  If they are unsuccessful as a result of the identification, authentication and access controls blocking their access, that fact will hopefully be recorded somewhere.  However, if they are successful due to the controls failing to prevent their access,  that is unlikely to be recorded.  We might take a guess at it, but that's a guess not a measure.

SMotW #27 is a typical example of a security metric that was probably crafted with some specific purpose in mind.  To those who designed it, it probably meant something at the time.  Unfortunately, without the background context, we have little idea what it is about.  On the other hand, if the original design was properly documented or was explained by the designer/s, we would know what the measurement was trying to achieve - in other words, its purpose and the related assumptions or constraints.

02 October 2012

PRAGMATIC Security Metric of the Quarter #2

PRAGMATIC Security Metric of the Second Quarter

It has been a good quarter in the sense that several of the example metrics we have discussed have scored substantially higher than our first Security Metric of the Quarter, Discrepancies between physical location and logical access location.   


With the highest PRAGMATIC score of all the metrics we have reviewed in the past three months, we are proud to announce that our second Security Metric of the Quarter is ... 

... <cue annoying drum roll to cover embarrassing pause while we fumble with the envelope> ...

... the BCM maturity metric!

Congratulations, please walk elegantly to the stage to receive your glittering prize from our scantily-clad presenter and her vaguely amusing side-kick.

Aside from BCM maturity, the HR security maturity metric came a very close second, achieving almost exactly the same score.  They are both 'maturity metrics', of course.  The maturity scoring approach is a particularly flexible and useful way of measuring subjective matters in an objective and repeatable manner.
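
For readers unfamiliar with the approach, here is a minimal sketch of how a maturity metric might be computed: rate each aspect against predefined criteria on a fixed scale, then express the total as a percentage.  The aspects, the 0-4 scale and the ratings shown are illustrative assumptions, not the actual scheme used for Acme.

    # A minimal sketch of maturity scoring.  The aspects and ratings below are
    # illustrative assumptions only.
    MAX_LEVEL = 4   # e.g. 0 = non-existent ... 4 = optimised

    ratings = {      # assessor's ratings for, say, business continuity management
        "policy and governance": 4,
        "business impact analysis": 3,
        "continuity and recovery plans": 3,
        "exercising and testing": 2,
    }

    score = 100 * sum(ratings.values()) / (MAX_LEVEL * len(ratings))
    print(f"Maturity score: {score:.0f}%")   # repeatable because the criteria are fixed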


These are the security metrics we have discussed and scored during the quarter, in the context of the imaginary company Acme Inc.  Click their names to remind yourself what the panel thought of them:



Example metric          P  R  A  G  M  A  T  I  C  Score
BCM maturity            90 95 70 80 90 85 90 87 90  86%
HR security maturity    90 95 70 80 90 85 90 85 90  86%
Traceability            85 89 88 90 91 87 65 84 85  85%
Awareness level         86 89 86 82 85 80 69 48 75  78%
Uptime                  84 97 66 78 94 61 79 47 89  77%
Audit findings          79 89 87 96 92 84 30 96 36  77%
Employee churn          60 66 20 85 60 80 75 80 91  69%
Security spending       82 94 60 60 89 29 33 49 59  62%
IRR                     69 72 25 30 82 50 44 60 88  58%
Policy compliance       55 64 75 50 68 34 59 76 33  57%
Unclassified assets     52 53 63 44 62 13 17 87 44  48%
Systems compliance      48 26 36 41 56 13 19 46 12  33%