29 May 2012

SMotW #8: Corporate security culture

Security Metric of the Week #8: measuring the organization's security culture

Culture is such a simple word for such a huge amount of complexity and ambiguity.  Fostering a 'culture of security' within the organization sounds like an excellent idea, but it's a lot easier to say than to do.  Perhaps metrics can help drive things in the right direction?

Culture can be measured in various ways ranging from informally observing and describing things, through to scientific research methods used in sociology and psychology.  Common surveys fall in the middle somewhere: their Accuracy depends on how well they are designed and conducted.  

The Independence of the surveyors is another factor: using a specialist team of competent, scientifically trained, professional assessors is an option, but it will dramatically affect the Timeliness and Cost of the metric compared to using internal auditors or students.  Self-administered intranet surveys may be the way to go, but again they need to be designed carefully to avoid excessive bias (for instance, the reluctance of some employees to complete web surveys honestly, if at all).

Another option is to measure, say, the extent of employee compliance with policies, or absenteeism, or the general nature and tone of emails, water-cooler mutterings or social media.  These may only be indirectly related to corporate security culture but they do suggest possible metrics, perhaps focusing on certain aspects of most concern. 

With our vision of Acme Enterprises and a specific version of this example metric in mind, we scored it thus:

P    R    A    G    M    A    T    I    C    Score
60   76   55   75   60   60   10   75   20   55%
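Incidentally, the overall scores in these tables are consistent with a simple arithmetic mean of the nine criterion ratings, rounded to the nearest whole percent.  A minimal sketch in Python, using the ratings from the table above:

```python
def pragmatic_score(ratings):
    """Combine nine 0-100 PRAGMATIC criterion ratings into one overall score."""
    if len(ratings) != 9:
        raise ValueError("expected nine ratings: P, R, A, G, M, A, T, I, C")
    return round(sum(ratings) / len(ratings))

culture_metric = [60, 76, 55, 75, 60, 60, 10, 75, 20]
print(f"{pragmatic_score(culture_metric)}%")  # -> 55%
```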

In your organization, given its current state of security maturity and facing its particular challenges and opportunities, you might score this metric quite differently to our example, and that's OK.  The context is important.  Nevertheless, the PRAGMATIC method provides a rational basis for the discussion, and often leads to insights and even better metrics.

21 May 2012

SMotW #7: Logical vs physical access discrepancies


Security Metric of the Week #7: Discrepancies between physical location and logical access location

Correlating records (log entries) between physical and logical access control systems will often reveal curious discrepancies, such as someone logging in remotely (e.g. from home, a remote office or via the Interweb) whereas their staff pass has recently been used to access the office locally.  Did they shoot home from the office, without swiping their pass on exit?  Have they loaned their staff pass or login credentials to someone?  Has someone duplicated their staff pass or hacked their network credentials?  Or are they for some reason logging in at the office through a 3G or other mobile network, instead of using the conventional LAN cable dangling out of the wall?  Correlating the logs to find such discrepancies may or may not provide more specific answers to questions of this nature, depending on how much information is available and how reliable it is.  However, the number of such discrepancies, perhaps divided into different types, is a metric that tells us something about the scale of this particular issue.
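For illustration, here is a minimal sketch of the kind of correlation described above, flagging people whose staff pass was swiped at the office around the same time as a remote login.  The record formats, field names and two-hour window are assumptions, not a prescription:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=2)  # assumed: how close in time events must be to conflict

def find_discrepancies(badge_events, login_events):
    """badge_events: [(user, time)] office swipe-ins.
    login_events: [(user, time, source)] where source is 'local' or 'remote'.
    Returns (user, badge_time, login_time) tuples worth investigating."""
    suspicious = []
    for user, badge_time in badge_events:
        for login_user, login_time, source in login_events:
            if (login_user == user and source == "remote"
                    and abs(login_time - badge_time) <= WINDOW):
                suspicious.append((user, badge_time, login_time))
    return suspicious

badges = [("alice", datetime(2012, 5, 21, 9, 0))]
logins = [("alice", datetime(2012, 5, 21, 9, 30), "remote")]
print(len(find_discrepancies(badges, logins)))  # the metric: a count of discrepancies
```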


We gave this metric an overall PRAGMATIC score of 78% in the context of the hypothetical manufacturing company that we envisaged for the book.  The highest-scoring parameter was 90% for Genuineness: given the relatively strong controls generally used to secure access logs, few individuals are capable of deliberately altering them to manipulate the metric.  The lowest score was 60% for Cost-effectiveness, since correlating logs is painstaking, although it can be partially automated.  In your specific organization, these scores may well be quite different, for genuine reasons.  Have a think about how you would score this metric against the PRAGMATIC criteria.

P    R    A    G    M    A    T    I    C    Score
75   76   72   90   82   75   85   83   60   78%

We categorized this as a management-level metric, of interest to middle managers rather than senior management or staff/operational people.  It is clearly not a strategic security metric and so would be of little use to a director.  At the same time, it would be of limited utility to those people running the physical and logical access control systems: what would they make of it?  We figured a security manager might perhaps use the metric to ensure that sufficient resources and priority are applied to log reviews etc. 

14 May 2012

SMotW #6: Policy coverage

Security Metric of the Week #6: Information security policy coverage

Corporate information security policies don't normally exist in splendid isolation but to some extent build upon internal and external sources such as:
  • Identified information security risks (threats, vulnerabilities and potential impacts) or issues of concern to the organization; 
  • Other policy statements and/or other requirements mandated by management;
  • Security-relevant compliance obligations imposed by applicable laws, regulations, contracts, agreements, moral codes etc.;
  • Good practice security advice from public information security standards, models and frameworks such as ISO27k and the NIST SP800-series, plus the vendors of IT systems and software, consultants, textbooks, industry advisories etc., plus of course the advice of competent and experienced employees (e.g. IT audit, risk management and information security professionals).

This week's example metric therefore seeks to measure coverage of the organization's security policies - the extent to which they take account of applicable issues and requirements.


There are several ways in which this metric might be measured in practice, depending on available resources and on the particular aspects that are of most value to management for policy-related decisions.  A common approach is to draw up a two-dimensional matrix (i.e. a table!) listing out the requirements from each source (normally as columns), and identifying the corporate policies, standards, procedures, contracts, agreements etc. that cover them (normally as rows).  A simple traffic-light color scheme in the body of the matrix will suffice to identify requirements that are fully covered by the corporate policies etc. (green), partially covered (amber), not covered (red) or not applicable (clear).  


While the resultant "heat map" is perfectly adequate as a reporting and management tool, some might prefer to count and report the number or proportion of reds, ambers and greens, get more sophisticated using percentages and weightings (since some requirements are trivial whereas others may be crucial), or perhaps identify the number of months that have passed since each requirement was identified and has not yet been fully addressed.  These elaborations will increase the Cost of the metric and may affect its Meaningfulness (the numbers will have to be explained, but the additional information may be valued by management).
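As a rough illustration of those elaborations, the sketch below turns a red/amber/green matrix into a single weighted coverage percentage.  The credit scheme, weights and requirement names are invented for the example:

```python
STATUS_CREDIT = {"green": 1.0, "amber": 0.5, "red": 0.0}  # assumed credit per status

def weighted_coverage(matrix, weights):
    """matrix: requirement -> 'red'/'amber'/'green' (N/A entries simply omitted).
    weights: requirement -> relative importance."""
    total = sum(weights[req] for req in matrix)
    earned = sum(weights[req] * STATUS_CREDIT[status]
                 for req, status in matrix.items())
    return 100 * earned / total

matrix = {"PCI DSS logging": "green", "New privacy law": "red",
          "ISO27k access control": "amber"}
weights = {"PCI DSS logging": 3, "New privacy law": 5, "ISO27k access control": 2}
print(f"{weighted_coverage(matrix, weights):.0f}% coverage")  # -> 40%
```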

P    R    A    G    M    A    T    I    C    Score
75   82   92   78   80   70   73   60   81   77%

We gave this metric a healthy score of 77% on the PRAGMATIC scale.  It is certainly Relevant to security and Actionable.  For example, if an additional security requirement emerges as a result of, say, risk analysis or a new law, it will initially be identified on the matrix as red blobs against the policies, procedures etc. that need to be updated.  These will gradually turn green as the updates are completed.  

Do you measure policy coverage?  If so, how do you do it?  How useful do you find the metric?  We'd love to find out from you how it works out in practice.

08 May 2012

PRAGMATIC metrics from security surveys

Like most of its kind, the latest information security breaches survey is stuffed with security-related statistics (metrics), mostly used to identify issues, compare trends against previous surveys, and contrast responses between certain categories of organizations.  Some of them could potentially be adapted for use as security metrics within a single organization, but which (if any) would make worthwhile corporate security metrics?  The PRAGMATIC method gives us a rational way to address the question.

Suppose, for example, that management is concerned about the organization's security policy - or rather its policies, since in fact there are several.  Maybe there is a general feeling that, although the policies are formally written and mandated, employees are paying scant attention to their security obligations.  Are there any metrics in the breaches survey that we might use or adapt for internal corporate use?

The breaches survey tells us on page 6: "Possession of a security policy by itself does not prevent breaches; staff need to understand it and put it into practice.  Only 26% of respondents with a security policy believe their staff have a very good understanding of it; 21% think the level of staff understanding is poor.  Three-fifths of large organisations invest in a programme of security awareness training, up by 10% on 2010 levels; less than half of small businesses, however, do this.  The survey results indicate a clear payback from this investment; 36% of organisations that have an ongoing programme feel their staff have a very good understanding of policy, versus only 13% of those that train on induction only and 9% of those that do nothing.  Similarly, only 10% of organisations with an ongoing programme feel their staff have a poor understanding, versus 36% of those that train on induction and 49% of those that do nothing.  There is some industry variation, with the property and construction sector least mature.  Sometimes, it takes a breach before companies train their staff."

Two metrics are implied by that paragraph:
  1. Extent of employee understanding of the security policies; and
  2. Amount of investment in security awareness training.
For the first metric, the survey measured respondents' opinions, presumably using a Likert scale, something along the lines of: "How well do you believe employees understand the security policies: (A) Not at all; (B) Poorly; (C) So-so; (D) Quite well; or (E) Completely?"  [This is not the actual question they asked - I didn't see the actual survey questionnaire so I'm guessing.]  We might consider using this kind of approach to survey opinions within our organization, although there are lots of issues to take into account when designing any kind of survey, such as:
  • Who will we survey?  Which kinds of people, and how many of them?  Do we intend to distinguish and contrast responses from different groups or types of respondent, or is it OK to lump them all together?  Should respondents be allowed to remain anonymous?
  • How many response options should we offer, and how should they be worded, precisely?
  • Should the responses be in ascending or descending alphabetical order?  Or mixed order?  The same order on every survey, or randomized?
  • Should we allow for responses that are off-the-scale, or intermediate values?  Will we collect respondents' comments or explanations?
  • What else do we need to know?  While we are at it, are we going to ask a bunch of questions (as is normal for a survey), or keep this simple, perhaps with just the one question (a poll)?
  • Should this be administered as a self-selection survey, perhaps on the corporate intranet, or should someone physically go around asking employees, or email them, or phone them, or send them forms?
  • Should we offer incentives to encourage more responses?  What incentives are appropriate?  How might this affect the validity of the statistics?
  • Aside from the data collection itself, who will analyze the data?  How?  Which statistics are the most appropriate?
  • When should we conduct the survey?  When is the best time?  How long should we allow?  Should we do it once or more than once - regularly or in an ad hoc manner?
  • How will the survey results be used?  Will they be in a report, a presentation, online, used for background information or directly for decision support?  Line graphs, bar charts, pie charts, probabilities or what?
  • How much should we spend on the survey? ...
... That cost question raises several deeper ones: why are we measuring this?  Do we really understand what issues concern us?  Will a survey give us usable information, and what will we do with the results?  And most of all, what determines whether the value of the information from this metric will outweigh the cost of collecting it? 
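To make the analysis side concrete, here is a small sketch of how responses to a Likert-style question like the one guessed at above might be summarized.  The option wording and tallies are entirely made up:

```python
from collections import Counter

OPTIONS = ["Not at all", "Poorly", "So-so", "Quite well", "Completely"]

def summarize(responses):
    """Return the percentage choosing each option, plus a 'top two box' figure."""
    counts = Counter(responses)
    n = len(responses)
    pct = {opt: 100 * counts[opt] / n for opt in OPTIONS}
    return pct, pct["Quite well"] + pct["Completely"]

responses = (["Not at all"] * 5 + ["Poorly"] * 16 + ["So-so"] * 40
             + ["Quite well"] * 30 + ["Completely"] * 9)
pct, top_two = summarize(responses)
print(f"Understand the policies well or better: {top_two:.0f}%")  # -> 39%
```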
 
The second metric seems pretty straightforward, although in practice it is surprisingly difficult to put an accurate and precise figure on the amount of most investments.  However, a rough estimate may be all we really need (Douglas Hubbard makes this point very well, at length, in "How to Measure Anything" - we will review the book soon).

OK, moving on, let's now consider the PRAGMATIC scores for these two metrics.  Cost-effectiveness is definitely an issue for metric 1, particularly if we intend to go ahead with a manually administered survey, survey lots of people and/or offer substantial incentives.  There are also doubts concerning its Relevance (how well does it reflect information security?  Isn't it just one of many factors?), Meaningfulness (would we need to spend time explaining the results to the intended audience/s, or risk their misunderstanding?), Accuracy (depends heavily on the survey approach and the number of responses), Genuineness (might the numbers be manipulated deliberately by someone with an ax to grind?), Independence (both in terms of those we are surveying, and who conducts, analyzes and presents the results), Actionability (is it entirely obvious what ought to be done if the data are negative, or for that matter positive?) and Predictability (we may believe there is a causative link to the organization's security status, but are we certain about that?).  

Metric 2, in contrast, could turn out to be much cheaper - perhaps awareness and training expenditure is already measured by Finance for some other purpose.  Maybe it can simply be estimated from the budgets and project expenses in this area.  The metric's Relevance, Predictability, Independence, Actionability etc. would also have to be weighed up in scoring this metric, but we leave that as an exercise for you.

While the actual PRAGMATIC numbers in a specific organization depend on these and other factors, for the sake of this blog, let's assume metric 1 scores 44% while metric 2 scores, say, 67%.  On this basis alone, metric 2 clearly appears to be the better metric - however, we are not yet necessarily ready to go ahead with metric 2.  In reality, there are many other possible metrics, and many variations on any one metric, that we perhaps ought to consider.  In some ways, these two metrics could be considered complementary, hence we might even decide to use them both.  Or neither.

Most of these issues could be resolved through a deeper understanding of management's security goals and the questions that the metrics are intended to address.  We might need to explore the data gathering and statistical techniques in more depth, and so on.  However, the PRAGMATIC method has at least prompted us to think more deeply about what we are trying to achieve, and helped us analyze some candidate metrics.  We have developed a richer appreciation of these metrics in the course of the analysis and, just as importantly, insight into our security metrics requirements.  The PRAGMATIC analysis is often more valuable than the actual PRAGMATIC scores.

There is of course much more detail on the PRAGMATIC method in our book ('in press').  There's a whole chapter, for example, about selecting a coherent suite - a measurement system - comprising mutually-supportive metrics, that we'll no doubt bring up in future blog items.  Until the book is released, however, you'll have to glean what you can from the blog, browse the SecurityMetametrics website, come to one of our conference presentations (e.g. AusCERT or SANS Security West), read other security metrics books and articles, raise this on the SecurityMetametrics discussion forum or contact us directly.

The UK information security breaches survey that prompted this blog item is excellent, one of the best, but there are many other security surveys and loads more sources of inspiration for security metrics.  That's something else we'll blog about in due course.  So much to say, so little time ...

07 May 2012

SMotW #5: Accounts per employee

Security Metric of the Week #5: ratio of number of IT system accounts (user IDs) to number of employees 


The mean number of IT system accounts or user IDs per employee is one measure of how well an organization controls the issuance, maintenance and withdrawal of IDs, which in turn is an indicator of its IT security maturity.  

If user IDs are essentially unmanaged, they are created on a whim (implying a lack of control over the privileged IDs needed to create IDs) and seldom reviewed or removed, even when employees change jobs or leave the organization.  Over time, the number of redundant (no longer required) IDs builds up, creating further issues such as the possibility of IDs being re-used inappropriately, and difficulties reviewing and reconciling IDs to people due to the amount of junk.

If they are well managed, all user IDs have to be justified and linked to individual people performing specific roles.   Effective user ID administration processes ensure that ID creation/change requests are properly checked and formally authorized before being actioned, and periodic reviews take place to confirm that no unauthorized changes have been made.  The overall effect is greater personal accountability for the use of IT systems.

"Employee" would need to be carefully defined for the purposes of this metric - for instance, the ratio may or may not take into account temps, interns, contractors etc.  The metric's specification would also need to be clear about non-interactive/special purpose user IDs, such as those used to install or run services.  The way these aspects are specified is less important than clarity of the specification, since that affects the consistency and validity of the metric over successive periods.

P    R    A    G    M    A    T    I    C    Score
74   67   38   39   68   42   36   83   44   55%

For such an ostensibly useful metric, the PRAGMATIC score works out at a disappointing 55%, held back by the Actionability, Genuineness, Accuracy, Timeliness and Cost-effectiveness criteria.  The upshot is that it may be worthwhile addressing these factors specifically in order to gain the benefit of the metric, unless higher-scoring alternatives can be found.  [In the book we suggest security maturity metrics, for example, that score highly and hence are better ways of measuring that particular aspect.]