25 December 2013

SMotW #85: controls consistency

Security Metric of the Week #85: consistency of information security controls



This metric implies that someone is concerned about security controls being inconsistent, but what does that mean - inconsistent in what regard? Possible types of inconsistencies include:

  • Controls do not sufficiently mitigate the risks, address the wrong risks, or are in some way inappropriately designed/specified;
  • Expected or standardized controls (e.g. controls mandated in law) not implemented in all relevant places;
  • Controls not implemented to the same degree or extent, or in the same way, in all relevant places;
  • Controls that vary over time (e.g. security procedures ignored in busy periods);
  • Controls not operated or managed in the same way in all relevant places;
  • Others.

ACME's senior managers did not rate this metric highly, being concerned about its Accuracy, Timeliness, Independence/integrity and Cost-effectiveness:


P     R     A     G     M     A     T     I     C     Score
78    83    67    60    71    33    27    31    27    53%

However, from the perspectives of the CISO or ISM, the metric was more PRAGMATIC:

P     R     A     G     M     A     T     I     C     Score
85    90    76    60    90    50    46    100   75    75%
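
Incidentally, the overall Score in each of these tables is simply the arithmetic mean of the nine criterion ratings, rounded to the nearest whole percent - easy to sanity-check with a few lines of Python (a throwaway sketch; the ratings are transcribed from the two tables above):

    # PRAGMATIC score = mean of the nine criterion ratings, rounded
    def pragmatic_score(ratings):
        assert len(ratings) == 9, "one rating per P-R-A-G-M-A-T-I-C criterion"
        return round(sum(ratings) / len(ratings))

    senior_managers = [78, 83, 67, 60, 71, 33, 27, 31, 27]
    ciso_and_ism    = [85, 90, 76, 60, 90, 50, 46, 100, 75]

    print(pragmatic_score(senior_managers))  # 53
    print(pragmatic_score(ciso_and_ism))     # 75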

They could see themselves using this metric to drive up consistency of security controls in whatever respects they chose to measure ... although exactly how they would measure consistency was not self-evident: initially they were thinking about using, and perhaps extending, their routine compliance checks against ACME's baseline security standards.
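
To make that concrete, here is one minimal sketch of how such a consistency measure might work - purely an assumption on our part, with invented site names and compliance figures: score each location against the baseline standards, then express the spread of those scores as a consistency percentage:

    # Hypothetical compliance scores (%) against ACME's baseline standards
    from statistics import mean, pstdev

    compliance = {"HQ": 92, "Plant A": 88, "Plant B": 61, "Sales office": 79}

    scores = list(compliance.values())
    avg = mean(scores)
    spread = pstdev(scores)                 # population standard deviation
    consistency = 100 - 100 * spread / avg  # 100% = perfectly consistent

    print(f"Mean compliance {avg:.0f}%, consistency {consistency:.0f}%")

The coefficient-of-variation idea is just one option: a simple min/max range, or the proportion of units falling within some tolerance of the baseline, would serve equally well.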

Notice the distinctly different ratings for Independence/integrity given in these two PRAGMATIC assessments. In the former, senior management were concerned that if they started using the metric to pressure Information Security and various business units to improve their information security, things might deteriorate to arguments over the measurements rather than productive discussion around making necessary improvements. They also weren't entirely convinced that the metric would be a trustworthy guide to controls consistency. In contrast, the CISO and ISM envisaged measuring the metric themselves for their own purposes in connection with continuously improving ACME's ISO27k Information Security Management System, with little need for discussion or argument with those being measured. In fact, the metric might not even need to be reported or circulated beyond the infosec office.

This is a good illustration of why published lists of security metrics (including the 150 examples in our book!) are of dubious value except perhaps as creative inspiration. Despite what you might think, a security metric that works brilliantly for one organization may be mediocre or quite inappropriate for another, while one that is ideal for a particular purpose and a specific audience within a given organization may be a poor choice in other circumstances or for other audiences. This is precisely what makes the PRAGMATIC method shine: it offers a systematic, structured way to figure out and compare the merits of various candidate security metrics in a specific situation or context, something that was previously very difficult to achieve.

With that, we'd like to wish all our readers a brilliant Christmas: the next SMotW will appear here early in the new year, although we might perhaps blog about new year's metrics resolutions. Meanwhile, we hope Santa brings you all you desire, and doesn't get stuck in the chimney.

Merry Christmas from Gary & Krag. Have a good one.

17 December 2013

SMotW #84: % of security-certified systems

Security Metric of the Week #84: proportion of IT systems whose security has been certified compliant



Large organizations face the thorny problem of managing their information security consistently across the entire population of business units, locations, networks, systems etc. Achieving consistency of risk analysis and treatment is tricky in practice for all sorts of reasons: diverse and geographically dispersed business units, unique security challenges, cultural differences, political issues, cost and benefit issues, differing rates of change, and more.

Three common approaches are to: 
  1. Devolve security responsibilities as much as possible, basically leaving the distributed business units and teams to their own devices (which implies widespread distrust between different parts of the organization over each other's security arrangements);
  2. Centralize security as much as possible, possibly to the extent that remote security teams are mere puppets, with all the heavy lifting in security done via the network by a tight-knit centralized team (with the risk that the standard security settings might not, in fact, be appropriate in every case);
  3. Adopt a hybrid approach, often involving strong central guidance (security policies and standards mandated by HQ) but local security implementation/configuration and management (with some discretion over the details).
Some highly-organized organizations (military and governmental, mostly) take the hybrid approach a step further with strong compliance and enforcement actions driven from the center, in an effort to ensure that those naughty business units out in the field are all playing the game by the rules. Testing and certifying compliance of IT systems against well-defined system security standards, for instance, gives management greater assurance that system security is up to scratch - provided the testing is performed competently, which usually means someone checking and accrediting the testing teams before they are permitted to issue compliance certificates.

ACME Enterprises Inc may not be the very largest of imaginary corporations but it does have a few separate sites and lots of servers. With some concern about how consistently the servers were secured, ACME's managers agreed to take a PRAGMATIC look at this metric:

P     R     A     G     M     A     T     I     C     Score
72    79    73    89    68    32    22    89    88    68%

With most of the numbers hovering in the 70s and 80s, the two lowest ratings stand out. The reasoning for the 32% rating for Accuracy was that certified compliance of a system with a technical security standard does not necessarily mean it is actually secure: ACME has had security incidents on certified-compliant servers that met the standard but, for various reasons, turned out to have been inadequately secured after all.

On the other hand, it was seen as A Good Thing overall that more and more servers were both being made compliant and certified as such, hence management thought this metric had some potential as an organization-wide security indicator: they gave it 72% for Predictiveness since, in their opinion, there was a reasonably strong correlation between the proportion of servers having been certified compliant, and ACME's overall security status.

Let me repeat that: although certification is not a terribly reliable guide to the security of a given server, the certification process is driving server security in the right direction, hence the proportion of certified servers could be a worthwhile strategic-level security metric for ACME.  Interesting finding!

The rating of just 22% for Timeliness was justified on the basis that the certification process is slow: the certification tests take some time to complete, and the certification team has a backlog of work, so both the process and the metric give a delayed picture of the state of security. Focusing management attention on the proportion of servers certified would undoubtedly have the side-effect of pressuring the team to certify more of the currently unchecked servers (perhaps increasing the possibility of the tests being shortcut, although the certification team leader was known to be a no-compromise, 'do it right or not at all' kind of person), but there are ways to deal with that issue.

The metrics discussion headed off at a tangent at this point, as they realized that "Time taken to security-certify a server" might be another metric worth considering. Luckily, with many other security metrics on the table already, someone had the good sense to park that particular proposal for now, adding time-to-certify to the list of metrics to be PRAGMATICally assessed later, and they got back on track - well almost ...

One of the managers queried the central red stripe on the mock-up area graph on the table. The CISO admitted that the stripe represented the servers that had failed their certification testing, and so opened another can o' worms when the penny dropped that 'proportion of servers certified or not certified' is not the whole story here. As the temperature in the workshop room rapidly escalated, the arrival of lunch and the temporary departure of several managers to catch up with their emails saved the day!
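
As an aside, that red stripe is easy enough to surface in the numbers themselves. Here's a sketch, using entirely invented counts, of how the metric might report certification status rather than a bare proportion:

    # Invented server counts: 'proportion certified' alone hides the failures
    from collections import Counter

    statuses = Counter(certified=412, failed=37, untested=151)

    total = sum(statuses.values())
    for status, n in statuses.most_common():
        print(f"{status:>10}: {n:4d} ({100 * n / total:.0f}%)")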

10 December 2013

SMotW #83: information asset values

Security Metric of the Week #83: total value of information assets owned by each Information Asset Owner


This week's metric presumes two key things.  

First, it presumes that the organization has Information Asset Owners (IAOs). While the terms vary, IAOs are generally the people who are expected to protect and exploit the information assets in their remit or nominally assigned to them, both the organization's own information assets and those placed in its care by other organizations or individuals (its clients and employees for instance). Someone senior such as the Human Resources Director would typically be the IAO for the HR system, while lesser databases, systems and paperbases might be allotted to mid-level managers. By holding IAOs personally accountable for valuable information, management puts them under pressure to assess and treat the associated risks sensibly, and ideally to enhance the value of the assets by using them well.

Second, the metric presumes that there is some way to value the information assets - easier said than done, but valuation has several benefits so it is worth some effort. In fact, it is hard to envisage rational corporate management without this information, and yet curiously enough in many organizations asset valuation is merely an accountancy exercise, one that is largely restricted to tangible assets (book values) and certain financial/investment instruments (off-balance-sheet).
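
Once those two presumptions are met, the raw computation behind the metric is straightforward. A minimal sketch, assuming an asset register that lists each asset's nominated IAO and an agreed valuation (the entries are invented; the names echo the example discussed below):

    # Invented asset register: (asset, IAO, valuation in dollars)
    from collections import defaultdict

    asset_register = [
        ("HR system",       "Fred",  4_200_000),
        ("Customer DB",     "Alan",  3_700_000),
        ("R&D file store",  "Sarah", 2_900_000),
        ("Payroll archive", "Fred",    600_000),
    ]

    totals = defaultdict(int)
    for asset, iao, value in asset_register:
        totals[iao] += value

    grand_total = sum(totals.values())
    for iao, value in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{iao:>6}: ${value:>10,} ({100 * value / grand_total:.0f}% of total)")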

ACME managers rated the metric at 51%:

P     R     A     G     M     A     T     I     C     Score
48    64    78    57    79    38    50    22    26    51%



If you look up example metric 7.6 in chapter 7 of our book, you'll discover that we deliberately omitted the scoring rationale for this metric in order to emphasize the importance of keeping notes throughout the PRAGMATIC process. If the only record that remains is the table of ratings, or even worse just the overall PRAGMATIC score, it's hard to recall the discussion and the reasoning behind the metric ... but let's give it a go now and see how we get on.

Overall, the 51% PRAGMATIC score tells us that management was not very impressed with the metric: in their estimation, it should not be dismissed out of hand but it is unlikely to feature highly on anyone's security metrics wish-list.  [OK, but we really need to know why. What was it about the metric that slightly interested and slightly concerned them?]

The high spots in the scoring table were the metric's Meaningfulness and Actionability. Looking at the sample graphic above, it's obvious at a glance that three IAOs (Fred, Alan and Sarah) own just over half of the information assets by value between them, with the remainder divided between seven other IAOs. That in turn implies that Fred, Alan and Sarah are shouldering heavier information security burdens than the other seven, so perhaps some reallocation of information assets is in order? It's hard to tell with so little information to go on. With hindsight, the Meaningfulness and Actionability ratings were both quite generous, but it could well be that we are interpreting the metric quite differently now than when it was originally considered. 

The metric's low spots were its Independence and Cost-effectiveness. The 22% rating for Independence suggests that perhaps management believed the IAOs with most to gain or lose from the metric would be largely responsible for taking and reporting the measurements, a potential conflict of interest. The poor rating on Cost-effectiveness gives the impression that this is a metric with limited value and high costs.

Now pick any other PRAGMATIC criterion and try to figure out why it was rated as it was. It's even harder to reconstruct the arguments here! Maybe the ACME managers who were involved in the original discussion will remember what was said, although if that was many months ago, things will have moved on - ACME's security metrics program will have matured somewhat, and the business context is different.

So, the main take-home message from this week's example is to keep decent notes as you work through the PRAGMATIC process. It is appropriate, indeed necessary to review and revisit the organization's choice of information security metrics from time to time (perhaps every year or so). Trust us, it will be much easier to pick up the threads of previous discussions by referring to your scoring notes than to start from scratch.

There's one final point before we end. The metric was originally proposed, described, discussed and scored in words and numbers - no pictures. We prepared the simple pie chart graphic above later, for this blog, using made-up data in MS Excel, but visualizing metrics like this turns out to be a powerful way to help us imagine and think through how they might actually work in practice. It's also a potential source of bias, however, since we have undoubtedly framed the discussion in a certain way with that particular illustration (for starters, by choosing a pie chart we interpreted the metric as a proportional representation). If we had illustrated this same piece with the bar chart below instead of the pie chart above, what effect might that have had on your thoughts concerning this metric? Think on.


07 December 2013

Asking wise questions


"You can tell whether a man is clever by his answers.
You can tell whether a man is wise by his questions."
Naguib Mahfouz


Posing good questions is one of the things that research scientists learn to appreciate early on: scoping and framing an issue in such a way that the questions which arise can actually be answered through feasible research is important, but there's more to it than that. Although narrowly-focused, specific questions tend to be easier to answer, the answers are usually just as narrow and specific. Broader, deeper, bolder questions relating to bigger concerns are much harder to frame and answer, but they can generate far greater insight.

"How many spam emails were wrongly classified by the anti-spam software as ham last month?" is an example of a narrowly-scoped information security question, relatively straightforward to answer ... but of little real consequence to the business. The number that is generated by this metric - the measurement - may be factually correct and quite precise, but what does it actually mean? It may have more value if it were expressed relative to prior months, highlighting the trend, but even then it would most likely prompt, say, the CEO to ask "So what?". We have almost certainly posed the wrong question if the CEO doesn't even care whether it is answered, especially given that there are many other much bigger issues on the top table.

The number of wrongly classified spams is clearly a poor security metric for the management audience, although it may perhaps be useful for, say, the technician fine-tuning the anti-spam parameters.

Odd, then, that much of the present-day discussion around security metrics concerns narrow technical metrics just like that. Over in some dark and dusty corner of the Internet, a gaggle of security metrics people and vendors are excitedly chatting about technical vulnerability and patching metrics. They are fine-tuning their measures of packets dropped by the firewalls and malware trapped by their antivirus solutions. 

"So what?" doesn't even seem to have occurred to them, as yet.

If, instead, we simply re-framed the question along the lines of "How much did spam-emails-wrongly-classified-as-ham cost us last month?", the answer would have more obvious meaning and relevance to the business. While the measurement and analysis would take a bit more work, expressing the answer as a dollar figure would instantly provide a clue, a way for the audience to assess the scale of the spam issue relative to various other business matters. Even if a business manager was not entirely certain about the meaning of spam and ham, he/she would immediately appreciate that $3m is a materially different matter to $300, whereas who cares whether there were three hundred or three million wrongly classified spams? There's no scale, no point of reference.
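
A back-of-the-envelope sketch of that reframing, with every figure invented for illustration - in practice the per-message handling time and labour rate would have to be estimated locally:

    # Rough monthly cost of spam wrongly classified as ham (all figures invented)
    missed_spams = 180_000     # spam messages delivered to inboxes last month
    seconds_to_triage = 8      # average time an employee wastes per message
    hourly_labour_cost = 60    # fully-loaded labour cost, dollars per hour

    cost = missed_spams * seconds_to_triage / 3600 * hourly_labour_cost
    print(f"Estimated cost of misclassified spam: ${cost:,.0f}")  # $24,000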

Some of you may be thinking "Aha! Let's simply express our management-level information security metrics in dollars!" but financial metrics are not the be-all-and-end-all. Dollars make sense, true, and dollars are important, but governing and managing the corporation well goes beyond purely financial considerations. The CEO is unlikely to be fascinated by the dollar cost of wrongly-classified spam, packets dropped or viruses blocked, unless the costs are huge anyway, in which case he/she is probably already painfully aware of the issue!  

By the way, I deliberately just used the word 'fascinated'. Metrics that are plain boring have little if any impact, and so might as well not exist. Metrics that are exciting, that interest and engage the audience, prompting them to ask related follow-up questions such as "Why?" and "What should we do about this?", are the ones many of us are seeking. This is why PRAGMATIC includes the Meaningfulness criterion. Effective metrics provide information that makes sense, resonates with and ultimately motivates or fires-up the audience. "So what?" shouldn't even come into it.

The more general solution is to pose the questions that matter, which means framing and understanding the issues that matter, which in turn raises the question "What matters?". In chapter 2 of our book, we wrote at some length about the purpose of metrics, why things are measured, including this little tip:
"This is a deceptively important chapter, more than just a way to introduce the book. While you ponder the questions we raise here, ask yourself other similar questions that are relevant to you and your organization, your management, your information security situation. Better yet, consider the way in which we have phrased the questions as, in so doing, we are subtly framing the problem space. Posing appropriate questions is the real art to metrics. Answering them is relatively easy! If you gain nothing else from this book, we sincerely hope you pick up some hints on framing and posing better questions of information security."
We continued by posing a handful of tough, big-picture questions such as "Are we secure enough?" and "Are we sufficiently compliant?", and then exploring several other key reasons for measuring information security, such as to improve information security, for strategic, tactical and operational reasons, for compliance and assurance purposes and so on. These are the whats in "So what?", the undeniable, straightforward, obvious reasons or purposes for which we need security-related measurements.

The strategic perspective is particularly important for security metrics since it determines the organization's entire approach to information security. Senior management support for, and investment in, information security depends on their understanding and appreciation of the risks and opportunities in that sphere, relative to other aspects of the business. Strategic high-level security metrics give meaning to the budget requests and substance to the business cases. From a governance perspective, management cannot knowingly disregard such serious matters (although there may be things they would rather not have been told!). Get the strategic security metrics right and the rest will follow.

Against that backdrop, what is the point in measuring and reporting the number or even the dollar value of misclassified spams?

Take a long hard look at chapter 2. Think about it in the context of your organization - your strategic concerns, your management's pain-points, the business objectives for information security and risk management.  Antivirus, antispam and firewalls are merely a means to an end: that 'end' is the important bit. Do your security metrics address the right questions, the questions that matter?

Rgds,
Gary

[PS  The issue of 'asking the right questions' goes way beyond information security metrics. Here in New Zealand, for instance, a Citizens Initiated Referendum is currently asking:
"Do you support the Government selling up to 49% of Meridian Energy, Mighty River Power, Genesis Power, Solid Energy and Air New Zealand?" 
I have no idea which citizens chose the question nor how they did so, but it smells distinctly like the product of a committee, no doubt one that has been vociferously discussed and argued over, endlessly word-smithed, tweaked and compromised. Aside from being wordy, the question is ambiguous and confusing. Is it one question, in fact, or five rolled into one? Several terms in the question are over-laden with meanings and implications. Is it getting at a party political issue about the government in power, or about what's best for the nation? Does "support" mean "agree with" or "accept"? And beneath it all lurk much bigger but unstated questions relating to the distinction between the Government's and the citizens' assets, state ownership, liabilities and capitalism. As a strategic metric for the government and the country, it's not exactly PRAGMATIC ... and if the politicians had a say in its formulation, perhaps that suits their purposes, along with the predicted low turnout. The asset sales have already started. The strategy is already in play. A contrary referendum result would be, errrr, 'unfortunate'.]

[PPS  I might be clever enough to deal with incidents, but it probably would have been wise to avoid them!]

05 December 2013

SMotW #82: non-financial impacts

Security Metric of the Week #82: non-financial impacts of information security incidents


You may genuinely believe that "In the end, it all comes down to money" and, in respect of our capitalist society and commercial organizations at least, you have a point. Money is the near-universal unit of measurement, valuation and comparison, undoubtedly an important parameter. However, "There's more to life than money" and more at stake than simply returning a profit.  This metric attempts to measure the broader effects of information security incidents, other than their financial impacts. 

Consider the following examples to understand what the metric might attempt to measure:

  • In addition to the financial costs and penalties arising from privacy breaches, individuals' personal interests and wellbeing are harmed, corporate reputations and brands suffer, and society as a whole is impoverished by the erosion of trust;
  • When government departments and non-profit organizations suffer privacy incidents, the financial aspects tend to be minor or negligible considerations;
  • Viruses, human errors or hacks of SCADA/ICS systems can result in environmental disasters and harm that no amount of money will solve;
  • Even though short-term performance constraints in a transactional retail website may not cause a material loss of income, delays and frustration are hardly going to enhance customer satisfaction and loyalty;
  • The financial costs, while considerable, are not the most important fallout from recent unauthorized public disclosures concerning the NSA's surveillance activities;
  • When a 'silver surfer' suffers a hard drive failure that destroys their unique and irreplaceable stash of family snapshots, the direct financial costs in replacing the broken disk are immaterial;
  • If a teen's, celebrity's or politician's sexting somehow finds its way onto the Internet, the personal embarrassment and perhaps career-limiting knock-on effects are far more than purely monetary. Along with bullying and taunting, such incidents can literally be life-changing if not life-threatening.

ACME's managers scored the metric a lowly 31%:

P     R     A     G     M     A     T     I     C     Score
60    65    0     20    60    6     30    20    17    31%

They must have been in a distinctly cynical frame of mind to rate the metric a resounding zero for Actionability: there are things that can and should be done to limit non-financial impacts of incidents, and the metric could provide a rough guide as to how much effort and resources to invest in that area. Remember, though, that ACME Enterprises Inc. is a typical (if fictional) commercial organization, perhaps implying a myopic management focus on numbers preceded by dollar signs.

They were similarly unimpressed with the Accuracy of this metric. Measuring the full financial impacts of security incidents is hard enough; how would one even start to measure non-financial effects?  [Actually, it's not as tricky as one might think. One measurement approach might be to use scales indexed with specific examples of various kinds of non-financial impact, enabling the metrician to rate or score actual incidents relative to the examples. Victims' impressions or perceptions of lasting harm can be measured by survey techniques.]
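
To flesh out that bracketed suggestion, here is a sketch of an anchored scale - the anchor wordings and the example incident are ours, invented purely for illustration:

    # Anchored scale for non-financial incident impact (anchors illustrative)
    IMPACT_ANCHORS = {
        0: "no discernible non-financial impact",
        1: "minor inconvenience to a few individuals",
        2: "lasting distress to victims, or local reputational damage",
        3: "widespread harm, e.g. a privacy breach that hits the press",
        4: "irreversible or life-changing harm, e.g. an environmental disaster",
    }

    def rate_incident(description, rating):
        assert rating in IMPACT_ANCHORS, "rating must sit on the anchored scale"
        return description, rating, IMPACT_ANCHORS[rating]

    print(rate_incident("customer records exposed on the web", 3))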

Whereas we have picked out this metric for discussion and scoring in isolation, in practice it would probably be considered and used alongside its non-identical twin "Financial impacts of information security incidents". The PRAGMATIC approach works just as well for linked or complex metrics, despite the need to take account of additional options and factors. Taking things up a level or two, it might be interesting to analyze an organization's overall approach to information security metrics - or indeed all forms of metrics - in PRAGMATIC terms. Organizations with a highly mature approach to metrics might address rhetorical questions such as "How Cost-effective are our metrics?" and "Are our metrics sufficiently Predictive and Genuine?" in a systematic, strategic manner.

25 November 2013

SMotW #81: control count

Security Metric of the Week #81: number of different information security controls



We're not entirely sure why anyone would feel the need to count their security controls, unless perhaps they think there might either be too many or too few, raising the question "How many controls should we have?". Nevertheless, somebody proposed this as an information security metric and ACME's managers explored, discussed and scored it through the PRAGMATIC process:

P     R     A     G     M     A     T     I     C     Score
71    75    72    75    88    30    50    65    43    63%

They felt that counting security controls would be tedious, error-prone and laborious, hence the metric's depressed ratings for Timeliness, Accuracy and Cost-effectiveness. The 88% rating for Meaningfulness suggests that they believed this metric would provide useful information, provided the following issues were addressed.

The word "different" in the full title of the metric could be misleading: different in what sense? Does it mean separate instances, as in counting the antivirus installation on each IT system as a different control, or does it indicate different kinds or types of control? If the latter, how different do they need to be to count separately? Failing to define the metric would probably lead to inconsistencies, particularly if various people were involved in counting controls.

ACME would also need to be careful about what does or doesn't constitute an 'information security control'. For instance, the door locks on an office, a media storeroom, a toilet and a janitor's closet have quite different implications for protecting ACME's information assets: do any of them qualify as 'information security controls'? Do they all count?

That said, the metric could prove a useful way to manage the overall suite of security controls if those issues were bottomed out. 'Getting a handle on things' through metrics means not just measuring stuff, but using the numbers both to decide what adjustments to make and to confirm that the adjustments do in fact lead to the anticipated changes in the numbers, thus supporting the implied cause-effect linkages.

The graph above illustrates a more sophisticated version of the metric that distinguishes preventive, detective and corrective controls, showing baseline and custom control counts for each type (there's a small counting sketch after the list below). This is just one of many ways the numbers might potentially be counted, analyzed and presented. If you are thinking seriously about this metric, you might also like to consider variants that distinguish:
  • Confidentiality, integrity and availability controls;
  • Free, cheap, mid-price and expensive controls;
  • Controls that have been fully, partially or not yet implemented (established, new or proposed controls);
  • Basic, intermediate and advanced controls;
  • Old fashioned/traditional and novel/cutting-edge controls;
  • Control counts within different departments, operating units, countries, businesses etc.;
  • Fail-safe/fail-closed versus fail-unsafe/fail-open controls; 
  • Automated, manual and physical controls;
  • Controls required for compliance with externally-imposed obligations versus those required for internal business reasons;
  • Counts versus proportions or percentages;
  • Trends or timelines versus snapshots;
  • Other parameters (what do you have in mind?  What matters most to your organization?).
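
Whichever variants appeal, the counting and analysis step is mechanically simple. A minimal sketch of the preventive/detective/corrective, baseline-versus-custom breakdown shown in the graph above, using an invented control inventory:

    # Invented control inventory: (control, type, origin)
    from collections import Counter

    inventory = [
        ("door locks on media store", "preventive", "baseline"),
        ("antivirus on servers",      "preventive", "baseline"),
        ("IDS alerting",              "detective",  "custom"),
        ("backup restoration",        "corrective", "baseline"),
    ]

    counts = Counter((ctype, origin) for _name, ctype, origin in inventory)
    for (ctype, origin), n in sorted(counts.items()):
        print(f"{ctype:>10} / {origin:<8}: {n}")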