20 November 2015

Decision-led metrics

Metrics in general are valuable because, in various ways, they support decisions. If they don't, they are at best just nice to know - 'coffee table metrics' I call them. If coffee table metrics didn't exist, we probably wouldn't miss them - and we'd save the cost of producing them.

So, what decisions are being, or should be, or will need to be made, concerning information risk and security? If we figure that out, we'll have a pretty good clue about which metrics we do or don't want.

Here are a few ways to categorize decisions:
  • Decisions concerning strategic, tactical and operational matters, with the corresponding long, medium and short-term focus and relatively broad, middling or narrow scope;
  • Decisions about risk, governance, security, compliance ...;
  • Decisions about what to do, how to do it, who does it, when it is done ...;
  • Business decisions, technology decisions, people decisions, financial decisions ...;
  • Decisions about departments, functions, teams, systems, projects, organizations; 
  • Decisions regarding strategies/approaches, policies, procedures, plans, frameworks, standards ...;
  • Decisions relating to threats, vulnerabilities and impacts - evaluating and responding to them;
  • Decisions made by senior, middle or junior managers, by staff, and perhaps by or relating to business partners, contractors and consultants, advisors, stakeholders, regulators, authorities, owners and other third parties;
  • Decisions about effectiveness, efficiency, suitability, maturity and, yes, decisions about metrics (!);
  • ... [feel free to bring up others in the comments].

Notice that the bullets are non-exclusive: a single metric might support strategic decisions around information risks in technology involving a commercial cloud service, for instance, putting it in several of those categories. 

If we systematically map out our current portfolio of security metrics (assuming we can actually identify them: do we even have an inventory or catalog of security metrics?) across all those categories, we'll probably notice two things. 

First, for all sorts of reasons, we will probably find an apparent excess or surplus of metrics in some areas and a dearth or shortage elsewhere. That hints at identifying and developing additional metrics in some areas, and cutting down on duplicates or failing/coffee-table metrics where there seem to be too many - which is itself a judgement call, a decision about metrics, and not as obvious as it may appear. Simplistically aiming for a "balance" of metrics across the categories is a naive approach.

Second, some metrics will pop up in multiple categories ... which is wonderful. We've just identified key metrics. They are more important than most since they evidently support multiple decisions. We clearly need to be extra careful with these metrics since data, analysis or reporting issues (such as errors and omissions, unavailability, or deliberate manipulation) are likely to affect multiple decisions.
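
To make the mapping exercise concrete, here's a minimal sketch in Python - the metric names and category tags are purely illustrative assumptions, not a recommended set - that flags both the gaps and the multi-category 'key' metrics:

# Illustrative only: tag each metric with the decision categories it supports.
portfolio = {
    "Patch latency (days)":       {"operational", "technology", "vulnerabilities"},
    "Policy compliance rate":     {"tactical", "compliance", "people"},
    "Annualized loss expectancy": {"strategic", "risk", "financial"},
    "Awareness quiz scores":      {"tactical", "people"},
}
categories = {"strategic", "tactical", "operational", "risk", "governance",
              "compliance", "technology", "people", "financial", "vulnerabilities"}

# Categories with no supporting metrics hint at gaps in the portfolio ...
covered = set().union(*portfolio.values())
print("Uncovered categories:", sorted(categories - covered))

# ... while metrics spanning several categories are candidate key metrics.
key_metrics = [m for m, cats in portfolio.items() if len(cats) >= 3]
print("Key metrics:", key_metrics)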

Overall, letting decisions and the associated demand for information determine the organization's choice of metrics makes a lot more sense than the opposite "measure everything in sight" data-supply-driven approach. What's the point in measuring stuff that nobody cares about? 


12 November 2015

Metrics database

I wonder if any far-sighted organizations are using a database/systems approach to their metrics? Seems to me a logical approach given that there are lots of measurement data swilling around the average corporation (including but not only those relating to information risk, security, control, governance, compliance and privacy). Why not systematically import the data into a metrics database system for automated analysis and presentation purposes? Capture the data once, manage it responsibly, use it repeatedly, and milk the maximum value from it, right?
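
For what it's worth, here's a minimal sketch of the kind of thing I have in mind, using Python and SQLite purely as an illustration (the table layout and field names are my own assumptions, not a recommended design): metric definitions are captured once, readings accumulate against them, and the same data can be queried repeatedly for analysis and reporting.

import sqlite3

con = sqlite3.connect("metrics.db")
con.executescript("""
CREATE TABLE IF NOT EXISTS metric (
    id        INTEGER PRIMARY KEY,
    name      TEXT UNIQUE NOT NULL,   -- e.g. 'Patch latency (days)'
    owner     TEXT,                   -- who is accountable for the metric
    audience  TEXT,                   -- who uses it to make decisions
    frequency TEXT                    -- how often it is measured
);
CREATE TABLE IF NOT EXISTS reading (
    metric_id   INTEGER REFERENCES metric(id),
    measured_on DATE NOT NULL,
    value       REAL NOT NULL,
    source      TEXT                  -- system or person supplying the data
);
""")

# Capture the data once ...
con.execute("INSERT OR IGNORE INTO metric (name, owner, audience, frequency) "
            "VALUES ('Patch latency (days)', 'IT Ops', 'CISO', 'monthly')")
con.execute("INSERT INTO reading VALUES (1, '2015-10-31', 12.5, 'patch mgmt system')")
con.commit()

# ... then use it repeatedly, e.g. a simple trend extract for reporting.
for row in con.execute("SELECT measured_on, value FROM reading "
                       "WHERE metric_id = 1 ORDER BY measured_on"):
    print(row)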

If you think that's a naive, impracticable or otherwise krazy approach, please put me straight. What am I missing? Why is it that I never seem to hear about metrics databases, other than generic metrics catalogs (which are of limited value IMNSHO) and Management Information Systems (which were all the rage in the 80s but strangely disappeared from sight in the 90s)?

Conversely, if your organization has a metrics database system, how is it working out in practice?  What can you share with us about the pros and cons?

07 October 2015

Security dashboard tips

Tripwire blog's The Top 10 Tips for Building an Effective Security Dashboard is an interesting collection of advice from several people. It's thought provoking, although I don't entirely agree with it.

Tip 2, 'Sell success, not fear', mentions:
"For example, in the event that they cannot find personnel who come equipped with the skills needed to improve progress, security personnel can use dashboards to demonstrate the impact that well trained individuals could have on finding and resolving issues and threats, as well as to subsequently leverage that insight for training and cultivating available skills."
Although the approach is somewhat manipulative, metrics can indeed provide data supporting or justifying proposed security improvements, assuming that, somehow, someone has already decided what needs to be done ... and suitable metrics can be useful for that purpose too.

The thrust of tip 4 'Use compelling visualizations' is that the dashboard needs to be glossy: I agree dashboards should be professionally crafted and reasonably well presented, but I feel their true value and utility have far more to do with the information content than the look.

Tip 9 'Thoroughly vet the information before it is presented' is an odd one. The advice to be ready to explain outliers and anomalies makes sense, but the implication of someone vetting the data before it goes to the dashboard is that it will be both delayed and sanitized. Hmmm.

Well, take a look for yourself and see what you make of the ten tips.

10 September 2015

Metrics case study on Boeing


The Security Executive Council has published an interesting case study concerning the review and selection of metrics relating to physical and information risks at Boeing.  [Access to the article is free but requires us to register our interest.]

The case study mentions using SMART criteria and a few other factors to select metrics but doesn't go into details, unfortunately.  Nevertheless, the analytical approach is worth reading and contemplating.

If we were to conduct such an assignment for a client today, we would utilize a combination of tools and techniques across six distinct phases:

  1. Background information gathering concerning Boeing's business situation, information risks, and existing metrics, using standard analytical or audit methods, clarifying the as-is situation and building a picture of what needs to change, and why. This phase would typically culminate in a report and a presentation/discussion with management.

  2. GQM (Goal-Question-Metric) assessment, eloquently described by Lance Hayden in IT Security Metrics. This is a more structured and systematic version of the approach outlined in the case study. A workshop approach would be useful - probably several workshops, in fact - to delve into various aspects with the relevant business people and experts. The output would be a matrix or tree-root diagram illustrating the goals, questions and metrics.

  3. PRAGMATIC assessment and ranking of the metrics generated in phase 2, using the approach documented in our book. The output would be a management report containing a prioritized list of metrics ranked according to their PRAGMATIC scores, leading to a further presentation/discussion with management and, hopefully, agreement on a shortlist of the most promising metrics - those actually worth pursuing. This and the previous phase would take a creative approach, thinking about what needs to be measured, why, how, when etc., using both GQM and PRAGMATIC to firm up the metrics that best fit the requirements, and focus groups to finalize the metrics (both existing metrics worth retaining, possibly with some changes, and novel metrics being introduced). A simple scoring sketch follows after this list.
  4. Planning and preparing for the implementation phase, perhaps including pilot studies.

  5. Implementation: making the changes needed to collect, analyse, report and most of all use the metrics.  This might well involve retiring or recasting some of the client's existing metrics that haven't earned their keep, in a way that teases out the last dregs of value from the data gathered previously.
  6. Ongoing metrics management and maintenance: using information from the GQM and PRAGMATIC steps to monitor and if appropriate refine or replace the metrics, ensuring for instance that they are proving valuable to the business (i.e. they should be cost-effective - one of the PRAGMATIC criteria conspicuously absent from SMART).  
In parallel with that sequence would be conventional project management activities - planning, resourcing & team building, motivation, tracking, reporting and assignment risk management.
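
By way of illustration, here's a minimal sketch of the phase 3 ranking step in Python. It assumes - as a simplification of the method in the book - that each candidate metric is scored 0-100 against each of the nine PRAGMATIC criteria and the scores are simply averaged; the metric names and scores are made up:

# Keys stand in for the nine PRAGMATIC criteria (one per letter of the acronym).
CRITERIA = ["P", "R", "A1", "G", "M", "A2", "T", "I", "C"]

# Made-up scores (0-100) for three candidate metrics emerging from the GQM phase.
candidates = {
    "Patch latency (days)":     [70, 85, 90, 80, 75, 85, 80, 60, 90],
    "Number of audit findings": [40, 70, 55, 75, 60, 70, 50, 80, 85],
    "Awareness quiz pass rate": [55, 65, 80, 60, 70, 75, 70, 50, 80],
}

def pragmatic_score(scores):
    """Overall score: simple mean of the nine criterion scores."""
    assert len(scores) == len(CRITERIA)
    return sum(scores) / len(scores)

# Rank the candidates to produce the shortlist for discussion with management.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: pragmatic_score(kv[1]), reverse=True):
    print(f"{pragmatic_score(scores):5.1f}  {name}")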

21 August 2015

Lean security

Lean manufacturing or kaizen is a philosophy or framework comprising a variety of approaches designed to make manufacturing and production systems as efficient and effective as possible, approaches such as:
  • Design-for-life - taking account of the practical realities of production, usage and maintenance when products are designed, rather than locking-in later nightmares through the thoughtless inclusion of elements or features that prove unmanageable;
  • Just-in-time delivery of parts to the production line at the quantity, quality, time and place they are needed (kanban), instead of being stockpiled in a warehouse or parts store, collecting dust, depreciating, adding inertia and costs if product changes are needed;
  • Elimination of waste (muda) - processes are changed to avoid the production of waste, or at the very least waste materials become useful/valuable products, while wasted time and effort is eliminated by making production processes slick with smooth, continuous, even flows at a sensible pace rather than jerky stop-starts;
  • An obsessive, all-encompassing and continuous focus on quality assurance, to the extent that if someone spots an issue anywhere on the production line, the entire line may be stopped in order to fix the root cause rather than simply pressing ahead in the hope that the quality test and repair function (a.k.a. Final Inspection or Quality Control) will bodge things into shape later ... hopefully without the customer noticing latent defects;
  • Most of all, innovation - actively seeking creative ways to bypass/avoid roadblocks, make things better for all concerned, and deliver products that go above and beyond customer expectations, all without blowing the budget.
Service industries and processes/activities more generally can benefit from similar lean approaches ... so how might kaizen be applied to information risk management and security?
  • Design-for-security - products and processes should be designed from the outset to take due account of information security and privacy requirements throughout their life, implying that those requirements need to be elaborated-on, clarified/specified and understood by the designers;
  • Just-in-case - given that preventive security controls cannot be entirely relied-upon, detective and corrective controls are also necessary;
  • Elimination of doubt - identifying, characterizing and understanding the risks to information (even as they evolve and mutate) is key to ensuring that our risk treatments are necessary, appropriate and sufficient, hence high-quality, reliable, up-to-date information about information risk (including, of course, risk and security metrics) is itself an extremely valuable asset, worth investing in;
  • Quality assurance applies directly - information security serves the business needs of the organization, and should be driven by risks of concern to various stakeholders, not just 'because we say so';
  • Innovation also applies directly, as stated above.  It takes creative effort to secure things cost-effectively, without unduly restricting or constraining activities to the extent that value is destroyed rather than secured.

04 August 2015

Smoke-n-mirrors IBM style

I've just been reading the IBM 2015 Cyber Security Intelligence Index, trying to figure out their 'materials and methods' i.e. basic parameters for the survey, such as population size and nature. All I can find are some obtuse references in the first paragraph:
"IBM Managed Security Services continuously monitors billions of events per year, as reported by more than 8,000 client devices in over 100 countries. This report is based on data IBM collected between 1 January 2014 and 31 December 2014 in the course of monitoring client security devices as well as data derived from responding to and performing analysis on cyber attack incidents. Because our client profiles can differ significantly across industries and company size, we have normalized the data for this report to describe an average client organization as having between 1,000 and 5,000 employees, with approximately 500 security devices deployed within its network."
Reading between the lines, it appears that this is a report gleaned primarily from 'more than 8,000 client [network security?] devices' belonging to an unknown number of organizations around the world who are customers of IBM Managed Security Services ... which IBM has described as:
"24/7/365 monitoring and management of security technologies you house in your environment. IBM provides a single management console and view of your entire security infrastructure, allowing you to mix and match by device type, vendor and service level to meet your individual business needs while drastically reducing your security costs, simplifying security management and accelerating your speed to protection."
But, before you delve into the actual report, read that final sentence of the first paragraph again: they have 'normalized the data' (whatever that means) to an 'average client organization' with about 500 security devices ... so given the total of 8,000 devices, and on the assumption that 'average' means 'mean', it appears the survey covers just 16 organizations whose network security devices are managed by IBM. Oh boy oh boy. No wonder they are so reluctant to tell us about the analytical methods!  
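
The back-of-envelope arithmetic, for anyone who wants to check my working (note the assumptions: 'more than 8,000' treated as 8,000, and 'average' treated as the arithmetic mean):

reported_devices = 8_000    # "more than 8,000 client devices"
devices_per_client = 500    # "approximately 500 security devices" per average client

print(reported_devices / devices_per_client)   # 16.0 - the implied number of client organizations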

The data are from 2014, the report was published in July 2015. Given the minuscule sample, I wonder why it took them 7 months to do the analysis and reporting? Crafting the words to gloss over the glaring flaws, perhaps?

The remainder of the report is pretty humdrum - some superficially interesting graphics and four 'case studies' (three of which - that's 75% or a 'vast majority', IBM - are not actual cases as such but fictional accounts based on the collective experiences of an unknown number of clients). There's nothing particularly unusual or noteworthy in the report, despite the hyperbole (2014 was hardly "The year the Internet fell apart", IBM). The trends and other statistical information are worthless in scientific terms.

Remember this cynical blog piece whenever you see the report quoted. Better still, read the report for yourself and make up your own mind.

11 June 2015

Culture metrics

Over on Entrepreneur e-zine, serial company founder Greg Besner recommends the following ten metrics concerning an organization's culture:
  1. Communication
  2. Innovation
  3. Agility
  4. Wellness
  5. Environment
  6. Collaboration
  7. Support
  8. Performance focus
  9. Responsibility
  10. Mission and value alignment
OK, but why did he pick those ten parameters to measure over all the others? What makes them special?

In the article, Greg briefly explains his ten metrics in terms that make it clear why he thinks they are important. The trouble is, with just a moment's thought, I can easily come up with another ten, complete with my reasons for measuring them ... and I guess you too could come up with your self-justified list of ten culture metrics ... and so could anyone else with enough interest and expertise in this area ...

I guess right now you are puzzling over Greg's list, wondering about mine, and thinking about what else might be measured. Furthermore, I bet you are forming opinions about 'culture metrics' swimming around in your head, liking some, disliking others ... 

... and yet we haven't even attempted to reach agreement on a definition of "culture" at this point.

Ah, oh, yes.

And furthermore, who said there had to be ten anyway? What's wrong with one, or three, or fifty seven?

My point is that it's arbitrary. My choice of metrics - their number and their nature - almost certainly differs materially from yours. Both of us can justify our choices. Greg might feel compelled to defend his choice of ten. Given sufficient spare time and an ample supply of our favorite beverages, I'm sure we could discuss cultural metrics for hours between us, but somehow I doubt we would reach a consensus, for various reasons, not the least of which is that, in regard to metrics, context matters. The cultural metrics that suit, say, a hi-tech start-up are likely to be different to those chosen by a government department, or an oil company, or a school.  Any one of those organizations may choose different cultural metrics as it matures. Things that happen to be in vogue today may well change tomorrow, next week, next year or whatever (remember Peters & Waterman's "In Search of Excellence"? For a while, we obsessed about the characteristics that the book identified in excellent companies, but before long we realized there were many other important parameters too, and even Tom himself backtracked in his later books).

25 May 2015

Low = 1, Medium = 2, High = 97.1


Naïve risk analysis methods typically involve estimating the threats, vulnerabilities and impacts, categorizing them as low, medium and high and then converting these categories into numbers such as 1, 2 and 3 before performing simple arithmetic on them e.g. risk = threat x vulnerability x impact.

This approach, while commonplace, is technically invalid, muddling up quite different types of numbers:
  • Most of the time, numeric values such as 1, 2 and 3 are cardinal numbers indicating counts of the instances of something. The second value (2) indicates twice the amount indicated by the first (1), while the third value (3) indicates three times the first amount. Standard arithmetic is applicable here.
  • Alternatively, 1, 2 and 3 can indicate positions within a defined set of values - such as 1st, 2nd and 3rd place in a running race. These ordinal values tell us nothing about how fast the winner was going, nor how much faster she was than the runners-up: the winner might have led by a lap, or it could have been a photo-finish. It would be wrong to claim that the 3rd placed entrant was “three times as slow as the 1st” unless you had additional information about their speeds, measured using cardinal values and units of measure: by themselves, their podium positions don’t tell you this. Some would say that being 1st is all that matters anyway: the rest are all losers. Standard arithmetic doesn't apply to ordinals such as threat values of 1, 2 or 3.
  • Alternatively, 1, 2 and 3 might simply have been the numbers pinned on the runners’ shorts by the race organizers. It is entirely possible that runner number 3 finished first, while runners 1 and 2 crossed the line together. The fourth entrant might have hurt her knee and dropped out of the race before the start, leaving the fourth runner as number 5! These are nominals - labels that just happen to be digits or strings of digits. Phone numbers and post codes are examples. Again, it makes no sense to multiply or subtract phone numbers or post codes. They don’t indicate quantities like cardinal values do. If you treat a phone number as if it were a cardinal value and divide it by 7, all you have achieved is a bit of mental exercise: the result is pointless. If you ring the number 7 times, you still won’t get connected. Standard arithmetic makes no sense at all with nominals.
When we convert ordinal risk values such as low, medium and high (or green, amber and red) into numbers, they remain ordinal values, not cardinals – hence standard arithmetic is inappropriate. If you convert back from ordinal numbers to words, does it make any sense to try to multiply something by "medium", or add "two reds"? “Two green risks” (two 1’s) are not necessarily equivalent to “one amber risk” (a 2). In fact, it could be argued that the risk scale is non-linear, hence “extreme” risks are materially more worrisome than most mid-range risks, which in turn are not much more concerning than low risks. Luckily for us, extremes tend to be quite rare! As ordinals, these risk numbers tell us only about the relative positions of the risks in the set of values, not how close or distant they are – but to be fair that is usually sufficient for prioritization and focus. Personally, a green-amber-red spectrum tells me all I need to know, with sufficient precision to make meaningful management decisions in relation to treating the risks.
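
Here's a trivial Python sketch of how ordinal 'arithmetic' misleads. The two risks below receive identical scores under the low/medium/high-to-1/2/3 scheme, even though the (entirely made-up) cardinal estimates hiding behind the labels differ by two orders of magnitude:

LEVEL = {"low": 1, "medium": 2, "high": 3}   # ordinal labels coerced into "numbers"

def naive_score(threat, vulnerability, impact):
    # The commonplace but technically invalid calculation: risk = threat x vulnerability x impact.
    return LEVEL[threat] * LEVEL[vulnerability] * LEVEL[impact]

# Risks A and B both happen to be rated medium threat, medium vulnerability, high impact:
print(naive_score("medium", "medium", "high"))   # 12 for risk A
print(naive_score("medium", "medium", "high"))   # 12 for risk B - apparently identical

# Made-up cardinal estimates behind those same labels tell a different story:
# risk A: 2 attempts/yr x 10% success x  $50k impact
# risk B: 5 attempts/yr x 40% success x $900k impact
print(2 * 0.10 * 50_000)    # roughly  $10k/yr expected loss
print(5 * 0.40 * 900_000)   # roughly $1.8M/yr expected loss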

Financial risk analysis methods (such as SLE and ALE, or DCF) attempt to predict and quantify both the probabilities and outcomes as cardinal values, hence standard arithmetic applies … but don’t forget that prediction is difficult, especially about the future (said Niels Bohr, shortly before losing his shirt on the football pools). If you honestly believe your hacking risk is precisely 4.83 times as serious as your malware risk, you are sadly deluded, placing undue reliance on the predicted numbers.

16 May 2015

Metrics to govern and manage information security

Section 9.1 of ISO/IEC 27001:2013 requires organizations to 'evaluate the information security performance and the effectiveness of the information security management system'.  The standard doesn't specify precisely what is meant by 'information security performance' and '[information security?] effectiveness' but it gives some strong hints:
"The organization shall determine:
a) what needs to be monitored and measured, including information security processes and controls;
b) the methods for monitoring, measurement, analysis and evaluation, as applicable, to ensure valid results;
c) when the monitoring and measuring shall be performed;
d) who shall monitor and measure;
e) when the results from monitoring and measurement shall be analysed and evaluated; and
f) who shall analyse and evaluate these results."
The standard specifies (much of) the measurement process without stating what to measure, i.e. which metrics.  No doubt the committee would argue that it is not possible to be specific about the metrics since each organization is different - and there's a lot of truth in that - but it's a shame they didn't explain how to select metrics or offer a few examples ... which is where our security awareness paper, originally delivered in August 2008, picks up the pieces.
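
As a minimal sketch (my own field names, not the standard's), clause 9.1 a) to f) can be read as the skeleton of a record to be kept for each metric - which makes the gap obvious: everything about the measurement process is pinned down except the metric itself.

# Illustrative record loosely mirroring ISO/IEC 27001:2013 clause 9.1 a)-f).
metric_definition = {
    "what":           "Proportion of critical systems patched within SLA",  # a) - the bit left to us
    "method":         "Extract from the patch management system",           # b)
    "measured_when":  "First working day of each month",                    # c)
    "measured_by":    "IT Operations",                                      # d)
    "evaluated_when": "Quarterly ISMS management review",                   # e)
    "evaluated_by":   "Information Security Manager",                       # f)
}
for field, value in metric_definition.items():
    print(f"{field:15} {value}")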

We drew on the IT Governance Institute's advice on information security governance for inspiration, suggesting metrics corresponding to the four aspects identified in the ITGI paper (governance outcomes; knowledge & protection of information assets; governance benefits; and process integration).

[The original hyperlink to the ITGI paper now gives a 404 page-not-found error, unfortunately.  It was a good paper.  Perhaps they moved or updated it?]

07 May 2015

Infosec & risk management metrics

We've just republished the next in the series of management-level security awareness papers on metrics.  The latest one lays out a range of metrics for information security and risk management.

Leaving aside the conventional metrics that are typically used to manage any corporate function, the paper describes those that are peculiar to the management of information risk and information security, with an emphasis on business-focused metrics.

I spent last week teaching a CISM course for ALC in Sydney.  The business and risk focus is a unifying thread throughout CISM, from the governance and strategy angle through risk and security management to incident management.

In contrast to courses covering the more technical/IT aspects of information security intended for mid- to low-level information security professionals with operational responsibilities, CISM is intended for Information Security Managers and Chief Information Security Officers with governance, strategic and management responsibilities.  It promotes the value of elaborating on business objectives that are relevant to information risk and security management, and using those to drive the development and delivery of a coherent business-aligned risk-driven information security strategy.  Metrics are of course integral to the CISM approach, particularly governance and management metrics similar to those in the awareness paper.

24 April 2015

Resilience as a business continuity mindset

An article written in conjunction with Dejan Kosutic has just been published at ContinuityCentral.com
"Most business continuity experts from an IT background are primarily, if not exclusively, concerned with establishing the ability to recover failed IT services after a serious incident or disaster. While disaster recovery is a necessary part of business continuity, this article promotes the strategic business value of resilience: a more proactive and holistic approach for preparing not only IT services, but also other business processes before an incident in order that an organization will survive incidents that would otherwise have taken it down, and so keep the business operating in some form during and following an incident."
We explain how resilience differs from and complements more conventional approaches to business continuity.  It is a cultural issue with strategic implications and benefits for everyday routine business, not just in crisis or disaster situations. It has implications throughout the organization, including business activities/processes, systems, workers and relationships with third parties. It is an integral and essential part of risk management.


The article discusses resilience in the context of ISO 22301 and ISO27k, and includes a maturity model and metric to help organizations put the strategy into practice.



Dejan and I share a passion for this topic that I hope comes across in our writing. Comments welcome!


21 April 2015

Awareness paper on authentication and phishing metrics

We've just republished a management-level security awareness paper on metrics relating to user authentication and phishing.

The introduction asks "How do we tell whether our authentication controls are effective?" and "What does 'effective' even mean in this context?" - two decent questions that could be addressed through suitable metrics.

Questions like these are central to the GQM (goal-question-metric) method (see IT Security Metrics by Lance Hayden), and not just literally in terms of their position in the handy acronym. They link the organization's goals or objectives relating to information security, to the information security metrics that are worth measuring.

In your particular circumstances, the effectiveness of authentication controls might or might not be of sufficient concern to warrant generating the associated metrics. Other aspects might take precedence, for example the amount invested in authentication controls, and the ongoing operating and maintenance costs of those controls. It's usually not too hard to think up a whole raft of aspects, parameters or concerns relating to the topic area, but focusing on the things that are likely to matter most to the organization (business priorities) is a good way to keep the list within reasonable bounds. Once you know what they are, the next step is to figure out the questions arising e.g. "Are we spending appropriately (neither too much nor too little) on authentication?"

From there, it's simply a matter of deciding what data would help address the questions, and those are your metrics!  Job done!  Errr, well, no, not quite: if you have several goals/areas of concern and numerous questions arising, each requiring multiple metrics to generate the answers, there is a distinct risk of being overwhelmed with possibilities. It is infeasible and in fact counterproductive to attempt to measure everything. Less is more! This is where the PRAGMATIC method comes into play as a way to whittle down the long list to a shortlist of metrics showing the most promise. The GQM approach also suggests filtering out the metrics that don't address the questions very well, and trimming down on metrics addressing questions that are only marginally related to the organization's business goals. Both approaches have their merits.
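
To illustrate the flow from goal through question to metric, here's a minimal sketch using the authentication example above - the goal, questions and metric names are merely plausible placeholders, not recommendations:

# Goal -> questions -> candidate metrics (illustrative content only).
gqm = {
    "Authentication controls are effective and appropriately resourced": {
        "Are our authentication controls effective?": [
            "Proportion of systems enforcing strong authentication",
            "Access incidents traced to authentication failures",
        ],
        "Are we spending appropriately (neither too much nor too little) on authentication?": [
            "Total cost of ownership of authentication controls",
            "Authentication spend as a proportion of the information security budget",
        ],
    }
}

for goal, questions in gqm.items():
    print("GOAL:", goal)
    for question, metrics in questions.items():
        print("  Q:", question)
        for metric in metrics:
            print("    M:", metric)

The long list that drops out of an exercise like this is exactly what the PRAGMATIC scoring (or the GQM filtering mentioned above) is there to whittle down.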



10 April 2015

3 more metrics papers

We've just published another three documents on security metrics, written and first released five years ago as part of the management stream in the NoticeBored information security awareness service.

The first paper was concerned with measuring integrity.  Despite being one of the three central pillars of information security, integrity is largely overshadowed by availability and, especially, confidentiality ... and yet, if you interpret 'integrity' liberally, it includes some extremely important information security issues. The 'completeness and correctness' angle is pretty obvious, while 'up to date-ness' and 'appropriateness' are less well appreciated.  Add in the character and trustworthiness of people, and integrity takes on a rather different slant (Bradley Manning, Julian Assange and Edward Snowden springing instantly to mind as integrity failures).  An 'honesty metric' is an innovative idea.

The integrity metrics paper also suggests measuring the integrity of the organization's security metrics program or system of measurements, on the basis that metrics ought to be accurate, complete, up-to-date and relevant. The metrics integrity issue is obvious when you think about it. Managing with poor quality information is less than ideal.  However, in our experience, information security metrics are mostly taken at face value: we usually focus on what the numbers are telling us without even considering that they might perhaps be wrong, misleading, incomplete or inconsequential. Worse still, we get so distracted by the fancy "infographics" that the information content is almost irrelevant.  That's hardly a scientific approach!  We have raised this issue before in relation to treating published security surveys as gospel, blithely ignoring the fact that most are statistically dubious if not patently biased marketing copy. Remember this the next time you search the web for pie charts to illustrate your security investment proposals, or the next time someone tries to persuade you to loosen the purse strings!

A short, humdrum paper on IT audit metrics suggests a few ways to measure the IT audit function, such as "IT audit program coverage" as well as conventional management metrics.  

The third paper on malware metrics was virtually the same as the version released a year earlier. We made some changes the following year, partly due to the research and thinking that went into writing PRAGMATIC Security Metrics ... but you'll have to wait just a bit longer for the 2009 paper.

02 April 2015

Management without metrics - how?

The SEC (Security Executive Council - not the Securities and Exchange Commission!) boldly describes itself as "the leading research and advisory firm that specializes in security risk mitigation."  Their primary interest appears to be physical security, although they also make the odd nod towards IT security, business continuity and 'convergence'.

The SEC conducted an unscientific online poll, asking respondents to self-assess and report the capability maturity of their security programs using the classic 5 point SEI-CMM scale.  Unsurprisingly, the results show a vaguely normal distribution about the middle value ('defined'), skewed towards the low end of the maturity scale.

It appears they may have asked a separate question about metrics:
"When participants were asked about metrics (a higher level of maturity), 64% said they did not use business value metrics (metrics that are beyond initial "counting" of activities such as number of background checks performed or number of badges issued)."
So only about a third of their respondents have security metrics other than the absolute basics - a pathetically low proportion that raises the obvious question "How are they managing security without metrics?"

Answers on a postcard please.  Or comment below.

21 March 2015

Metrics matter (updated)

An article by Mintz Levin about the 2013 privacy breach/information security incident at US retailer Target stated that the company has disclosed gross costs of $252 million, with some $90m recovered from its insurer leading to a net cost of $162m, up to the end of 2014 anyway (the incident is not over yet!).

Given that the breach apparently involved personal information on about 40 million people, it's trivial to work out that the incident apparently cost Target roughly $6 per compromised record ($4/record net of insurance payouts) ... but before anyone runs amok with those headline numbers, let's delve a bit deeper.
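
Spelling out that headline arithmetic (using the figures above - the per-record results are, of course, only as good as the inputs):

gross_cost = 252_000_000   # disclosed gross costs to the end of 2014
insurance  =  90_000_000   # recovered from insurers
records    =  40_000_000   # reported number of compromised records

print(gross_cost / records)                 # ~$6.30 per record, gross
print((gross_cost - insurance) / records)   # ~$4.05 per record, net of insurance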

First off, what confidence do we have in the numbers themselves? The article cites its sources as 8-K filings, in other words Target's official reports concerning the incident to the Securities and Exchange Commission. Personally, I'm quite happy with that: the dollar amounts are not mere speculation but (I believe, not being a specialist in US laws and regulations) have been carefully drawn up, audited and formally approved by management - specifically Target's CFO. We could pore over the filed 8-K reports to verify them since they are published on the SEC site, or we could simply accept Mintz Levin's word: they are a law firm so there's a degree of trust and confidence. 

I took the 40 million compromised record count from a convenient web page somewhere - not as easy to verify but there are many such pages reporting similar numbers, so let's assume the figure is based on a count and disclosure by Target.  And let's assume it's correct (yes, another assumption).

Now dig further. Having tried to track and calculate the financial costs from relatively small information security incidents myself, I appreciate just how tough that can be in practice. The costs fall across two main categories, direct and indirect or consequential. The direct costs are a bit of a nightmare to monitor when everyone is running around frantically dealing with the incident at its height, but they can be estimated retrospectively and tracked fairly accurately once things calm down: it's a matter of cost accounting. Simply stated, someone assigns the direct expenses associated with the incident to an accounting code for the incident, and the financial system tots up and spews out the numbers. There are several opportunities for substantial error in there (for instance, significant costs wrongly coded or neglected, and investments in information security/privacy improvements that would have been made anyway, regardless of the incident, being charged against it in order to secure the budgets and inflate the insurance claims), but these errors pale into insignificance against the indirect or consequential costs ...

A serious information security incident that becomes public knowledge seems likely to have an adverse impact on the organization's image and hence its brand values, but how much of an effect, in dollar terms? It's almost impossible to say with any certainty. In the case of a major incident, the company's marketing and financial people could evaluate and estimate the effects using metrics such as customer footfall, turnover, profitability, market surveys and so forth ... but potentially there is a conflict of interest there since those self-same people are charged with maintaining or boosting the company's brands and value, hence they may be understandably reluctant to report bad news to management. Furthermore, there are no easy, generally-accepted, accurate or independently-verifiable ways to convert changes in most of these metrics (such as "brand recognition") into dollars without a great deal of argument and doubt.

On top of that, there is some truth to the saying that "There's no such thing as bad publicity". Publicity of incidents is also publicity for the organizations and individuals involved. Publicity equates to media exposure and brand recognition hence, paradoxically, bad incidents might actually benefit those involved.

That leads us to consider stock price as another possible measure of the gross effects of an incident, one that conveniently enough is already in dollars and is widely reported, with historical data just a few clicks away (e.g. see the 3-year Target share price graph to the left here, courtesy of those nice people at MarketWatch.com). Given the number of shares issued (requiring a few more clicks), it's not too hard to convert the share price at any point into a market capitalization value for the company, and thus to calculate the effect the incident had on that value, but now it gets really interesting. After the incident was initially disclosed and widely reported in 2013, Target's share price declined markedly and then recovered in 2014, and is now well above the 2013 peak. What relation does that have to the incident? Again, it's almost impossible to say because there are just so many factors involved: stockbrokers, dealers and investors take a professional interest in identifying, evaluating and predicting those factors, and some of them are very successful so you might try asking them about the incident, but don't be surprised if they confuse you with statistics while keeping their trade secrets to themselves!

The same issue cropped up in the Sony hack at the end of last year. Sony's share price (plotted on the right over the past 6 months) has moved quite consistently upwards. There was a noticeable dip around the year end but it pretty much recovered its original trajectory by the end of January. I'm quite sure I could fit a straight line trend to the data with little statistical variance. 

OK, is that all there is to it? Well, no, we're not finished yet, not by a long chalk. 

So far we've only considered Target's costs: what about those whose personal information was disclosed, and the banks and other companies who have lost out to identity fraud? How much has the incident as a whole cost? How on Earth can we measure or calculate that? Once again, the short answer is that we can only estimate at best. 

What price would YOU put on the personal aggravation and grief caused by discovering that YOUR privacy has been breached and you may be the victim of identity theft? Go ahead, think about it and name your price! If enough of us did so, we might generate some sort of mean value but it's obviously highly subjective and doubtless extremely sensitive to the context and the precise questions we pose - plus of course there's the issue of our sampling strategy and sample size, since we can't ask everyone. Unfortunately, even a small error in our per-victim cost estimate will be massively amplified if we multiply that by the 40 million, so we really ought to take more care over this if the numbers matter - which they surely do as we'll come on to in a moment.

First, though, consider that the relationship between the total cost of a privacy breach/incident and the number of records disclosed is generally implied, but that is another unproven and potentially highly misleading assumption. We don't actually know the nature of the relationship, and it is likely to vary according to a number of factors aside from just the number of records. Identities belonging to the rich and famous are probably worth much more to identity thieves than those belonging to the poor, for example, so a breach involving data from high-worth individuals, organizations or celebrities seems likely to result in greater losses than one involving the same number of records for "ordinary" people. Different items or types of information vary markedly in their inherent value (e.g. contrast the value of someone's email address or phone number to their credit card number - and then consider the additional value to fraudsters of obtaining multiple items in linked records). One might argue on basic arithmetic that the per-record costs decrease exponentially as the number of records increases, or that the relationship is non-linear due to the additional impact of news headlines with nice round figures ("more than 40 million" is worse than "almost 40 million", and far worse than "40 thousand"!). 

In privacy breaches, the black-market price of credit card numbers etc. is sometimes used to estimate the overall costs (e.g. if 'the average' record is worth, say, $2 to criminals, then 40m records are worth $80m). That simplistic approach raises various questions about how we determine the black-market price (which, by its very nature, is not openly available information), and at what point we measure it (since the value of stolen credit card numbers declines quite rapidly as word about the incident spreads and victims, banks and credit card companies progressively identify and cancel the cards). Furthermore, the costs accruing to the victims (i.e. Target and its owners/stakeholders, the data subjects, the banks and other institutions involved, oh and the FBI, police etc.) as a result of the incident may be related to but almost certainly exceed the profits accruing to the identity thieves, fraudsters and assorted middle-men exploiting it. Society as a whole picks up the discrepancies in a diffuse fashion.

That brings us to our final issue. Who cares how much infosec incidents such as this actually cost anyway? It matters because the information gets used in all sorts of ways, for example to justify investment in information security and privacy controls, incident management, insurance premiums, identity theft cover, contingency sums and more. It gets used for budgeting and benchmarking, for policy- and law-making. It feeds into our general appreciation of the information risks associated with personal information, and information risks as a whole. 

Stepping back a pace or two, this whole issue could be considered the elephant in the room for information risk and security professionals. We put enormous effort into promoting and justifying investments in information security controls to reduce the probability of, and damage caused by, incidents, trying our level best to persuade management to take heed of our concerns, support our business cases and invest adequately in security, especially proactive measures, systematic approaches and good practices such as ISO27k ... but if we look coldly and dispassionately at the situation including the assumptions and arguments laid out above, it could be said that incidents are not nearly as bad as we tend to make out, in other words we are crying wolf.  

Oh oh!  I guess we ought to firm up some of those estimates and assumptions, pronto, before we all lose our jobs! Metrics do matter, in fact.

PS The 2015 Verizon Data Breach Investigation Report attempts to define the mathematical relationship between 'Payout' and 'Records Lost' in so-called data breach incidents (see figure 21 and associated text), but acknowledges that although they have improved their model, they still don't have a firm grasp of all the relevant factors. Perhaps this blog piece will prompt them to re-evaluate their assumptions and presumptions, maybe even to do the research given the data and other resources available to them. Don't hold your breath though. I fully expect the mythical linkage between incident costs and records compromised to persist for many years yet, despite my best efforts. It's the infosec equivalent of the search for the holy grail - the Monty Python version. 

03 March 2015

Comparative security metrics

In situations where it is infeasible or impracticable to quantify something in the form of a discrete count or a value in specific units, comparative or relative measures are a useful alternative. They are better than not measuring at all, and in some cases easier to comprehend and more useful in a practical sense. In this respect, we disagree with those in the field who fervently insist that all metrics must be expressed as numbers of units (e.g. "20 centimetres"). It seems to us "A bit longer than a pencil", while obviously imprecise, might be a perfectly legitimate and helpful measure of something (regardless of what that thing might be - a cut on your arm for instance).

Cardinal numbers and units of measure have their place, of course, but so do ordinals, comparatives and even highly subjective measures - all the way down to sheer guesswork (and, yes, 'down to' itself implies a comparative value). Douglas Hubbard's "How To Measure Anything" is an excellent, thought-provoking treatise on this very subject.

In information security, comparisons or relations can provide answers to entirely valid and worthwhile questions such as:

  • Are we more or less secure than our peers?
  • Are we getting more or less secure over time?
  • If we both sustain our present rate of change, how long will it be before we surpass our competitors' level of information security?
  • Are our information risks increasing or decreasing?
  • Which are our strongest and weakest areas or aspects of security?
  • Of all the myriad changes currently occurring in information security, what are the most worrying trends?
  • Does information risk X fall within or exceed our risk appetite or tolerance?
  • Which business unit, function, department or site is the most/least vulnerable?
  • Are we spending too little, about the right amount, or too much on information security?
As part of an information security awareness case study on 'the Sony hack', a management discussion paper describes three types of comparative security metrics with several examples of each.
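
As a trivial Python illustration of the comparative style (the business units and scores are made up), the outputs are a ranking and a direction of travel rather than absolute values in defined units - often all that management needs for prioritization:

# Made-up relative vulnerability assessments for four business units,
# scored last quarter and this quarter on an arbitrary internal scale.
scores = {
    "Finance":    (62, 55),   # (last quarter, this quarter)
    "Sales":      (48, 60),
    "Operations": (70, 68),
    "R&D":        (35, 40),
}

# Comparative outputs: ranking (most vulnerable first) and trend per unit.
print("Most to least vulnerable:",
      sorted(scores, key=lambda unit: scores[unit][1], reverse=True))
for unit, (before, now) in scores.items():
    trend = "worse" if now > before else "better" if now < before else "unchanged"
    print(f"{unit:11} {trend}")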


23 February 2015

Management awareness paper on contingency metrics

Here's the next security awareness paper in the series, describing metrics relating to contingency and business continuity management.

"Measuring the effectiveness of contingency arrangements is a tough challenge, not least because (like insurance policies) we hope we will never need to use them. However it makes sense to measure our investment in contingency plans and preparations, and to confirm whether management is sufficiently confident in them, prior to enacting them as by that stage it will be too late."

Possible contingency metrics suggested in the paper include:

  • RTO and RPO - classic disaster recovery metrics in their own right
  • Resilience - measured by incidents
  • Recovery - proportions of systems for which RTO/RPO are defined, tested and met
  • Costs - easier to measure than benefits, and yet an uncommon metric in practice
  • Management confidence - to what extent do managers believe in the contingency arrangements?
There are many other possible metrics in this area.  What do you measure? Why?  What do your contingency or business continuity metrics tell you?  
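
For the 'Recovery' bullet above, here's a minimal Python sketch - assuming a simple inventory recording, per system, whether RTO/RPO are defined, tested and actually met in the most recent exercise (the systems and flags are invented):

# Illustrative inventory: per system, are RTO/RPO defined, tested and met?
systems = {
    "ERP":        {"defined": True,  "tested": True,  "met": True},
    "Email":      {"defined": True,  "tested": True,  "met": False},
    "File store": {"defined": True,  "tested": False, "met": False},
    "Intranet":   {"defined": False, "tested": False, "met": False},
}

for stage in ("defined", "tested", "met"):
    proportion = sum(s[stage] for s in systems.values()) / len(systems)
    print(f"RTO/RPO {stage}: {proportion:.0%}")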

11 February 2015

The art of security metrics

A security metrics opinion piece by Elden Nelson in CSO Online identifies and expands on the following four issues:
  1. "Communications problems are due to a tool-centric rather than risk-centric view of security."While I accept that tool-centrism is not good, I disagree with the casual but simplistic implications that 'a tool-centric view of security' is the cause of communications problems, or that a 'risk-centric view of security' is necessarily the alternative. It sems to me there are many problems in communicating security metrics. The security or security reporting tools per se are less of an issue, in my opinion, than factors such as many technologists' fundamental misunderstandings about their own roles in the organization, about business management, strategy, risk and statistics, plus their apalling communications skills. Furthermore, communications problems are surmountable: given enough time and effort, we can get better at communicating things, putting our points across, but what is it that are we trying to communicate? That, to me, is a much more significant issue with metrics. Turning the focus of metrics from tools to risks is an improvement, but is that sufficient? I don't think it goes far enough: risks don't matter so much as risks to, and opportunities for, the organization and achievement of its business objectives. Relevance is an issue.
  2. "The volume of security products in the market make seamless metrics and reporting very difficult."
    Following closely on the heels of the previous one, this issue is a red herring. Managers don't care about 'security products'. For the most part, they don't even merit a second thought, except perhaps when someone comes cap-in-hand for yet another sizeable investment in some perplexing security technology with the strong likelihood of there being nothing concrete to show for it (an inherent problem with security improvements and risk reductions). Technologists are obsessive about their tech tools, whereas managers are obsessive about the business, things such as targets and objectives, risks and opportunities, efficiencies and budgets, effectiveness and outcomes, compliance obligations, and most of all getting the most out of people, organizations and situations. The tools we use along the way are, for the most part, just the means to an end, not ends in themselves. It's not the computer screen, telephone or paper that matters but the information it conveys.
  3. "Aggregate security products for seamless metrics and better communication."
    What is it with 'seamlessness'? I literally don't understand why anyone would consider the 'absence of seams' relevant to metrics, nor why aggregating products is even mentioned, while the author makes no attempt to enlighten us. The third issue falls headlong into the trap we were warned about in issue one: information security metrics aren't about security tools or products. The Mona Lisa is not a globally renowned work of art due to the astounding features of Leonardo da Vinci's palette knife.
  4. "Security has moved to the central business functions—it’s no longer just an IT issue."
    Leaving aside the question of whether it was ever 'just an IT issue', IT security is history: today, enlightened professionals think and speak not in terms of IT security or cybersecurity but information security and information risk. The technology part is incidental, a mere commodity for the most part. Data is 'just ones and zeroes' with negligible inherent value, in direct contrast to the meaning, the knowledge, the intangible information content encoded in the numbers. The canvas beneath the Mona Lisa's image is, after all, just canvas. The paint is just paint. The physical representation of a lady sitting in a chair is largely incidental to the artwork.  Remember Magritte's "Ceci n'est pas une pipe"?

Security metrics are more representational than literal. Their purpose includes but extends well beyond the mere communication of facts. PRAGMATIC security metrics encourage their recipients to contemplate the meaning and implications for the business, leading to decisions, attitudinal shifts and (in some cases) changes to behaviours and activities. If you fail to appreciate the difference, and don't make the effort to provide relevant, topical information in a useable, meaningful form, your security metrics are doomed. Don't forget that other business information flowing around the typical corporation is, in effect, competing for the same head-space. Your security metrics need to make an impact - and, no, we're not talking about primary colors and animations, or smacking people in the head, tempting though that may be.

Management awareness paper on office information security metrics


The NoticeBored security awareness module from which we've plucked this management-level discussion paper covered information security issues relevant to the typical office or corporate workplace.

In effect, offices are information factories. Office information security controls are essential to keep the factory, its machine tools, operators and production processes running smoothly, efficiently and profitably, and to protect office-based and accessible information assets (paperwork, computer files, and white-collar workers) from all manner of risks.

Office security concerns include:
  • Intruders - burglars, industrial spies and 'lost' visitors wandering loose about the place
  • Fires, floods and accidents 
  • Various logical/IT security incidents affecting the office network and file system, workstations, email and other applications
  • Procedural issues such as workers' and visitors' failure to comply with office information security policies and procedures.
This short awareness paper outlined just a few office security metrics, without delving into details. At the time it was written (2008), we lacked the means to analyze metrics in much detail since the PRAGMATIC approach had not yet been invented. Looking back on it now, the paper is fairly typical of its day, quite naive in approach, leaving the reader to contemplate and perhaps choose between the metrics suggested.  

10 February 2015

63,000 data points

The 2014 Data Breach Investigations Report (DBIR) by Verizon concerns ~63,000 incidents across 95 countries that were investigated by 50 organizations, including Verizon of course.

Fair enough ... but what exactly qualifies as an "incident"?  According to the report:
  • Incident: A security event that compromises the integrity, confidentiality, or availability of an information asset. 
  • Breach: An incident that results in the disclosure or potential exposure of data. 
  • Data disclosure: A breach for which it was confirmed that data was actually disclosed (not just exposed) to an unauthorized party.
Those definitions are useful, although for various reasons I suspect that the data are heavily biased towards IT (a.k.a. "cyber") incidents. 

~1,300 of the ~63,000 incidents were classified as breaches - an interesting metric in its own right: ~98% of incidents evidently did not result in the disclosure or potential exposure of data. For the beleaguered Chief Information Security Officer or Information Security Manager, that's a mixed blessing. On the one hand, it appears that the vast majority of incidents are being detected, processed, and presumably stopped in their tracks, without data exposure. Information security controls may have failed to prevent the 63,000 incidents, but it appears they did prevent 98% of them becoming actual breaches. That's cause for celebration, isn't it?

On the other hand, however, the 2% of incidents that actually did disclose or expose data clearly represent far more serious business impacts. Figuring out whether incidents are trivial and stoppable or are likely to become breaches is difficult at the time, hence there is little option but to respond to all incidents by default as if they are serious, resulting perhaps in a blasé attitude. 

Worse still, there is a distinct possibility that significantly more than 2% of the incidents were in fact breaches but were either not recognized or not acknowledged as such. The 2% represent abject failures of information security - hardly something that the CISO or ISM is going to admit! If they are responsible for reporting the associated metrics, these figures are dubious.  [I suspect a substantial proportion of the incidents classified as breaches were so classified because of the involvement of auditors and other independent parties, including customers and other business partners who were directly impacted and 'made a fuss'. I wonder how many purely internal breaches - breaches involving confidential business information/trade secrets as opposed to credit card numbers - were simply hushed-up and don't appear in the breach or data disclosure numbers? We won't find out from the 2014 DBIR.]

Turning now to figure 16 in the report, I'm fascinated by the patterns here:


Take the second and third categories, for instance. Web App Attacks led to more than a third of breaches, yet represented only 6% of incidents. That tells me we have a serious problem with vulnerabilities in web applications. Conversely, although 18% of incidents were due to insider misuse, they caused only 8% of actual breaches - in other words, less than half of them caused real damage. The other categories in these graphs are equally interesting. Look at "cyber-espionage" for instance: only 1% of incidents caused nearly a quarter of the breaches!  [Contrary to what I said earlier, this seems to indicate that "cyber-espionage" is in fact being reported after all. Further, it points to the difficulties of being a CISO/ISM responsible for responding to and stopping such attacks, even though they are such a tiny fraction of incidents.]
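
Using the figures quoted above (rounded roughly as the report presents them), here's the comparison I'm drawing, sketched in Python - the overall breach rate plus the incident-share versus breach-share contrast per category:

incidents, breaches = 63_000, 1_300
print(f"Overall breach rate: {breaches / incidents:.1%}")   # roughly 2% of incidents became breaches

# Approximate shares quoted above: (percent of all incidents, percent of all breaches).
categories = {
    "Web app attacks": (6, 35),
    "Insider misuse":  (18, 8),
    "Cyber-espionage": (1, 24),
}
for name, (inc_pct, brch_pct) in categories.items():
    print(f"{name:16} {inc_pct:>3}% of incidents, {brch_pct:>3}% of breaches "
          f"(breach share / incident share = {brch_pct / inc_pct:.1f})")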