25 July 2016

Blog merged into NBlog

To cut down on duplication and administration, I have decided that in future I will blog about security metrics on my main information security blog, NBlog (the NoticeBored blog) ... so this will be the final post here on the Security Metametrics blog.

I have merged the previous metrics blog items into NBlog, and I will continue blogging on security metrics alongside information security, governance, compliance, risk, ISO27k etc. whenever inspiration coincides with the free time to express my thoughts.  I'm still just as fascinated by the topic as ever.

If you'd like to continue reading this stuff, please update your bookmarks and blog aggregators to point at blog.noticebored.com   



22 July 2016

Micro vs. macro metrics

Whereas "micro metrics" focus-in on detailed parts, components or elements of something, "macro metrics" pan out to give a broad perspective on the entirety. 

Both types of metric have their uses.

Micro metrics support low-level operational management decisions. Time-sheets, for example, are micro metrics recording the time spent on various activities, generating reports that break down the hours or days spent on different tasks during the period. This information can be used to account for time or to reallocate resources within a team or department. Normally, though, its true purpose is to remind employees that they are being paid for the hours they work, or as a basis on which to charge clients. 

Macro metrics, in contrast, support strategic big-picture management decisions. They enable management to "see how things are going", make course-corrections and change speed where appropriate. The metric "security maturity", for example, has implications for senior managers that are lost on lower levels of the organization. I have a soft spot for maturity metrics: they score strongly on the PRAGMATIC criteria, enabling us to measure complex, subjective issues in a reasonably objective and straightforward fashion.

The sausage-machine metrics churned out automatically by firewalls, enterprise antivirus systems, vulnerability scanners and so forth are almost entirely micro metrics, intensely focused on very specific and usually technical details. There are vast oceans of security-related data. Lack of data is not a problem with micro metrics - quite the opposite.

Some security professionals are 'boiling the ocean' using big data analytics tools in an attempt to glean useful information from micro metrics but a key problem remains. When they poke around in the condensate, they don't really know what they're looking for. The tendency is to get completely lost in the sea of data, constantly distracted by shiny things and obsessing about the data or the analysis ... rather than the information, knowledge, insight and wisdom that they probably should have gone looking for in the first place.

It's like someone stumbling around aimlessly in the dark, hoping to bump into a torch!

Just as bad, when a respected/trusted metrics "expert" discovers a nugget and announces to the world "Hey look, something shiny!", many onlookers trust the finder and assume therefore that the metric must be Good, without necessarily considering whether it even makes sense to their organization, its business situation, its state of maturity, its risks and challenges and so forth ... hence they are distracted once more. As if that's not enough, when others chime in with "Hey look, I've polished it! It's even shinier!", the distractions multiply. 

The bottom-up approach is predicated on and perpetuates the myth of Universal Security Metrics - a set of metrics that are somehow inherently good, generally applicable and would be considered good practice. "So, what should we be measuring in security?" is a very common naive question. Occasionally we see various well-meaning people (yes, including me) extolling the virtues of specific metrics, our pet metrics (maturity metrics in my case). We wax lyrical about the beauty of our pet metrics, holding them up to the light to point out how much they gleam and glint. 

What we almost never do is explain, in any real detail, how our pet metrics help organizations achieve their objectives. We may describe how the metrics are useful for security management, or how they address risk or compliance or whatever, but we almost invariably run out of steam well before discussing how they drive the organization towards achieving its business objectives, except for a bit of vague hand-waving, cloud-like. 

By their very nature, it is even harder to see how micro metrics relate to the organization's business objectives. They are deep down in the weeds. Macro metrics may be up at the forest canopy level but even they are generally concerned with a specific area of concern - information security in my case - rather than with the business.

I guess that's why I like the Goal-Question-Metric approach so much. Being explicit about the organization's goals, its business and other high-level objectives (e.g. ethical or social responsibility and environmental protection), leads naturally into designing macro metrics with a clear business focus or purpose. 

Kind regards,
Gary

28 June 2016

ISO27k conference in San Francisco, end of Sept


It's a 2-day conference plus optional workshops the day before and training courses afterwards, in the final week of September at a smart purpose-built conference facility on the outskirts of San Francisco airport, not far beyond the boundary fence, I think.  Standing speakers may need to duck, and shout.

There will be sessions on:
  • ISO27k basics
  • ISO27k implementation
  • ISO27k for cloud security
  • Integrating ISO 22301 (business continuity) with ISO27k
  • ISO27k metrics …

and more.

Walt Williams of Lattice, Richard Wilshire (ISO/IEC JTC1/SC27 project leader for the total revamp of ISO/IEC 27004 on “Monitoring, measurement, analysis and evaluation” – publication imminent), and Jorge Lozano from PwC are all presenting on metrics at the conference, and FWIW me too.  I’m hoping to persuade Krag to attend as well.   

Aside from the conference sessions, it is lining up to be The Place for security metrics newbies and wise old owls alike to put the world to rights during the coffee breaks, maybe over a meal, and then inevitably at a nearby airport hotel bar until the wee small hours.  Should be a hoot.

Join us?  Register by Aug 15th for the early-booking rate of $530 for the core conference.  Hopefully that leaves enough time to persuade the boss that it will be an invaluable personal development opportunity.  Essential.  Unmissable. 

Priceless.

24 May 2016

Fascinating insight from a graph

Long-time/long-suffering readers of this blog will know that I am distinctly cynical if not scathing about published surveys and studies in the information security realm, most exhibiting substantial biases, severe methodological flaws and statistical 'issues'. Most of them are, to be blunt, unscientific worthless junk, while - worse still - many I am convinced are conscious and deliberate attempts to mislead us, essentially marketing collateral, fluff and nonsense designed and intended to coerce us into believing conjecture rather than genuine attempts to gather and impart actual, genuine facts that we can interpret for ourselves.

Integrity is as rare as rocking-horse poo in this domain. 

Well imagine my surprise today to come across a well-written report on an excellent scientifically-designed and performed study - viz "The accountability gap: cybersecurity & building a culture of responsibility", a study sponsored by Tanium Inc. and Nasdaq Inc. and conducted by a research team from Goldsmiths - an historic institution originally founded in the nineteenth century as the Technical and Recreative Institute for the Worshipful Company of Goldsmiths, one of the most powerful of London’s City Livery Companies. The Goldsmiths Institute’s mission was ‘the promotion of the individual skill, general knowledge, health and wellbeing of young men and women belonging to the industrial, working and poorer classes’. 

"Goldsmiths" (as it is known) is now a college within the University of London, based in Lewisham, a thriving multicultural borough South East of the City, coincidentally not far from where I used to work and live. I think it's fair to equate 'tradition' with 'experience', a wealth of culture, knowledge and expertise that transcends the ages.

I'm not going to attempt to summarize or comment on the entire study here. Instead I restrict my commentary to a single graph, screen-grabbed from the report out of context, hopefully to catch your imagination as it did mine:


That scatter-graph clearly demonstrates the relationship between 'awareness' (meaning the level of cybersecurity awareness determined by the study of over 1,500 qualified respondents - mostly CISOs and non-exec directors plus other senior managers at sizeable UK, US, Japanese, German and Nordic organizations with at least 500 employees) and 'readiness' (essentially, their state of preparedness to repulse and deal with cybersecurity incidents). It is so clear, in fact, that statistics such as correlation are of little value.

In simple terms, organizations that are aware are ready and face medium to low risks (of cybersecurity incidents) whereas those that are neither aware nor ready are highly vulnerable.

Even a correlation as strong and convincing as that does not formally prove a cause-effect relationship between the factors, but it certainly supports the possibility of a mechanistic linkage. It doesn't indicate whether cybersecurity awareness leads or lags readiness, for instance, but let's just say that I have my suspicions. In reality, it doesn't particularly matter.

Please download, read and mull-over the report.  You might learn a thing or two about cybersecurity, and hopefully you'll see what I mean when I contrast the Goldsmiths study with the gutter-tripe we are normally spoon-fed by a large army of marketers, press releases, journalists and social networking sites.

Take a long hard look at the methodology, especially Appendix B within which is the following frank admission:
"Initial examination of the responses showed that three of the Awareness questions were unsatisfactory statistically. (The three related problems were that they did not make a satisfactory contribution to reliability as measured by Cronbach’s alpha; they did not correlate in the expected direction with the other answers; and in at least one case, there was evidence that it meant diferent things to diferent respondents.) With these three questions removed, the Awareness and Readiness questions showed satisfactory reliability (as measured by Cronbach’s alpha)." 
Cronbach's alpha is a statistical measure using the correlation or covariance between factors across multiple tests to identify inconsistencies. The team used it to identify three questions whose results were inconsistent with the remainder. Furthermore, they used the test in part to exclude or ignore particular questions, thereby potentially warping the entire study since they did not (within the report) fully explain why nor how far those particular questions were out of line, other than an oblique comment about differences of interpretation in at least one case. In scientific terms, their exclusion was a crucial decision. Without further information, it raises questions about the method, the data and hence the validity of the study. On the other hand, the study's authors 'fessed up, explaining the issue and in effect asking us to trust their judgement as the original researchers, immersed in the study and steeped in the traditions of Goldsmiths. The very fact that they openly disclosed this issue immediately sets them apart from most other studies that end up in the general media, as opposed to the peer-reviewed scientific journals where such honest disclosures are de rigueur.
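For the curious, here's a rough sketch of how Cronbach's alpha is calculated, using made-up Likert-scale responses rather than the Goldsmiths data:

```python
# Minimal sketch of Cronbach's alpha on made-up Likert-scale survey data
# (illustrative only - not the Goldsmiths dataset).
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = respondents, columns = questionnaire items."""
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five respondents answering four awareness questions on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
# Values around 0.7 or above are conventionally taken as 'satisfactory'
# internal consistency; dropping an item that depresses alpha is exactly
# the kind of judgement call the Goldsmiths team disclosed.
```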

I'd particularly like to congratulate Drs Chris Brauer, Jennifer Barth and Yael Gerson and team at Goldsmiths Institute of Management Studies, not just for that insightful graph but for a remarkable and yet modest, under-stated contribution to the field.  Long may your rocking horses continue defecating  :-)

23 March 2016

Another vendor survey critique

I've just been perusing another vendor-sponsored survey report - specifically the 2016 Cybersecurity Confidence Report from Barkly, a security software company.

As is typical of marketing collateral, the 12-page report is strong on graphics but short on hard data.  In particular, there is no equivalent of the 'materials and methods' section of a scientific paper, hence we don't know how the survey was conducted.  They claim to have surveyed 350 IT pros, for instance, but don't say how they were selected.  Were they customers and sales prospects, I wonder?  Visitors to the Barkly stand at a trade show perhaps?  Random respondents keen to pick up a freebie of some sort for answering a few inane questions?  An online poll maybe?

The survey questions are equally vague.  Under the heading "What did we ask them", the report lists:
  • Biggest concerns [presumably in relation to cybersecurity, whatever that means];
  • Confidence in current solutions, metrics, and employees [which appears to mean confidence in current cybersecurity products, in the return on investment for those products, and in (other?) employees.  'Confidence' is a highly subjective measure.  Confidence in comparison to what?  What is the scale?];
  • Number of breaches suffered in 2015 [was breach defined?  A third of respondents declined to answer this, and it's unclear why they were even asked this]
  • Time spent on security [presumably sheer guesswork here]
  • Top priorities [in relation to cybersecurity, I guess]
  • Biggest downsides to security solutions [aside from the name!  The report notes 4 options here: slows down the system, too expensive, too many updates, or requires too much headcount to manage.  There are many more possibilities, but we don't know whether respondents were given free rein, offered a "something else" option, or required to select from  or rank (at least?) the 4 options provided by Barkly - conceivably selected on the basis of being strengths for their products, judging by their strapline at the end: "At Barkly, we believe security shouldn’t be difficult to use or understand. That’s why we’re building strong endpoint protection that’s fast, affordable, and easy to use"].
Regarding confidence, the report states:
"The majority of the respondents we surveyed struggle to determine the direct effect solutions have on their organization’s security posture, and how that effect translates into measurable return on investment (ROI).  The fact that a third of respondents did not have the ability to tell whether their company had been breached in the past year suggests the lack of visibility isn’t confined to ROI.  Many companies still don’t have proper insight into what’s happening in their organization from a security perspective.  Therefore, they can’t be sure whether the solutions they’re paying for are working or not."
While I'm unsure how they reached that conclusion from the survey, it is an interesting perspective and, of course, a significant challenge for any company trying to sell 'security solutions'.  I suspect they might have got better answers from execs and managers than from lower-level IT pros, since the former typically need to justify budgets, investments and other expenditure, while the latter have little say in the matter.  The report doesn't say so, however.


Elsewhere the report does attempt to contrast responses from IT pros (two-thirds of respondents, about 230 people) against responses from IT executives and managers (the remaining one-third, about 120) using the awkwardly-arranged graphic above.  The associated text states:
"When our survey results came in, we quickly noticed a striking difference in attitudes among IT professionals in non-management positions and their counterparts in executive roles.  These two groups responded differently to nearly every question we asked, from time spent on security to the most problematic effect of a data breach.  Stepping back and looking at the survey as a whole, one particular theme emerged: When it comes to security, executives are much more confident than their IT teams."
Really?  Execs are "much more confident"?  There is maybe a little difference between the two sets of bars, but would you call it 'much' or 'striking'?  Is it statistically significant, and to what confidence level?  Again we're left guessing.
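For what it's worth, checking that sort of claim is straightforward if the raw counts are published. Here's a minimal sketch of a two-proportion z-test using invented numbers (Barkly's report gives percentages and approximate group sizes but not the underlying counts, so treat this purely as an illustration):

```python
# A minimal two-proportion z-test sketch, using invented numbers since
# Barkly's report does not publish the underlying counts.
from math import sqrt, erf

def two_proportion_z(successes1, n1, successes2, n2):
    """Return (z statistic, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Suppose 60% of ~120 execs but only 50% of ~230 IT pros said they were 'confident'
z, p = two_proportion_z(successes1=72, n1=120, successes2=115, n2=230)
print(f"z = {z:.2f}, p = {p:.3f}")
# With these made-up numbers p comes out around 0.08: a visible gap on a bar
# chart, yet not significant at the conventional 95% confidence level.
```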

Conclusion

What do you make of the report?  Personally, I'm too cynical to take much from it.  It leaves far too much unsaid, and what it does say is questionable. Nevertheless, I would not be surprised to see the information being quoted or used out of context - and so the misinformation game continues.

On a more positive note, the survey has provided us with another case study and further examples of what-not-to-do.

19 March 2016

How effective are our security policies?

On the ISO27k Forum today, someone asked us (in not so many words) how to determine or prove that the organization's information security policies are effective. Good question!

As a consultant working with lots of organizations over many years, I've noticed that the quality of their information security policies is generally indicative of the maturity and quality of their approach to information security as a whole. In metrics terms, it is a security indicator.

At one extreme, an organization with rotten policies is very unlikely to be much good at other aspects of information security - but what exactly do I mean by 'rotten policies'? I was thinking of policies that are badly-written, stuffed with acronyms, gobbledegook and often pompous or overbearing pseudo-legal language, with gaping holes regarding current information risks and security controls, internal inconsistencies, out-of-date content etc. ... but there's even more to it than their inherent quality, since policies per se aren't self-contained controls: they need to be used, which in practice involves a bunch of other activities.

At the other extreme, what would constitute excellent security policies? Again, it's not just a matter of how glossy they are. Here are some of the key criteria that I would say are indicative of effective policies:
  • The policies truly reflect management’s intent: management understands, supports and endorses/mandates them, and (for bonus points!) managers overtly comply with and use them personally (they walk-the-talk);
  • They also reflect current information risks and security requirements, compliance obligations, current and emerging issues etc. (e.g. cloud, BYOD, IoT and ransomware for four very topical issues);
  • They cover all relevant aspects/topics without significant gaps or overlaps (especially no stark conflicts);
  • They are widely available and read … implying also that they are well-written, professional in appearance, readable and user-friendly;
  • People refer to them frequently (including cross-references from other policies, procedures etc., ideally not just in the information risk and security realm);
  • They are an integral part of security management, operational procedures etc.;
  • They are used in and supported by a wide spectrum of information security-related training and awareness activities;
  • Policy compliance is appropriately enforced and reinforced, and is generally strong;
  • They are proactively maintained as a suite, adapting responsively as things inevitably change;
  • Users (managers, staff, specialists, auditors and other stakeholders) value and appreciate them, speak highly of them etc.
As I'm about to conduct an ISO27k gap analysis for a client, I'll shortly be turning those criteria into a maturity metric of the type shown in appendix H of PRAGMATIC Security Metrics.  The approach involves documenting a range of scoring norms for a number of relevant criteria, developing a table to use as a combined checklist and measurement tool. Taking just the first bullet point above, for instance, I would turn that into 4 scoring norms roughly as follows:
  • 100% point: "The policies truly reflect management’s intent: management fully understands, supports and endorses/mandates them, managers overtly comply with and use them personally, and insist on full compliance";
  • 67% point: "Managers formally mandate the policies but there are precious few signs of their genuine support for them: they occasionally bend or flout the rules and are sometimes reluctant to enforce them";
  • 33% point: "Managers pay lip-service to the policies, sometimes perceiving them to be irrelevant and inapplicable to them personally and occasionally also their business units/departments, with compliance being essentially optional";
  • 0% point: "Managers openly disrespect and ignore the policies. They tolerate and perhaps actively encourage noncompliance with comments along the lines of 'We have a business to run!'"
During the gap analysis, I'll systematically gather and review relevant evidence, assessing the client against the predefined norms row-by-row to come up with scores based partly on my subjective assessment, partly on the objective facts before me. The row and aggregate scores will be part of my closing presentation and report to management, along with recommendations where the scores are patently inadequate (meaning well below 50%) or where there are obvious cost-effective opportunities for security improvements (low-hanging fruit). What's more, I'll probably leave the client with the scoring table, enabling them to repeat the exercise at some future point e.g. shortly before their certification audit is due and perhaps annually thereafter, hopefully demonstrating their steady progress towards maturity.
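For anyone who fancies mechanizing the scoring table, here's a minimal sketch; the first row's norms are paraphrased from the bullet above, while the other rows, the example scores and the simple unweighted average are illustrative assumptions rather than anything prescribed in the book:

```python
# Minimal sketch of a maturity-metric scoring table of the kind described above.
# The norms for the first criterion are paraphrased from the post; the remaining
# rows, the example scores and the unweighted mean are illustrative assumptions.
SCORING_NORMS = {
    "Management intent": {
        100: "Management fully understands, endorses, complies with and enforces the policies",
        67:  "Policies formally mandated but few signs of genuine management support",
        33:  "Managers pay lip-service; compliance essentially optional",
        0:   "Managers openly disrespect and ignore the policies",
    },
    # ... further rows for risk coverage, readability, usage, maintenance etc.
}

def aggregate_score(row_scores: dict) -> float:
    """Combine the per-row percentage scores into an overall maturity score.
    An unweighted mean is the simplest option; weights could easily be added."""
    return sum(row_scores.values()) / len(row_scores)

# Assessor's judgement for each row, anchored against the predefined norms
assessment = {
    "Management intent": 60,   # between the 33% and 67% norms, nearer the latter
    "Risk coverage": 45,
    "Readability and availability": 70,
    "Compliance enforcement": 30,
}
print(f"Overall policy maturity: {aggregate_score(assessment):.0f}%")
```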

Regards,
Gary

25 February 2016

CIS cyber security metrics

The latest and greatest sixth version of the CIS (Center for Internet Security) Critical Security Controls (now dubbed the "CIS Controls For Effective Cyber Defense") is supported by a companion guide to the associated metrics. Something shiny in the introduction to the guide caught my beady eye:
"There are lots of things that can be measured, but it is very unclear which of them are in fact worth measuring (in terms of adding value to security decisions)."
Sounds familiar. In PRAGMATIC Security Metrics, we said:
"There is no shortage of ‘things that could be measured’ in relation to information security. Anything that changes can be measured both in terms of the amount and the rate of observable change, and possibly in other dimensions as well. Given the dynamic and complex nature of information security, there are a great number of things we could measure. It’s really not hard to come up with a long list of potential security metrics, all candidates for our information security measurement system. For our purposes, the trick will be to find those things that both (a) relate in a reasonably consistent manner to information security, preferably in a forward-looking manner, and (b) are relevant to someone in the course of doing their job, in other words they have purpose and utility for security management."
From there on, though, we part company. 

The CIS approach is highly prescriptive. They have explicitly identified and detailed very specific metrics for each of the recommended controls. For example, the metric associated with control 4.5:
"Deploy automated patch management tools and software update tools for operating system and software/applications on all systems for which such tools are available and safe. Patches should be applied to all systems, even systems that are properly air gapped."
asks 
"How long does it take, on average, to completely deploy application software updates to a business system (by business unit)?". 
To answer that particular question, three distinct values are suggested, viz 1,440, 10,080 or 43,200 minutes (that's a day, a week or a month in old money). It is implied that those are categories or rough guides for the response, so why on Earth they felt the need to specify such precise numbers is beyond me. Curiously, precisely the same three values are used in most if not all of the other suggested metrics relating to time periods ... which might be convenient but disregards the differing priorities/timescales likely in practice. I'd have thought some controls are rather more urgent than others. For instance, the time needed by the organization to restore normal IT services following a disaster is markedly different to that required by an intrusion detection system to respond to an identified intrusion attempt. These are not even in the same ballpark.

The same concern applies to the CIS' proportional metrics. The suggested three choices in all cases are "Less than 1%", "1% to 4%" or "5% to 10%".  

Note that for both types, answers above the maximum value are unspecified.

Note also that the response categories cover different ranges for those types of metric. The timescale values are roughly exponential or logarithmic, whereas the proportions are more linear ... but just as arbitrary. 

Oh and the timescales are point values, whereas the proportions are ranges.

The only rationale presented in the paper for the values is this vagueness:
"For each Measure, we present Metrics, which consist of three “Risk Threshold” values. These values represent an opinion from experienced practitioners, and are not derived from any specific empirical data set or analytic model. These are offered as a way for adopters of the Controls to think about and choose Metrics in the context of their own security improvement programs."
Aside from the curious distinction between measures and metrics, what are we to understand by 'risk thresholds'? Who knows? They are hinting at readers adapting or customizing the values (if not the metrics) but I rather suspect that those who most value the CIS advice would simply accept their suggestions as-is.
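If an adopter did want to customize rather than simply accept the suggested values, it needn't be hard. Here's a minimal sketch: the 'patch_deployment' defaults are the day/week/month values quoted from the CIS paper, whereas the tighter thresholds for the other controls and the banding logic are purely my own illustrative assumptions:

```python
# Sketch of tailoring CIS-style 'risk threshold' buckets per control, rather
# than reusing 1 day / 1 week / 1 month everywhere. The non-default threshold
# choices below are illustrative assumptions, not CIS recommendations.
THRESHOLDS_MINUTES = {
    "patch_deployment":   (1_440, 10_080, 43_200),  # the CIS defaults: day/week/month
    "intrusion_response": (15, 60, 240),            # far tighter, as argued above
    "disaster_recovery":  (240, 1_440, 4_320),      # hours to days, per business impact
}

def risk_band(control: str, elapsed_minutes: float) -> str:
    low, medium, high = THRESHOLDS_MINUTES[control]
    if elapsed_minutes <= low:
        return "within target"
    if elapsed_minutes <= medium:
        return "elevated risk"
    if elapsed_minutes <= high:
        return "high risk"
    return "beyond highest threshold"   # the case the CIS paper leaves unspecified

print(risk_band("intrusion_response", 90))   # -> 'high risk'
print(risk_band("patch_deployment", 90))     # -> 'within target'
```

The point is simply that the thresholds ought to reflect each control's urgency in the organization's own context, not a one-size-fits-all day/week/month.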

Later in the metrics paper, the style of metrics changes to this:
"CSC 1: Inventory of Authorized and Unauthorized Devices - Effectiveness Test. To evaluate the implementation of CSC 1 on a periodic basis, the evaluation team will connect hardened test systems to at least 10 locations on the network, including a selection of subnets associated with demilitarized zones (DMZs), workstations, and servers. Two of the systems must be included in the asset inventory database, while the other systems are not. The evaluation team must then verify that the systems generate an alert or email notice regarding the newly connected systems within 24 hours of the test machines being connected to the network. The evaluation team must verify that the system provides details of the location of all the test machines connected to the network. For those test machines included in the asset inventory, the team must also verify that the system provides information about the asset owner."
As I said, this is a highly prescriptive approach, very specific and detailed on the measurement method. It's the kind of thing that might be appropriate for formalized situations where some authority directs a bunch of subservient organizations, business units, sites or whatever to generate data in a standardized manner, allowing direct, valid comparisons between them all (assuming they follow the instructions precisely, which further implies the need for compliance activities).

Anyway, despite my criticisms, I recommend checking out the CIS critical controls for cyber defense. Well worth contemplating.

20 February 2016

Zurich Insurance global cyber risk reports

Zurich Insurance published a web page with a bunch of graphs projecting the global costs and benefits of cybersecurity under various scenarios ... but what do they mean? What is the basis for analysis? I find the graphs confusing, almost devoid of meaning like so many infographics, a triumph of marketing gloss over substance. The page succeeded, however, in catching my beady eye.

Although Zurich neglected to provide a working hyperlink, Google led me inexorably to the research paper from which the graphs were plucked: Risk Nexus: Overcome by Cyber Risks? Economic Benefits and Costs of Alternate Cyber Futures is a report by the Zurich Insurance Group and the Atlantic Council's Brent Scowcroft Center on International Security plus the Pardee Center for International Futures at the University of Denver, a follow-up to their 2014 report: Beyond Data Breaches: Global Aggregations of Cyber Risk.   

Apart from casually referring to "cyberspace" as 'the internet and associated IT', the reports are littered with undefined/vague cyber terms such as "cyber risks", "cyber attacks", "cyber crime", "cyber incidents", "cyber shocks" and "cyber futures". You might be comfortable with "cyber" but replacing it with "Internet-related" suits me better since they are not talking about information or IT security in general, nor about cyberwar in particular - two other common cyber-interpretations.

The 2014 report

The 2014 report conjured up and considered a potential disaster scenario involving a major Internet-related incident at a large communications technology firm triggering cascading failures affecting the global economy, in other words a systemic risk with global repercussions:
"Early on, we nicknamed this project ‘cyber sub-prime’ because we intended it to expose the global aggregations of cyber risk as analogous to those risks that were overlooked in the U.S. sub-prime mortgage market. Problems in that segment spread far beyond the institutions that took the original risks, and proved severe enough to administer a shock that reverberated throughout the entire global economy. At first, the term ‘cyber sub-prime’ was just a quirky nickname, but it soon became a useful analogy, helping us to gain additional insights into cyber risks based on extended parallels with the financial sector."
While there is value in drawing lessons from the global financial crisis, I wonder whether the research team has been blinkered into that particular mode of thinking or world view, ignoring other possible futures such as, say, terrorism, or more gradual (as opposed to sudden) crises such as overpopulation? 

Anyway, the report recommended "several concrete steps that must be taken to overcome these inevitable shocks of the future and prevent what could be called a 'cyber sub-prime' meltdown.  Recommendations to be resilient to cyber shocks include:
  • Putting the private sector at the center of crisis management, since government management of cyber risk lacks the agility needed
  • Developing plans within organizations that have system-wide responsibility that ensure the stability of the system as a whole, rather than risks to an individual organization
  • Creating redundant power and telecommunications suppliers and alternate ISPs connect to different peering points
  • Investing in trained teams ready to respond with defined procedures
  • Conducting simulations of the most likely and most dangerous cyber risks to better prepare"
I appreciate what they are getting at in the first bullet but I'm not sure I agree with it. The private sector may arguably be more 'agile' in managing Internet-related risks, but overall is it doing any better in fact? I see little evidence that the private sector is any more highly protected than the government sector, particularly given differences in the nature of their respective risks. Even if that's true, why did they ignore or discount the obvious strategic option of improving government sector Internet-related security, I wonder? Perhaps the fact that the research was funded by a private-sector insurance company has something to do with it ... 

Their other points about considering systemic risk and developing more resilient infrastructures, effective incident response and training exercises involving simulations are fine by me, conventional and widely supported. The possibility of complete, permanent failure of the Internet is but one of several extreme disaster scenarios that I recommend clients consider for information risk and business continuity management purposes. My key point is not to plan too narrowly for any one particular scenario (or in fact any of the unbounded set of credible situations that could lead to such an outcome, such as an all-out cyberwar) but to use a wide variety of diverse scenarios to develop more comprehensive resilience, recovery and contingency arrangements in a far more general sense. Preparing for the worst case has benefits under less extreme conditions too, while there are far too many scary possibilities to risk being unprepared for what actually transpires.

As to whether those five bullets constitute "concrete steps", I guess it's a matter of perspective or terminology. The report stops well short of providing pragmatic action plans and allocating responsibilities. Not so much rock-hard concrete as sloppy mud! [In contrast, take a look at the ICAO Global Aviation Safety Plan, a strategic approach to ensure continued safety in the global aviation industry, laying out specific actions, responsibilities and timescales: now that's what I call concrete!]

The 2015 report

The risk and economic modeling study evidently continued, leading to last year's report.  I'll leave you to cast a cynical eye over the latest report. I'm too jaded to take it seriously.

19 February 2016

Security awareness metrics

Some say that information security awareness is hard to measure, and yet a moment's thought reveals several obvious, straightforward and commonplace metrics in this area, such as:
  • Attendance numbers, trends, rates or proportions at awareness and training events;
  • Feedback scores and comments from attendees at/participants in said events, or concerning other awareness activities, promotions, media, messages etc.
  • General, broad-brush, state-of-the-nation security awareness surveys of various populations or constituencies conducted on paper or using electronic forms or polls;
  • More specific information recall and comprehension tests relating to awareness topics or sessions, conducted on paper or online (maybe through the Learning Management System);
  • Awareness program metrics concerning activities planned and completed, topics covered (breadth and depth of coverage), budget and expenditure ($ and man-days), comparisons against other forms of security control and against other awareness programs (in other fields and/or other organizations). 

With a little more thinking time, it's quite easy (for me, anyway) to come up with a broader selection of awareness metrics also worth considering: 
  • More elaborate versions of the above, perhaps combining metrics for more meaningful analysis - for instance using attendance records and feedback to compare the popularity and effectiveness of different types of awareness and training events, different topics, different timings, different presenters, different media etc.;
  • Page hit rates, stickiness and various other webserver metrics concerning the popularity of/interest in the information security intranet site, including various elements within it, such as the security policies and specific topic areas;
  • Metrics gleaned from personnel records (e.g. proportions of the workforce with basic, intermediate or advanced qualifications, or with skills and competencies relating to information security, privacy, governance, risk etc., and currency of their skills, knowledge, competencies and qualifications);
  • Targeted surveys/polls comparing and contrasting awareness levels between various groups (e.g. different business units, departments, teams, levels, specialisms, ages, sexes, cultures/nationalities etc.) or times (e.g. before, during and after specific awareness/training events, awareness focus periods, business periods etc.) or topics (e.g. phishing vs. other forms of social engineering, malware, fraud etc.);
  • Workforce security awareness/culture surveys and studies conducted in person by trained and competent survey/research teams (a more expensive method that can generate better quality, richer, more valuable information);
  • Maturity metrics using audits, reviews, surveys and self-assessments to determine the maturity and quality of the organization's overall approach to security awareness and training relative to the state of the art in awareness (as documented in various standards, books and websites);
  • Benchmarking - comparing information security awareness levels, activities, spending etc. against other fields (such as health and safety or legal compliance) or organizations, industries etc.;
  • Risk-based awareness metrics, perhaps assessing the relevance of employee awareness, understanding, knowledge, competence, responsiveness etc. to various information risks, issues or challenges facing the organization, giving a natural priority to the planned awareness and training topics and a basis for budgeting (including resourcing for the security awareness and training program);
  • Risk-based information security metrics looking at myriad sources to identify current information risks, trends, predictions, technology directions, emerging threats etc. (useful for strategic planning in information security, of course, with an obvious link through to the corresponding awareness and training needs);
  • Change metrics concerning change management and changes affecting the organization, especially those relevant to information risk, security, privacy etc., as well as measuring and driving changes within the awareness program itself;
  • Process metrics concerning various information risk, security, privacy, governance and compliance-related processes (again including those concerning awareness and training) and various parameters thereof (e.g. cost and effort, efficiency, effectiveness, consistency, complexity, compliance, creativity, risk ...); 
  • Quality metrics concerning the awareness content/materials including policies, procedures and guidelines: there are many possible parameters here e.g. the style of writing and graphics, professionalism, review and authorization status, breadth and depth of coverage, currency/topicality and relevance, readability (e.g. Flesch scores), interest/engagement levels, consistency;
  • Awareness surveys conducted by information security presenters, trainers and other professionals: people attending training courses, conferences, workshops and so forth are generally accustomed to completing survey/feedback forms concerning the events e.g. the quality and competence of the presenter/trainer/facilitator, the materials, the venue, the catering etc. and, fair enough, that's quite useful information for the planners of such events. Why not also get the people who present/train/facilitate/lead the events to rate their audiences as well, on parameters such as interest in the topic, engagement, knowledge levels, receptiveness etc.?  Your Information Security Management, Security Admin, Help Desk, PC Support, Risk and Compliance people will have a pretty good idea about awareness and competence levels around the organization. Management, as a whole, knows this stuff too, and so do the auditors ... so ask them!;
  • Customer contact metrics for the information security team including the security awareness people, measuring the nature and extent of their interactions with people both within and without the business (e.g. their attendance at professional meetings, conferences, webinars, courses etc.);
  • Various awareness metrics gleaned from Help Desk/incident records relating to events and incidents reported (e.g. mean time to report, as well as mean time to resolve, incidents), help requests (number and complexity, perhaps split out by business unit or department), issues known or believed to have been caused by ignorance/carelessness etc., as well as general security metrics concerning incident rates for various types of information security incident - another driver to prioritize the planning and coverage of your awareness activities.

I could continue but even my eyes are glazing over at this point, so instead I want to end with some quick comments about how to make sense of all those and other options, and how you might go about selecting 'a few good security awareness metrics' that might be worth actually using.

Two specific approaches I recommend are PRAGMATIC and GQM.  

GQM starts with some exploration and analysis of your organization's goals or strategic objectives for information risk, security, privacy, governance, compliance and all that jazz (especially how these aspects support or enable core business), leading to some fairly obvious high-level questions (e.g. "Are we sufficiently compliant with our legal obligations towards privacy?") and thence to the kinds of metrics that would generate the data that might address or answer those questions (privacy compliance metrics in that case).   At a lower level of detail, the same approach can be used to determine the goals, questions and kinds of metrics for security awareness.  [Sorry, I'm not going to do that for you - it's your homework for today!]  [For more on GQM, read Lance Hayden's book IT Security Metrics].
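To make that a little more concrete, here's a minimal sketch of how a GQM breakdown might be written down, using the privacy-compliance question above; the candidate metrics are my own illustrative assumptions, not prescriptions from Hayden's book:

```python
# Minimal sketch of recording a Goal-Question-Metric breakdown. The goal and
# question echo the example in the paragraph above; the candidate metrics are
# illustrative assumptions only.
gqm = {
    "goal": "Meet our legal obligations towards privacy",
    "questions": [
        {
            "question": "Are we sufficiently compliant with our privacy obligations?",
            "candidate_metrics": [
                "Proportion of systems holding personal data with a completed privacy impact assessment",
                "Number and severity of privacy incidents/complaints per quarter",
                "Percentage of staff who completed privacy awareness training in the past year",
            ],
        },
    ],
}

for q in gqm["questions"]:
    print(f"Goal: {gqm['goal']}")
    print(f"  Question: {q['question']}")
    for m in q["candidate_metrics"]:
        print(f"    Candidate metric: {m}")
```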

PRAGMATIC is a rational basis for choosing between a bunch of possible metrics and assorted variants, or to guide the creative development of new metrics, or to drive improvement by weeding out ineffective metrics and getting more value out of those that remain, using nine key criteria or parameters for metrics: Predictiveness, Relevance, Actionability, Genuineness, Meaningfulness, Accuracy, Timeliness, Integrity/Independence and Cost-effectiveness.  [For more on PRAGMATIC, read our book PRAGMATIC Security Metrics, browse this website or blog, or ask me!]
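And as a rough illustration of putting PRAGMATIC to work, here's a minimal sketch that rates a candidate awareness metric against the nine criteria and averages the ratings into an overall score (the example metric and the ratings are, of course, made up):

```python
# Minimal sketch of rating a candidate metric against the nine PRAGMATIC
# criteria and combining the ratings into an overall score (a simple
# unweighted mean here; the example metric and ratings are made up).
CRITERIA = ["Predictiveness", "Relevance", "Actionability", "Genuineness",
            "Meaningfulness", "Accuracy", "Timeliness",
            "Integrity/Independence", "Cost-effectiveness"]

def pragmatic_score(ratings: dict) -> float:
    assert set(ratings) == set(CRITERIA), "rate all nine criteria"
    return sum(ratings.values()) / len(ratings)

candidate = "Proportion of staff passing the annual awareness test"
ratings = {
    "Predictiveness": 60, "Relevance": 80, "Actionability": 75,
    "Genuineness": 50, "Meaningfulness": 85, "Accuracy": 55,
    "Timeliness": 70, "Integrity/Independence": 40, "Cost-effectiveness": 90,
}
print(f"{candidate}: PRAGMATIC score {pragmatic_score(ratings):.0f}%")
# Ranking a shortlist of candidate metrics by score helps weed out the weak ones.
```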

15 February 2016

We don't know, we just don't know UPDATED


Crime-related metrics are troublesome for several reasons.  

Firstly, crime tends to be hidden, out of sight, mostly in the shadows. An unknown number of crimes are never discovered, hence recognized/identified incidents may not be representative of the entire population. Criminals might brag about their exploits to their posse but they are hardly likely to participate willingly in surveys.

Secondly, criminals can't be trusted so even if they did complete the forms, we probably shouldn't swallow their responses. Mind you, if the surveys weren't designed scientifically with extreme care over the precise questions, proper selection of the samples, rigorous statistical analysis, honest reporting etc., then all bets are off. 

Thirdly, the police, governments/authorities, the news media, assorted commercial organizations, professions, industry bodies and pressure groups all have vested interests too, meaning that we probably shouldn't believe their surveys and assessments either, at least not uncritically*. Guess what, if an organization's income or power depends to some extent on the size of The Problem, they may, conceivably, allegedly, be tempted to slightly over-emphasize things, perhaps exaggerating, oh just a little and down-playing or ignoring inconvenient metrics and findings that don't quite align with their world view or objectives. [This one applies to me too as an infosec pro, but recognizing my inherent bias is not the same as counteracting it.]

Fourthly, the metrics vary, for example in how they define or categorize crimes, what countries or areas they cover, and the measurement methods employed. Are US homicide numbers directly comparable with murders in, say, the UK? Are they even comparable, period-on-period, within any constituency? Would deliberately killing someone by running them over 'count' as a car crime, murder, accident, crime of passion, and/or what?

Fifthly, the effects of crime are also hard to account for, especially if you appreciate that they extend beyond the immediate victims. Society as a whole suffers in all sorts of ways because of crime. These effects and the associated costs are widely distributed. 

Sixthly, and lastly for now, crime is inherently scary, hence crime metrics are scary or eye-catching anyway. We risk losing our sense of perspective when considering 'facts' such as the skyrocketing rates of gun crime, home invasions, child abductions or whatever in relation to all the normal humdrum risks of everyday life, let alone all those scares about smoking, obesity, stress, heart disease and cancer. The emotional impact of crime metrics and the way they are portrayed in various media introduces yet more bias. [By the way, the same consideration applies to security metrics: perhaps we should explore that tangent another day.]

So, with all that and more in mind, what are we to make of cybercrime? How many cybercrimes are there? How many remain unidentified? To what extent can we trust our information sources? How do we even define, let alone measure, cybercrime? What is The Problem, and how big is it? And does it really matter anyway if the answer is bound to be scary?

Well yes it does matter because all sorts of things are predicated on cybercrime statistics - strategies, policies (public, corporate and personal), risk assessments, investment and spending plans, budgets and so forth. 

The right answer might be: we don't know. Good luck with all those predicates if that's your final answer! Phone a friend? 50/50?

* Update Feb 20th: according to Cybercrime costs more than you think, "Cybercrime costs the global economy about $450 billion each year", a factoid used (for reasons that are not entirely obvious) to support a call for organizations to plan for incidents. Their sources are not clearly referenced but the paper appears to draw on a glossy report by Allianz, an insurance company with an obvious self-interest in pumping-up the threat level. The Allianz report in turn cited studies by the Ponemon Institute and by McAfee with the Center for Strategic and International Studies, three further organizations with axes to grind in this space. To their credit, the 2014 McAfee/CSIS study openly acknowledged the poor quality of the available data - for instance stating: "... we found two divergent estimates for the European Union, one saying losses in the EU totaled only $16 billion, far less than the aggregate for those EU countries where we could find data, and another putting losses for the EU at close to a trillion dollars, more than we could find for the entire world ..." They also noted particular difficulties in estimating the costs of theft of intellectual property, while simultaneously claiming that IP theft is the most significant component of loss. Naturally, such carefully-worded caveats buried deep in the guts of the McAfee/CSIS study didn't quite make it through to the Allianz glossy or the sales leaflets that cite it. It's a neat example of how, once you unpick things, you discover that incomplete and unreliable information, coupled with rumours, intuition, guesswork, marketing hyperbole and weasel words, have morphed via factoids, soundbytes and headline horrors into 'fact'. Hardly a sound basis for strategic decision-making, or indeed for purchasing commercial goods and services. 

10 February 2016

Cause =/= Effect

Animals like us are fantastic at spotting patterns in things - it's an inherent part of our biology, involving parts of our brains that are especially good at it. Unfortunately, while some patterns are significant, many are not, and our brains are not terribly good at differentiating between the two - in fact, we tend to overemphasize matches, believing them to be especially significant, meaningful and, in a sense, real.

It could be argued that both pattern-recognition and overemphasis on matches are the result of natural selection over millennia, since in the wild, anything that helps us quickly identify and respond to possible attacks by predators, even if there are none, is likely to increase our survival, within reason anyway. Arguably, this is what makes wild animals 'alert', 'nervous' or 'jumpy'. It's a fail-safe mechanism. It's also the root of the fear we feel when we think we are in a dangerous situation, such as walking down a dark alleyway in an unfamiliar city at night. The sense of physical danger heightens our senses and primes our fight-or-flight instincts with a boost of adrenaline. Running away screaming from a harmless vagrant is safer than ignoring potential threats.

However, what I've just done in that paragraph is invent a vaguely plausible scenario and outline it briefly, and some of you now believe it to be true, based on nothing more than its apparent plausibility and my credibility (such as it is). The reason I mentioned running away screaming was to stimulate a visceral reaction in you: the strong emotions that situation invokes add even more emphasis to the story.  It 'makes sense'. In fact, there are many other plausible scenarios or reasons why pattern-recognition and overemphasis might or might not be linked to anything, but having described a particular pattern, it is probably now locked into your brain and perhaps given special significance or meaning.

To illustrate my point, look at pattern-recognition from the predator's perspective: predators need to recognize possible prey and respond ahead of competing predators ... but distinguishing edible prey from everything else (including other predators, animals with poisonous or otherwise dangerous defenses, and rocks) is a critical part of the predator's biology. Attacking anything and everything would be a fail-unsafe approach, the exact opposite of prey. In reality, there are very few 'pure' predators or prey: even prey animals need to eat, while apex predators at the very top of the food chain may have a fear of cannibalism or prey that successfully fights back, so the real world is far more complex than my simplistic description implies.

OK, with that in mind, take a look at this graph:


Sure looks like the red and black lines are related, doesn't it? They track each other, on the whole. Their patterns match quite closely over the 13 year period shown, implying that they are somehow linked. In that specific case, statistical analysis tells us that the two variables have a correlation coefficient of just under 79%, where 100% would mean the two series move in perfect lockstep (indistinguishable) and 0% would mean no linear relationship whatsoever. 79% is a pretty high value, so it is entirely possible that the two variables are indeed linked. 

So, at this point we think we've found a link between <ahem> the annual number of non-commercial space launches globally and the annual number of sociology doctorates awarded in the US - for those are the numbers graphed! Hmmmm.

Yes, you might be able to come up with some vaguely credible reasoning to explain that apparent linkage but, be honest, it would be a stretch of the imagination and would take considerable effort to construct, which you might only be willing to do if you feel the pattern-match is somehow significant (!). Far more likely is that we've simply found a matching pattern, a sheer coincidence, a fluke. If we have enough data available and keep on searching, we can probably find other variables that appear to correlate with either of those two, including some with even higher coefficients of correlation ...

... which I guess is pretty much what someone has done - using automated statistical techniques to find correlations between published data. Have a browse through these spurious correlations for some 29,999 other examples along these lines, and remember all this the next time you see a graph or a description that appears to indicate cause-and-effect linkages between anything. We humans desperately want to see matches. We find them almost irresistible and especially significant, almost magical, verging on real. Unfortunately, we are easily deluded.
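It's easy to demonstrate the dredging effect for yourself. Here's a little sketch that generates thousands of random, unrelated 13-point series and reports the strongest correlation it stumbles across against an equally random 'target' series:

```python
# Sketch of correlation-dredging: generate lots of random, unrelated series
# and report the strongest correlation found against a target series.
import numpy as np

rng = np.random.default_rng(42)
years = 13                                   # same length as the 13-year graph
target = rng.normal(size=years)              # stand-in for 'space launches'

best_r, best_idx = 0.0, None
for i in range(10_000):                      # search many candidate series
    candidate = rng.normal(size=years)
    r = np.corrcoef(target, candidate)[0, 1]
    if abs(r) > abs(best_r):
        best_r, best_idx = r, i

print(f"Strongest correlation found: r = {best_r:.2f} (series #{best_idx})")
# With only 13 data points and 10,000 tries, correlations well above 0.7 turn
# up repeatedly - pure flukes, with no causal link whatsoever.
```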

From that point, it is but a short hop to 'lies, damn lies, and statistics'. Anyone with an axe to grind, sufficient data and a basic grasp of statistics can probably find correlations between things that appear to bolster their claims, and a substantial proportion of their audience will be swayed by it, hijacked by their own biology. I rather suspect that civil servants, politicians and managers are pretty good at that.

By the way, although I recognise the bias, I am far from immune to it. I try to hold back from claiming causal links purely on the basis of patterns in the numbers, and phrase things carefully to leave an element of doubt, but it's hard to fight against my own physiology.

Think on.
Gary.

PS. Finding spurious matches in large data sets is an illustration of the birthday paradox: there is a surprisingly high probability that two non-twin students in the average class were born on the same day of the year. 
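The arithmetic is simple enough to sketch (ignoring leap years and assuming birthdays are spread evenly through the year):

```python
# The arithmetic behind the birthday paradox: probability that at least two
# people in a group of n share a birthday (ignoring leap years and twins).
def birthday_collision_probability(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

for n in (10, 23, 30, 40):
    print(f"{n:>2} people: {birthday_collision_probability(n):.0%} chance of a shared birthday")
# 23 people already gives better than a 50% chance; a class of 30 is about 70%.
```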

PPS The 79% correlation in the example above is only a fraction beneath the 'magical' 80% level. According to Pareto's Principle (I'm paraphrasing), 80% of stuff is caused by 20% of things. It's a rule-of-thumb that seems to apply in some cases, hence we subconsciously believe it can be generalized, and before you know it, it's accepted as truth. The fact that 80% + 20% = 100% is somehow 'special' - it's another obvious but entirely spurious pattern.

25 January 2016

Metrics thought for the day

Where relevant, using current business metrics (also) for information risk and security purposes can be cost-effective if suitable raw data are already being gathered: the additional analysis, reporting and use incur relatively little incremental cost, especially if largely automated.

Corollary: when searching for metrics in any area of information risk and security, don't forget to check through existing business metrics already in use for anything suitable, either as-is or with minor changes.

It would be easier to identify such metrics if the organization maintained a metrics inventory or database ...
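For what such an inventory might look like, here's a minimal sketch; the fields and the example entry are purely illustrative suggestions, not a prescribed schema:

```python
# Minimal sketch of a metrics inventory entry - the fields are illustrative
# suggestions, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class MetricRecord:
    name: str
    owner: str                       # who produces/maintains the metric
    audience: str                    # who uses it, and for what decisions
    source: str                      # system or process supplying the raw data
    frequency: str                   # how often it is measured/reported
    business_purpose: str            # the goal or question it addresses (GQM)
    reused_from_business: bool       # already gathered for another purpose?
    tags: list = field(default_factory=list)

inventory = [
    MetricRecord(
        name="Staff turnover in roles with privileged access",
        owner="HR reporting team",
        audience="CISO / information risk committee",
        source="existing HR management system",
        frequency="monthly",
        business_purpose="Anticipate insider risk and access-revocation workload",
        reused_from_business=True,
        tags=["people", "access management"],
    ),
]
print(f"{len(inventory)} metric(s) catalogued; "
      f"{sum(m.reused_from_business for m in inventory)} reused from existing business metrics")
```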