25 July 2016

Blog merged into NBlog

To cut down on duplication and administration, I have decided that from now on I will blog about security metrics in my main information security blog, NBlog (the NoticeBored blog) ... so this will be the final post here on the Security Metametrics blog.

I have merged the previous metrics blog items into NBlog, and I will continue blogging on security metrics alongside information security, governance, compliance, risk, ISO27k etc. whenever inspiration coincides with the free time to express my thoughts.  I'm still just as fascinated as ever by the topic.

If you'd like to continue reading this stuff, please update your bookmarks and blog aggregators to point at blog.noticebored.com.



22 July 2016

Micro vs. macro metrics

Whereas "micro metrics" focus-in on detailed parts, components or elements of something, "macro metrics" pan out to give a broad perspective on the entirety. 

Both types of metric have their uses.

Micro metrics support low-level operational management decisions. Time-sheets, for example, are micro metrics recording the time spent on various activities, generating reports that break down the hours or days spent on different tasks during the period. This information can be used to account for or reallocate resources within a team or department. Normally, though, its true purpose is to remind employees that they are being paid for the hours they work, or to provide a basis on which to charge clients.
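To make 'micro' concrete, here's a toy sketch of the kind of roll-up a time-sheet report performs. The names, tasks and hours are all invented, but the aggregation from individual entries to hours-per-task is exactly the sort of low-level breakdown these metrics produce:

```python
from collections import defaultdict

# Invented time-sheet entries: (person, task, hours)
timesheet = [
    ("alice", "firewall rule review", 3.5),
    ("alice", "incident handling",    1.0),
    ("bob",   "firewall rule review", 2.0),
    ("bob",   "awareness training",   4.5),
]

# Roll the entries up into hours per task for the reporting period
hours_per_task = defaultdict(float)
for person, task, hours in timesheet:
    hours_per_task[task] += hours

for task, hours in sorted(hours_per_task.items()):
    print(f"{task}: {hours:.1f} hours")
```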

Macro metrics, in contrast, support strategic big-picture management decisions. They enable management to "see how things are going", make course-corrections and change speed where appropriate. The metric "security maturity", for example, has implications for senior managers that are lost on lower levels of the organization. I have a soft spot for maturity metrics: they score strongly on the PRAGMATIC criteria, enabling us to measure complex, subjective issues in a reasonably objective and straightforward fashion.

The sausage-machine metrics churned out automatically by firewalls, enterprise antivirus systems, vulnerability scanners and so forth are almost entirely micro metrics, intensely focused on very specific and usually technical details. There are vast oceans of security-related data. Lack of data is not a problem with micro metrics - quite the opposite.

Some security professionals are 'boiling the ocean' using big data analytics tools in an attempt to glean useful information from micro metrics but a key problem remains. When they poke around in the condensate, they don't really know what they're looking for. The tendency is to get completely lost in the sea of data, constantly distracted by shiny things and obsessing about the data or the analysis ... rather than the information, knowledge, insight and wisdom that they probably should have gone looking for in the first place.

It's like someone stumbling around aimlessly in the dark, hoping to bump into a torch!

Just as bad, when a respected/trusted metrics "expert" discovers a nugget and announces to the world "Hey look, something shiny!", many onlookers trust the finder and assume therefore that the metric must be Good, without necessarily considering whether it even makes sense to their organization, its business situation, its state of maturity, its risks and challenges and so forth ... hence they are distracted once more. As if that's not enough, when others chime in with "Hey look, I've polished it! It's even shinier!", the distractions multiply. 

The bottom-up approach is predicated on and perpetuates the myth of Universal Security Metrics - a set of metrics that are somehow inherently good, generally applicable and would be considered good practice. "So, what should we be measuring in security?" is a very common naive question. Occasionally we see various well-meaning people (yes, including me) extolling the virtues of specific metrics, our pet metrics (maturity metrics in my case). We wax lyrical about the beauty of our pet metrics, holding them up to the light to point out how much they gleam and glint.

What we almost never do is explain, in any real detail, how our pet metrics help organizations achieve their objectives. We may describe how the metrics are useful for security management, or how they address risk or compliance or whatever, but we almost invariably run out of steam well before discussing how they drive the organization towards achieving its business objectives, except for a bit of vague hand-waving, cloud-like. 

By their very nature, it is even harder to see how micro metrics relate to the organization's business objectives. They are deep down in the weeds. Macro metrics may be up at the forest canopy level but even they are generally concerned with a specific area of concern - information security in my case - rather than with the business.

I guess that's why I like the Goal-Question-Metric approach so much. Being explicit about the organization's goals, its business and other high-level objectives (e.g. ethical or social responsibility and environmental protection), leads naturally into designing macro metrics with a clear business focus or purpose.
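As a rough illustration of the idea (the goal, questions and candidate metrics here are entirely made up, not recommendations), a GQM decomposition can be as simple as a little tree that starts from a business goal and works down to candidate macro metrics:

```python
# A purely illustrative Goal-Question-Metric decomposition - the goal, questions
# and candidate metrics below are invented examples, not recommendations.
gqm = {
    "goal": "Maintain customer trust by protecting personal information",
    "questions": [
        {
            "question": "Are we adequately prepared to handle privacy incidents?",
            "candidate_metrics": [
                "Information security maturity score (%)",
                "Proportion of business units with tested incident response plans",
            ],
        },
        {
            "question": "Is our exposure to privacy-related risk rising or falling?",
            "candidate_metrics": [
                "Number and severity of privacy incidents per quarter",
                "Status of privacy-related compliance obligations",
            ],
        },
    ],
}

def list_metrics(tree):
    """Walk the GQM tree, printing each candidate metric beneath its question and goal."""
    print(f"Goal: {tree['goal']}")
    for q in tree["questions"]:
        print(f"  Question: {q['question']}")
        for metric in q["candidate_metrics"]:
            print(f"    Candidate metric: {metric}")

list_metrics(gqm)
```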

Kind regards,
Gary

28 June 2016

ISO27k conference in San Francisco, end of Sept


It's a 2-day conference plus optional workshops the day before and training courses afterwards, in the final week of September at a smart purpose-built conference facility on the outskirts of San Francisco airport, not far beyond the boundary fence I think.  Standing speakers may need to duck, and shout.

There will be sessions on:
  • ISO27k basics
  • ISO27k implementation
  • ISO27k for cloud security
  • Integrating ISO 22301 (business continuity) with ISO27k
  • ISO27k metrics …

and more.

Walt Williams of Lattice, Richard Wilshire (ISO/IEC JTC1/SC27 project leader for the total revamp of ISO/IEC 27004 on “Monitoring, measurement, analysis and evaluation” – publication imminent), and Jorge Lozano from PwC are all presenting on metrics at the conference, and FWIW me too.  I’m hoping to persuade Krag to attend as well.   

Aside from the conference sessions, it is lining up to be The Place for security metrics newbies and wise old owls alike to put the world to rights during the coffee breaks, maybe over a meal, and then inevitably at a nearby airport hotel bar until the wee small hours.  Should be a hoot.

Join us?  Register by Aug 15th for the early-booking rate of $530 for the core conference.  Hopefully that leaves enough time to persuade the boss that it will be an invaluable personal development opportunity.  Essential.  Unmissable. 

Priceless.

24 May 2016

Fascinating insight from a graph

Long-time/long-suffering readers of this blog will know that I am distinctly cynical, if not scathing, about published surveys and studies in the information security realm, most of which exhibit substantial biases, severe methodological flaws and statistical 'issues'. Most of them are, to be blunt, unscientific, worthless junk. Worse still, many are, I am convinced, conscious and deliberate attempts to mislead us: marketing collateral, fluff and nonsense designed and intended to coerce us into believing conjecture, rather than genuine attempts to gather and impart actual facts that we can interpret for ourselves.

Integrity is as rare as rocking-horse poo in this domain. 

Well, imagine my surprise today to come across a well-written report on an excellent, scientifically-designed and performed study - viz "The accountability gap: cybersecurity & building a culture of responsibility", a study sponsored by Tanium Inc. and Nasdaq Inc. and conducted by a research team from Goldsmiths - an historic institution originally founded in the nineteenth century as the Technical and Recreative Institute for the Worshipful Company of Goldsmiths, one of the most powerful of London’s City Livery Companies. The Goldsmiths Institute's mission was ‘the promotion of the individual skill, general knowledge, health and wellbeing of young men and women belonging to the industrial, working and poorer classes’.

"Goldsmiths" (as it is known) is now a college within the University of London, based in Lewisham, a thriving multicultural borough South East of the City, coincidentally not far from where I used to work and live. I think it's fair to equate 'tradition' with 'experience', a wealth of culture, knowledge and expertise that transcends the ages.

I'm not going to attempt to summarize or comment on the entire study here. Instead I restrict my commentary to a single graph, screen-grabbed from the report out of context, hopefully to catch your imagination as it did mine:
[Scatter graph from the report: respondents' 'awareness' scores plotted against their 'readiness' scores]
That scatter-graph clearly demonstrates the relationship between 'awareness' (meaning the level of cybersecurity awareness determined by the study of over 1,500 qualified respondents - mostly CISOs and non-exec directors plus other senior managers at sizeable UK, US, Japanese, German and Nordic organizations with at least 500 employees) and 'readiness' (essentially, their state of preparedness to repulse and deal with cybersecurity incidents). It is so clear, in fact, that statistics such as correlation are of little value.

In simple terms, organizations that are aware are ready and face medium to low risks (of cybersecurity incidents) whereas those that are neither aware nor ready are highly vulnerable.

Even a correlation as strong and convincing as that does not formally prove a cause-effect relationship between the factors, but it certainly supports the possibility of a mechanistic linkage. It doesn't indicate whether cybersecurity awareness leads or lags readiness, for instance, but let's just say that I have my suspicions. In reality, it doesn't particularly matter.

Please download, read and mull-over the report.  You might learn a thing or two about cybersecurity, and hopefully you'll see what I mean when I contrast the Goldsmiths study with the gutter-tripe we are normally spoon-fed by a large army of marketers, press releases, journalists and social networking sites.

Take a long hard look at the methodology, especially Appendix B within which is the following frank admission:
"Initial examination of the responses showed that three of the Awareness questions were unsatisfactory statistically. (The three related problems were that they did not make a satisfactory contribution to reliability as measured by Cronbach’s alpha; they did not correlate in the expected direction with the other answers; and in at least one case, there was evidence that it meant diferent things to diferent respondents.) With these three questions removed, the Awareness and Readiness questions showed satisfactory reliability (as measured by Cronbach’s alpha)." 
Cronbach's alpha is a statistical measure using the correlation or covariance between factors across multiple tests to identify inconsistencies. The team used it to identify three questions whose results were inconsistent with the remainder. Furthermore, they used the test in part to exclude or ignore particular questions, thereby potentially warping the entire study, since they did not (within the report) fully explain why or how far those particular questions were out of line, other than an obtuse comment about differences of interpretation in at least one case. In scientific terms, their exclusion was a crucial decision. Without further information, it raises questions about the method, the data and hence the validity of the study. On the other hand, the study's authors 'fessed up, explaining the issue and in effect asking us to trust their judgement as the original researchers, immersed in the study and steeped in the traditions of Goldsmiths. The very fact that they openly disclosed this issue immediately sets them apart from most other studies that end up in the general media, as opposed to the peer-reviewed scientific journals where such honest disclosures are de rigueur.
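For anyone unfamiliar with the statistic, here's a minimal sketch of Cronbach's alpha in Python - the internal-consistency measure the Goldsmiths team relied on. The toy responses below are invented; the study's raw data are not published in the report:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha: rows = respondents, columns = questions (items)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five invented respondents answering four awareness questions on a 1-5 scale
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 2))  # values near 1 indicate consistent items
```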

I'd particularly like to congratulate Drs Chris Brauer, Jennifer Barth and Yael Gerson and team at Goldsmiths Institute of Management Studies, not just for that insightful graph but for a remarkable and yet modest, under-stated contribution to the field.  Long may your rocking horses continue defecating  :-)

23 March 2016

Another vendor survey critique

I've just been perusing another vendor-sponsored survey report - specifically the 2016 Cybersecurity Confidence Report from Barkly, a security software company.

As is typical of marketing collateral, the 12 page report is strong on graphics but short on hard data.  In particular, there is no equivalent of the 'materials and methods' section of a scientific paper, hence we don't know how the survey was conducted.  They claim to have surveyed 350 IT pro's, for instance, but don't say how they were selected.  Were they customers and sales prospects, I wonder?  Visitors to the Barkly stand at a trade show perhaps?  Random respondents keen to pick up a freebie of some sort for answering a few inane questions?  An online poll maybe?

The survey questions are equally vague.  Under the heading "What did we ask them", the report lists:
  • Biggest concerns [presumably in relation to cybersecurity, whatever that means];
  • Confidence in current solutions, metrics, and employees [which appears to mean confidence in current cybersecurity products, in the return on investment for those products, and in (other?) employees.  'Confidence' is a highly subjective measure.  Confidence in comparison to what?  What is the scale?];
  • Number of breaches suffered in 2015 [was breach defined?  A third of respondents declined to answer this, and it's unclear why they were even asked this]
  • Time spent on security [presumably sheer guesswork here]
  • Top priorities [in relation to cybersecurity, I guess]
  • Biggest downsides to security solutions [aside from the name!  The report notes 4 options here: slows down the system, too expensive, too many updates, or requires too much headcount to manage.  There are many more possibilities, but we don't know whether respondents were given free rein, offered a "something else" option, or required to select from  or rank (at least?) the 4 options provided by Barkly - conceivably selected on the basis of being strengths for their products, judging by their strapline at the end: "At Barkly, we believe security shouldn’t be difficult to use or understand. That’s why we’re building strong endpoint protection that’s fast, affordable, and easy to use"].
Regarding confidence, the report states:
"The majority of the respondents we surveyed struggle to determine the direct effect solutions have on their organization’s security posture, and how that effect translates into measurable return on investment (ROI).  The fact that a third of respondents did not have the ability to tell whether their company had been breached in the past year suggests the lack of visibility isn’t confined to ROI.  Many companies still don’t have proper insight into what’s happening in their organization from a security perspective.  Therefore, they can’t be sure whether the solutions they’re paying for are working or not."
While I'm unsure how they reached that conclusion from the survey, it is an interesting perspective and, of course, a significant challenge for any company trying to sell 'security solutions'.  I suspect they might have got better answers from execs and managers than from lower-level IT pro's, since the former typically need to justify budgets, investments and other expenditure, while the latter have little say in the matter.  The report doesn't say so, however.

[Graphic from the Barkly report: responses from IT professionals contrasted with those from IT executives and managers]
Elsewhere the report does attempt to contrast responses from IT pro's (two-thirds of respondents, about 230 people) against responses from IT executives and managers (the remaining one-third, about 120) using the awkwardly-arranged graphic above.  The associated text states:
"When our survey results came in, we quickly noticed a striking difference in attitudes among IT professionals in non-management positions and their counterparts in executive roles.  These two groups responded differently to nearly every question we asked, from time spent on security to the most problematic effect of a data breach.  Stepping back and looking at the survey as a whole, one particular theme emerged: When it comes to security, executives are much more confident than their IT teams."
Really?  Execs are "much more confident"?  There is maybe a little difference between the two sets of bars, but would you call it 'much' or 'striking'?  Is it statistically significant, and to what confidence level?  Again we're left guessing.

Conclusion

What do you make of the report?  Personally, I'm too cynical to take much from it.  It leaves far too much unsaid, and what it does say is questionable. Nevertheless, I would not be surprised to see the information being quoted or used out of context - and so the misinformation game continues.

On a more positive note, the survey has provided us with another case study and further examples of what-not-to-do.

19 March 2016

How effective are our security policies?

On the ISO27k Forum today, someone asked us (in not so many words) how to determine or prove that the organization's information security policies are effective. Good question!

As a consultant working with lots of organizations over many years, I've noticed that the quality of their information security policies is generally indicative of the maturity and quality of their approach to information security as a whole. In metrics terms, it is a security indicator.

At one extreme, an organization with rotten policies is very unlikely to be much good at other aspects of information security - but what exactly do I mean by 'rotten policies'? I was thinking of policies that are badly-written, stuffed with acronyms, gobbledegook and often pompous or overbearing pseudo-legal language, with gaping holes regarding current information risks and security controls, riddled with internal inconsistencies, out of date etc. ... but there's even more to it than their inherent quality, since policies per se aren't self-contained controls: they need to be used, which in practice involves a bunch of other activities.

At the other extreme, what would constitute excellent security policies? Again, it's not just a matter of how glossy they are. Here are some of the key criteria that I would say are indicative of effective policies:
  • The policies truly reflect management’s intent: management understands, supports and endorses/mandates them, and (for bonus points!) managers overtly comply with and use them personally (they walk-the-talk);
  • They also reflect current information risks and security requirements, compliance obligations, current and emerging issues etc. (e.g. cloud, BYOD, IoT and ransomware for four very topical issues);
  • They cover all relevant aspects/topics without significant gaps or overlaps (especially no stark conflicts);
  • They are widely available and read … implying also that they are well-written, professional in appearance, readable and user-friendly;
  • People refer to them frequently (including cross-references from other policies, procedures etc., ideally not just in the information risk and security realm);
  • They are an integral part of security management, operational procedures etc.;
  • They are used in and supported by a wide spectrum of information security-related training and awareness activities;
  • Policy compliance is appropriately enforced and reinforced, and is generally strong;
  • They are proactively maintained as a suite, adapting responsively as things inevitably change;
  • Users (managers, staff, specialists, auditors and other stakeholders) value and appreciate them, speak highly of them etc.
As I'm about to conduct an ISO27k gap analysis for a client, I'll shortly be turning those criteria into a maturity metric of the type shown in appendix H of PRAGMATIC Security Metrics.  The approach involves documenting a range of scoring norms for a number of relevant criteria, developing a table to use as a combined checklist and measurement tool. Taking just the first bullet point above, for instance, I would turn that into 4 scoring norms roughly as follows:
  • 100% point: "The policies truly reflect management’s intent: management fully understands, supports and endorses/mandates them, managers overtly comply with and use them personally, and insist on full compliance";
  • 67% point: "Managers formally mandate the policies but there are precious few signs of their genuine support for them: they occasionally bend or flout the rules and are sometimes reluctant to enforce them";
  • 33% point: "Managers pay lip-service to the policies, sometimes perceiving them to be irrelevant and inapplicable to them personally and occasionally also their business units/departments, with compliance being essentially optional";
  • 0% point: "Managers openly disrespect and ignore the policies. They tolerate and perhaps actively encourage noncompliance with comments along the lines of 'We have a business to run!'"
During the gap analysis, I'll systematically gather and review relevant evidence, assessing the client against the predefined norms row-by-row to come up with scores based partly on my subjective assessment, partly on the objective facts before me. The row and aggregate scores will be part of my closing presentation and report to management, along with recommendations where the scores are patently inadequate (meaning well below 50%) or where there are obvious cost-effective opportunities for security improvements (low-hanging fruit). What's more, I'll probably leave the client with the scoring table, enabling them to repeat the exercise at some future point e.g. shortly before their certification audit is due and perhaps annually thereafter, hopefully demonstrating their steady progress towards maturity.
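To give a feel for the mechanics, here's a rough sketch of how such a scoring table might be tallied - emphatically not the actual worksheet from appendix H, and all the criteria, norms and assessed scores below are invented placeholders:

```python
# Each row carries predefined scoring norms; the assessor picks a row score
# against the evidence, and the rows are averaged into one maturity percentage.
from statistics import mean

rows = [
    {
        "criterion": "Management intent (walk-the-talk)",
        "norms": {100: "Fully understood, endorsed and personally complied with",
                  67:  "Formally mandated but little visible support",
                  33:  "Lip-service only; compliance essentially optional",
                  0:   "Openly disrespected and ignored"},
        "assessed": 67,   # assessor's judgement against the evidence gathered
    },
    {
        "criterion": "Coverage of current risks and compliance obligations",
        "norms": {100: "Comprehensive and current", 67: "Mostly current",
                  33: "Significant gaps", 0: "Largely obsolete"},
        "assessed": 33,
    },
    {
        "criterion": "Availability, readability and actual use",
        "norms": {100: "Widely read and referenced", 67: "Available but rarely used",
                  33: "Hard to find or understand", 0: "Effectively unused"},
        "assessed": 67,
    },
]

overall = mean(row["assessed"] for row in rows)
print(f"Policy maturity score: {overall:.0f}%")   # 56% for these invented scores
```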

Regards,
Gary

25 February 2016

CIS cyber security metrics

The latest and greatest sixth version of the CIS (Center for Internet Security) Critical Security Controls (now dubbed the "CIS Controls For Effective Cyber Defense") is supported by a companion guide to the associated metrics. Something shiny in the introduction to the guide caught my beady eye:
"There are lots of things that can be measured, but it is very unclear which of them are in fact worth measuring (in terms of adding value to security decisions)."
Sounds familiar. In PRAGMATIC Security Metrics, we said:
"There is no shortage of ‘things that could be measured’ in relation to information security. Anything that changes can be measured both in terms of the amount and the rate of observable change, and possibly in other dimensions as well. Given the dynamic and complex nature of information security, there are a great number of things we could measure. It’s really not hard to come up with a long list of potential security metrics, all candidates for our information security measurement system. For our purposes, the trick will be to find those things that both (a) relate in a reasonably consistent manner to information security, preferably in a forward-looking manner, and (b) are relevant to someone in the course of doing their job, in other words they have purpose and utility for security management."
From there on, though, we part company. 

The CIS approach is highly prescriptive. They have explicitly identified and detailed very specific metrics for each of the recommended controls. For example, the metric associated with control 4.5:
"Deploy automated patch management tools and software update tools for operating system and software/applications on all systems for which such tools are available and safe. Patches should be applied to all systems, even systems that are properly air gapped."
asks 
"How long does it take, on average, to completely deploy application software updates to a business system (by business unit)?". 
To answer that particular question, three distinct values are suggested, viz 1,440, 10,080 or 43,200 minutes (that's a day, a week or a month in old money). It is implied that those are categories or rough guides for the response, so why on Earth they felt the need to specify such precise numbers is beyond me. Curiously, precisely the same three values are used in most if not all of the other suggested metrics relating to time periods ... which might be convenient but disregards the differing priorities/timescales likely in practice. I'd have thought some controls are rather more urgent than others. For instance, the time needed by the organization to restore normal IT services following a disaster is markedly different to that required by an intrusion detection system to respond to an identified intrusion attempt. These are not even in the same ballpark.

The same concern applies to the CIS' proportional metrics. The suggested three choices in all cases are "Less than 1%", "1% to 4%" or "5% to 10%".  

Note that for both types, answers above the maximum value are unspecified.

Note also that the response categories cover different ranges for those types of metric. The timescale values are roughly exponential or logarithmic, whereas the proportions are more linear ... but just as arbitrary. 

Oh and the timescales are point values, whereas the proportions are ranges.
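For what it's worth, here's how I read the time-based thresholds being applied in practice - a minimal sketch in which the bucketing logic is my own interpretation, since the CIS guide specifies only the three point values themselves:

```python
# The three CIS "Risk Threshold" point values for time-based metrics
THRESHOLDS_MINUTES = [1_440, 10_080, 43_200]   # 1 day, 1 week, ~1 month

def risk_bucket(avg_minutes: float) -> str:
    """Return the lowest CIS threshold the measured average falls within."""
    for threshold in THRESHOLDS_MINUTES:
        if avg_minutes <= threshold:
            return f"within {threshold} minutes"
    return "beyond the largest threshold (unspecified by the CIS guide)"

print(risk_bucket(2_000))    # within 10080 minutes
print(risk_bucket(90_000))   # beyond the largest threshold (unspecified by the CIS guide)
```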

The only rationale presented in the paper for the values is this vagueness:
"For each Measure, we present Metrics, which consist of three “Risk Threshold” values. These values represent an opinion from experienced practitioners, and are not derived from any specific empirical data set or analytic model. These are offered as a way for adopters of the Controls to think about and choose Metrics in the context of their own security improvement programs."
Aside from the curious distinction between measures and metrics, what are we to understand by 'risk thresholds'? Who knows? They are hinting at readers adapting or customizing the values (if not the metrics) but I rather suspect that those who most value the CIS advice would simply accept their suggestions as-is.

Later in the metrics paper, the style of metrics changes to this:
"CSC 1: Inventory of Authorized and Unauthorized Devices - Effectiveness Test. To evaluate the implementation of CSC 1 on a periodic basis, the evaluation team will connect hardened test systems to at least 10 locations on the network, including a selection of subnets associated with demilitarized zones (DMZs), workstations, and servers. Two of the systems must be included in the asset inventory database, while the other systems are not. The evaluation team must then verify that the systems generate an alert or email notice regarding the newly connected systems within 24 hours of the test machines being connected to the network. The evaluation team must verify that the system provides details of the location of all the test machines connected to the network. For those test machines included in the asset inventory, the team must also verify that the system provides information about the asset owner."
As I said, this is a highly prescriptive approach, very specific and detailed on the measurement method. It's the kind of thing that might be appropriate for formalized situations where some authority directs a bunch of subservient organizations, business units, sites or whatever to generate data in a standardized manner, allowing direct, valid comparisons between them all (assuming they follow the instructions precisely, which further implies the need for compliance activities).
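To illustrate the flavour of that test, here's a toy sketch of how the 24-hour detection check might be recorded and evaluated - my own interpretation of the logic described above, with invented hostnames and timestamps, not something taken from the CIS guide:

```python
from datetime import datetime, timedelta

# When each hardened test system was connected, and when (if ever) the
# inventory/alerting system noticed it. All timestamps are invented.
test_results = {
    "test-host-01": {"connected": datetime(2016, 2, 25, 9, 0),
                     "alerted":   datetime(2016, 2, 25, 11, 30)},
    "test-host-02": {"connected": datetime(2016, 2, 25, 9, 0),
                     "alerted":   None},   # never detected
}

def passed(result: dict, window: timedelta = timedelta(hours=24)) -> bool:
    """True if an alert was raised within the allowed detection window."""
    return (result["alerted"] is not None
            and result["alerted"] - result["connected"] <= window)

for host, result in test_results.items():
    print(host, "PASS" if passed(result) else "FAIL")
```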

Anyway, despite my criticisms, I recommend checking out the CIS critical controls for cyber defense. Well worth contemplating.