23 March 2016

Another vendor survey critique

I've just been perusing another vendor-sponsored survey report - specifically the 2016 Cybersecurity Confidence Report from Barkly, a security software company.

As is typical of marketing collateral, the 12-page report is strong on graphics but short on hard data.  In particular, there is no equivalent of the 'materials and methods' section of a scientific paper, hence we don't know how the survey was conducted.  They claim to have surveyed 350 IT pros, for instance, but don't say how they were selected.  Were they customers and sales prospects, I wonder?  Visitors to the Barkly stand at a trade show perhaps?  Random respondents keen to pick up a freebie of some sort for answering a few inane questions?  An online poll maybe?

The survey questions are equally vague.  Under the heading "What did we ask them", the report lists:
  • Biggest concerns [presumably in relation to cybersecurity, whatever that means];
  • Confidence in current solutions, metrics, and employees [which appears to mean confidence in current cybersecurity products, in the return on investment for those products, and in (other?) employees.  'Confidence' is a highly subjective measure.  Confidence in comparison to what?  What is the scale?];
  • Number of breaches suffered in 2015 [was 'breach' defined?  A third of respondents declined to answer this, and it's unclear why they were even asked];
  • Time spent on security [presumably sheer guesswork here];
  • Top priorities [in relation to cybersecurity, I guess];
  • Biggest downsides to security solutions [aside from the name!  The report notes 4 options here: slows down the system, too expensive, too many updates, or requires too much headcount to manage.  There are many more possibilities, but we don't know whether respondents were given free rein, offered a "something else" option, or required to select from or rank (at least?) the 4 options provided by Barkly - conceivably selected on the basis of being strengths for their products, judging by their strapline at the end: "At Barkly, we believe security shouldn’t be difficult to use or understand. That’s why we’re building strong endpoint protection that’s fast, affordable, and easy to use"].
Regarding confidence, the report states:
"The majority of the respondents we surveyed struggle to determine the direct effect solutions have on their organization’s security posture, and how that effect translates into measurable return on investment (ROI).  The fact that a third of respondents did not have the ability to tell whether their company had been breached in the past year suggests the lack of visibility isn’t confined to ROI.  Many companies still don’t have proper insight into what’s happening in their organization from a security perspective.  Therefore, they can’t be sure whether the solutions they’re paying for are working or not."
While I'm unsure how they reached that conclusion from the survey, it is an interesting perspective and, of course, a significant challenge for any company trying to sell 'security solutions'.  I suspect they might have got better answers from execs and managers than from lower-level IT pros, since the former typically need to justify budgets, investments and other expenditure, while the latter have little say in the matter.  The report doesn't say so, however.


Elsewhere the report does attempt to contrast responses from IT pros (two-thirds of respondents, about 230 people) against responses from IT executives and managers (the remaining one-third, about 120) using the awkwardly-arranged graphic above.  The associated text states:
"When our survey results came in, we quickly noticed a striking difference in attitudes among IT professionals in non-management positions and their counterparts in executive roles.  These two groups responded differently to nearly every question we asked, from time spent on security to the most problematic effect of a data breach.  Stepping back and looking at the survey as a whole, one particular theme emerged: When it comes to security, executives are much more confident than their IT teams."
Really?  Execs are "much more confident"?  There is maybe a little difference between the two sets of bars, but would you call it 'much' or 'striking'?  Is it statistically significant, and to what confidence level?  Again we're left guessing.
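Checking whether such a gap is 'striking' would be straightforward if the raw counts were published - which they are not.  Purely by way of illustration (the proportions below are hypothetical, not from the report), a standard two-proportion z-test shows how quickly an apparently healthy gap evaporates at these sample sizes:

```python
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """Two-proportion z-test: is the difference between two groups'
    'confident' rates statistically significant?"""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)                # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se                               # z statistic

# Hypothetical: 60% of ~120 execs vs 50% of ~230 IT pros report confidence
z = two_proportion_z(72, 120, 115, 230)
print(round(z, 2), abs(z) > 1.96)   # → 1.78 False
```

With roughly these group sizes, even a 10-percentage-point gap falls short of significance at the 95% confidence level (|z| < 1.96) - which is exactly why the raw numbers matter.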

Conclusion

What do you make of the report?  Personally, I'm too cynical to take much from it.  It leaves far too much unsaid, and what it does say is questionable. Nevertheless, I would not be surprised to see the information being quoted or used out of context - and so the misinformation game continues.

On a more positive note, the survey has provided us with another case study and further examples of what-not-to-do.

19 March 2016

How effective are our security policies?

On the ISO27k Forum today, someone asked us (in not so many words) how to determine or prove that the organization's information security policies are effective. Good question!

As a consultant working with lots of organizations over many years, I've noticed that the quality of their information security policies is generally indicative of the maturity and quality of their approach to information security as a whole. In metrics terms, it is a security indicator.

At one extreme, an organization with rotten policies is very unlikely to be much good at other aspects of information security - but what exactly do I mean by 'rotten policies'? I was thinking of policies that are badly written, stuffed with acronyms, gobbledegook and pompous or overbearing pseudo-legal language, with gaping holes regarding current information risks and security controls, internal inconsistencies, out-of-date content and so on ... but there's more to it than their inherent quality, since policies per se aren't self-contained controls: they need to be used, which in practice involves a bunch of other activities.

At the other extreme, what would constitute excellent security policies? Again, it's not just a matter of how glossy they are. Here are some of the key criteria that I would say are indicative of effective policies:
  • The policies truly reflect management’s intent: management understands, supports and endorses/mandates them, and (for bonus points!) managers overtly comply with and use them personally (they walk-the-talk);
  • They also reflect current information risks and security requirements, compliance obligations, current and emerging issues etc. (e.g. cloud, BYOD, IoT and ransomware for four very topical issues);
  • They cover all relevant aspects/topics without significant gaps or overlaps (especially no stark conflicts);
  • They are widely available and read … implying also that they are well-written, professional in appearance, readable and user-friendly;
  • People refer to them frequently (including cross-references from other policies, procedures etc., ideally not just in the information risk and security realm);
  • They are an integral part of security management, operational procedures etc.;
  • They are used in and supported by a wide spectrum of information security-related training and awareness activities;
  • Policy compliance is appropriately enforced and reinforced, and is generally strong;
  • They are proactively maintained as a suite, adapting responsively as things inevitably change;
  • Users (managers, staff, specialists, auditors and other stakeholders) value and appreciate them, speak highly of them etc.
As I'm about to conduct an ISO27k gap analysis for a client, I'll shortly be turning those criteria into a maturity metric of the type shown in appendix H of PRAGMATIC Security Metrics.  The approach involves documenting a range of scoring norms for a number of relevant criteria, developing a table to use as a combined checklist and measurement tool. Taking just the first bullet point above, for instance, I would turn that into 4 scoring norms roughly as follows:
  • 100% point: "The policies truly reflect management’s intent: management fully understands, supports and endorses/mandates them, managers overtly comply with and use them personally, and insist on full compliance";
  • 67% point: "Managers formally mandate the policies but there are precious few signs of their genuine support for them: they occasionally bend or flout the rules and are sometimes reluctant to enforce them";
  • 33% point: "Managers pay lip-service to the policies, sometimes perceiving them to be irrelevant and inapplicable to them personally and occasionally also their business units/departments, with compliance being essentially optional";
  • 0% point: "Managers openly disrespect and ignore the policies. They tolerate and perhaps actively encourage noncompliance with comments along the lines of 'We have a business to run!'"
During the gap analysis, I'll systematically gather and review relevant evidence, assessing the client against the predefined norms row-by-row to come up with scores based partly on my subjective assessment, partly on the objective facts before me. The row and aggregate scores will be part of my closing presentation and report to management, along with recommendations where the scores are patently inadequate (meaning well below 50%) or where there are obvious cost-effective opportunities for security improvements (low-hanging fruit). What's more, I'll probably leave the client with the scoring table, enabling them to repeat the exercise at some future point e.g. shortly before their certification audit is due and perhaps annually thereafter, hopefully demonstrating their steady progress towards maturity.
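The mechanics of such a scoring table are simple enough to sketch. Assuming, purely for illustration, that each criterion row is scored at one of the four predefined norm points (in practice an assessor might well interpolate between them), the aggregate is just the average of the rows:

```python
# Minimal sketch of a maturity scoring table; criterion names and
# scores below are hypothetical, for illustration only - the real
# table would carry the full scoring norms for each row.
NORM_POINTS = (0, 33, 67, 100)  # the predefined scoring norms, in percent

def aggregate_score(row_scores):
    """Average the per-row scores, checking each sits on a norm point."""
    for criterion, score in row_scores.items():
        if score not in NORM_POINTS:
            raise ValueError(f"{criterion}: {score} is not a norm point")
    return sum(row_scores.values()) / len(row_scores)

rows = {
    "Management intent":         67,
    "Current risks reflected":   33,
    "Coverage without gaps":     67,
    "Availability/readability": 100,
}
overall = aggregate_score(rows)
print(f"Overall maturity: {overall:.1f}%")  # (67+33+67+100)/4 = 66.75
```

A weighted average would work just as well if some criteria (management intent, say) are deemed to matter more than others; the point is that the table doubles as checklist and measurement tool.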

Regards,
Gary