24 May 2016

Fascinating insight from a graph

Long-time/long-suffering readers of this blog will know that I am distinctly cynical, if not scathing, about published surveys and studies in the information security realm, most of which exhibit substantial biases, severe methodological flaws and statistical 'issues'. Most are, to be blunt, unscientific, worthless junk. Worse still, many, I am convinced, are conscious and deliberate attempts to mislead us: marketing collateral, fluff and nonsense designed and intended to coerce us into believing conjecture, rather than genuine attempts to gather and impart actual facts that we can interpret for ourselves.

Integrity is as rare as rocking-horse poo in this domain. 

Well imagine my surprise today to come across a well-written report on an excellent, scientifically-designed and performed study - viz "The accountability gap: cybersecurity & building a culture of responsibility", a study sponsored by Tanium Inc. and Nasdaq Inc. and conducted by a research team from Goldsmiths - an historic institution originally founded in the nineteenth century as the Technical and Recreative Institute for the Worshipful Company of Goldsmiths, one of the most powerful of London’s City Livery Companies. The Institute’s mission was ‘the promotion of the individual skill, general knowledge, health and wellbeing of young men and women belonging to the industrial, working and poorer classes’.

"Goldsmiths" (as it is known) is now a college within the University of London, based in Lewisham, a thriving multicultural borough South East of the City, coincidentally not far from where I used to work and live. I think it's fair to equate 'tradition' with 'experience', a wealth of culture, knowledge and expertise that transcends the ages.

I'm not going to attempt to summarize or comment on the entire study here. Instead, I'll restrict my commentary to a single graph, screen-grabbed from the report out of context, hopefully to catch your imagination as it did mine:

[Scatter-graph from the report, plotting organizations' cybersecurity awareness against their readiness]
That scatter-graph clearly demonstrates the relationship between 'awareness' (meaning the level of cybersecurity awareness determined by the study of over 1,500 qualified respondents - mostly CISOs and non-executive directors, plus other senior managers at sizeable UK, US, Japanese, German and Nordic organizations with at least 500 employees) and 'readiness' (essentially, their state of preparedness to repulse and deal with cybersecurity incidents). The relationship is so clear, in fact, that formal statistics such as a correlation coefficient add little.

In simple terms, organizations that are aware are ready and face medium to low risks (of cybersecurity incidents) whereas those that are neither aware nor ready are highly vulnerable.

Even a correlation as strong and convincing as that does not formally prove a cause-effect relationship between the factors, but it certainly supports the possibility of a mechanistic linkage. It doesn't indicate whether cybersecurity awareness leads or lags readiness, for instance, but let's just say that I have my suspicions. In reality, it doesn't particularly matter.
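
To make that point concrete, here's a minimal Python sketch using entirely made-up numbers (nothing from the actual study): a strong Pearson correlation between two sets of scores tells us that they move together, but nothing about which of them, if either, drives the other.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical scores for 100 organizations: readiness is constructed
    # to track awareness plus some noise, purely for illustration.
    awareness = rng.uniform(0, 10, 100)
    readiness = 0.8 * awareness + rng.normal(0.0, 1.0, 100)

    r = np.corrcoef(awareness, readiness)[0, 1]  # Pearson's r
    print(f"Pearson correlation: {r:.2f}")       # roughly 0.9 here
    # The same r would emerge whether awareness drove readiness,
    # readiness drove awareness, or some third factor drove both.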

Please download, read and mull over the report. You might learn a thing or two about cybersecurity, and hopefully you'll see what I mean when I contrast the Goldsmiths study with the gutter-tripe we are normally spoon-fed by a large army of marketers, press releases, journalists and social networking sites.

Take a long hard look at the methodology, especially Appendix B, within which is the following frank admission:
"Initial examination of the responses showed that three of the Awareness questions were unsatisfactory statistically. (The three related problems were that they did not make a satisfactory contribution to reliability as measured by Cronbach’s alpha; they did not correlate in the expected direction with the other answers; and in at least one case, there was evidence that it meant diferent things to diferent respondents.) With these three questions removed, the Awareness and Readiness questions showed satisfactory reliability (as measured by Cronbach’s alpha)." 
Cronbach's alpha is a statistical measure of internal consistency, based on the correlations or covariances among responses to a set of related questions. The team used it to identify three questions whose results were inconsistent with the remainder. They then used the test, in part, to exclude or ignore those particular questions, thereby potentially warping the entire study, since they did not (within the report) fully explain why or how far those questions were out of line, other than an oblique comment about differences of interpretation in at least one case. In scientific terms, their exclusion was a crucial decision. Without further information, it raises questions about the method, the data and hence the validity of the study.

On the other hand, the study's authors 'fessed up, explaining the issue and in effect asking us to trust their judgement as the original researchers, immersed in the study and steeped in the traditions of Goldsmiths. The very fact that they openly disclosed this issue immediately sets them apart from most other studies that end up in the general media, as opposed to the peer-reviewed scientific journals where such honest disclosures are de rigueur.
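
For anyone unfamiliar with the measure, here is a short, self-contained Python sketch of how Cronbach's alpha is computed and how a single rogue question drags it down. The response matrix is invented for illustration - it is emphatically not the study's data.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_questions) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                         # number of questions
        item_vars = items.var(axis=0, ddof=1)      # variance of each question
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical 1-to-5 Likert responses: 6 respondents x 4 questions.
    # Question 4 deliberately runs against the others, mimicking an item
    # that 'did not correlate in the expected direction'.
    responses = np.array([
        [4, 5, 4, 2],
        [3, 4, 3, 5],
        [5, 5, 4, 1],
        [2, 3, 2, 4],
        [4, 4, 5, 2],
        [3, 3, 3, 5],
    ])
    print(f"alpha, all questions:  {cronbach_alpha(responses):.2f}")         # negative: the scale is broken
    print(f"alpha, question 4 out: {cronbach_alpha(responses[:, :3]):.2f}")  # ~0.91, respectable

Dropping the rogue question restores alpha to a respectable level, which is presumably the sort of check that led the Goldsmiths team to discard their three problem items - although, as I said, we are left taking the details on trust.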

I'd particularly like to congratulate Drs Chris Brauer, Jennifer Barth and Yael Gerson and the team at Goldsmiths Institute of Management Studies, not just for that insightful graph but for a remarkable yet modest, understated contribution to the field. Long may your rocking horses continue defecating :-)