14 September 2013

Draw your own conclusions

There's as much an art to interpreting metrics and statistics as there is to designing and presenting them. Take this exploded pie for instance:

[Image: figure 2 from the report - an exploded pie chart of responses to a question about whether information security inhibits or slows down important business initiatives, with slices for 'All of the time', 'Most of the time', 'Some of the time' (54%) and 'Rarely' (10%).]
I plucked the pie chart image from a survey conducted by Forrester on behalf of Blue Coat - in other words, Blue Coat paid for the survey (we have discussed vendor-sponsored surveys before on this blog). The survey, "Key Drivers, Why CIOs Believe Empowered Users Set The Agenda for Enterprise Security", was promoted by email via IDG Connect.

Before we continue, what conclusions do you draw from the figure above? I appreciate I have taken it out of the context of the report, but take another look at the graphic. Imagine you are a busy business manager briefly pondering a graphic similar to this, whether in a commercial survey, an in-flight magazine, or an internal corporate report from Information Security. What does it say to you? What's your impression?

I spy with my beady eye that the largest slice was for the response 'some of the time', accounting for more than half of the responses. If I mentally add that 54% proportion to the 10% 'rarely' slice, those two responses together account for a little under two thirds of the responses - a clear majority as far as I'm concerned. Consequently, I would conclude that most respondents chose 'rarely' or 'some of the time', in other words most were of the opinion that information security did not inhibit or slow down important business initiatives.

However, the reason I plucked this particular figure from the report is that the legend to the pie, presumably written by a Forrester analyst, implies a markedly different conclusion:

[Image: the analyst's legend beneath the pie, to the effect that security frequently inhibits business initiatives.]
According to the analyst, the key message is that security is an inhibitor, and frequently at that. That headline is diametrically opposed to my reading of the data. Lucky I bothered to check!

Perhaps the analyst arrived at that curious conclusion because the 'all', 'most' and 'some' categories together account for 90% of the responses? Perhaps the conclusion just happened to fit the brief from Blue Coat when they commissioned the survey for their marketing? Hmmm. 

I notice also that the legend omits the words "slow down" and "important" from the question posed - assuming, not unreasonably, that the question shown in quotation marks beneath the pie was exactly as stated in the survey. I'll say no more on that point.

Anyway, digging a little deeper, there's still more insight to glean from figure 2 in the report.

Why do you think the pie has been exploded? The technique is often used to emphasize particular slices. In this case, my eye was drawn to the apparent balance between the (sum of the) three smaller slices and the main slice. Given that the main slice is labeled "Some of the time", it would be easy to infer that the other three together represent "Not some of the time", whereas in fact they are not a single category but opposite ends of the scale (one of the inherent drawbacks of any pie chart). In contrast, re-drawing the same data as a bar chart emphasizes the separation between the ends:

[Image: the same data re-drawn as a bar chart, one bar per response category, clearly separating the two ends of the scale.]
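If you'd like to experiment with this yourself, here is a minimal sketch using Python and matplotlib. The 54% and 10% figures are from the report; the 9%/27% split of the remaining 36% between 'All of the time' and 'Most of the time' is my own illustrative assumption, since I'm not reproducing the report's exact numbers here.

```python
# Sketch: the report's data as an exploded pie and as a bar chart.
# 'Some of the time' (54%) and 'Rarely' (10%) come from the report;
# the 9%/27% split of the remaining 36% between 'All of the time'
# and 'Most of the time' is an assumption for illustration only.
import matplotlib.pyplot as plt

labels = ["All of the time", "Most of the time", "Some of the time", "Rarely"]
values = [9, 27, 54, 10]  # percentages; the first two are assumed

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Exploded pie: every slice pulled slightly away from the center
ax1.pie(values, labels=labels, autopct="%d%%",
        explode=[0.05] * len(values), startangle=90)
ax1.set_title("Exploded pie")

# The same data as bars keeps the two ends of the scale apart
ax2.bar(labels, values)
ax2.set_ylabel("% of respondents")
plt.setp(ax2.get_xticklabels(), rotation=20, ha="right")
ax2.set_title("Same data as a bar chart")

plt.tight_layout()
plt.show()
```

Play with the explode values and the slice order and watch how your own impression of the very same numbers shifts.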
And what about the colors? Color can have a surprisingly important influence on the way we perceive metrics. We often use it to our advantage with RAG (red-amber-green) color coding. Just look at the visual impact if the pie was recolored thus:

[Image: the same pie re-colored, with the 'Some of the time' slice in a glaring red.]
In the same way that exploding the pie emphasized the 'some of the time' slice, I have deliberately called it out with an extreme red. If I could have figured out the HTML to make it flash, I might have done that too! It is patently biased. The original pie coloring was far more even-handed, but it's another potential issue to bear in mind.
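Reproducing that loaded coloring takes just one extra argument - again a sketch, with the same assumed split for the two unquantified slices:

```python
# Sketch: the same pie, deliberately recolored to draw the eye to
# 'Some of the time' - muted neutrals everywhere else, alarm-red there.
import matplotlib.pyplot as plt

labels = ["All of the time", "Most of the time", "Some of the time", "Rarely"]
values = [9, 27, 54, 10]  # the first two are assumed, as before
colors = ["lightgrey", "silver", "red", "gainsboro"]

plt.pie(values, labels=labels, colors=colors, autopct="%d%%",
        explode=[0, 0, 0.1, 0], startangle=90)
plt.title("Same data, loaded coloring")
plt.show()
```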

Aside from the presentation style/format, we can glean yet more information from the original graphic. Two particular aspects caught my eye.

Firstly, the text below the pie appears to indicate that the sample size was 50 - not 50 randomly selected people, but 50 "C-level and VP IT budget decision-makers at North American enterprises", a fairly specific demographic.

Presumably the 50 were already on Forrester's database due to some previous contact, but perhaps all or at least some of them were identified and contacted specifically for this study. Who knows, perhaps some of them were suggested by Blue Coat given that he who pays the piper calls the tune?

Although the caption mentions 50 people, we are not told how many were actually approached or how many responded. Maybe they asked 50 and only a dozen responded? Maybe they asked 1,000 but picked out the 50 for some unstated reason? I very much doubt that Forrester would pull stunts of that nature, but the point remains that this is primarily a piece of marketing, not a scientific research paper. To give them their due, Forrester did incorporate a "Methods" section at the end of the report, stating:
"This Technology Adoption Profile was commissioned by Blue Coat Systems. To create this profile, Forrester leveraged its Forrsights Workforce Employee Survey, Q4 2012, and Forrsights Budgets And Priorities Tracker Survey, Q4 2012. Forrester Consulting supplemented this data with custom survey questions asked of 50 C-level and VP IT decision makers North American enterprises with 1000 employees or more. The auxiliary custom survey was conducted in March 2013."
That's nice to know, but not quite up to the standard of the materials and methods section in a typical paper in any mainstream scientific journal. A survey of just 50 people could be of questionable statistical value (depending on the assurance level required), and we're not told whether the survey was conducted online, through an automated survey tool, by telephone interview, face-to-face interview, or by some other means. Reproducing the actual survey form, with the actual questions posed, in the precise wording and sequence used, complete with any preamble, context or incentives, would have given me a lot more confidence ... but maybe that's just my scientific training showing through, my own bias. 
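To put that sample size in perspective, here's a rough back-of-the-envelope calculation of the sampling error on the headline 54% figure, using the standard normal approximation for a proportion (my own illustration, not anything from the report):

```python
# Rough 95% margin of error for the 54% 'Some of the time' figure
# with the report's stated sample of 50, using the normal
# approximation to the binomial (a common rule of thumb).
import math

p = 0.54   # reported proportion
n = 50     # reported sample size
z = 1.96   # z-score for ~95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"54% +/- {margin:.1%}")  # prints roughly +/- 13.8%
```

In other words, with only 50 respondents the true proportion could plausibly lie anywhere from about 40% to 68% - hardly a precise measurement.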

Secondly, I noticed that there are four categories in the pie chart, corresponding (presumably) to the four possible responses to the survey question. This implies the use of a Likert-like scale with no middle option, forcing respondents to choose either above or below the notional center point of the scale. This was probably a deliberate choice on the part of the survey designer: it is commonly used to discourage people from going for 'the easy option', the middle choice. I wonder what the results might have been if the survey had included a mid-point response, for instance "Very rarely", "Occasionally", "Some of the time", "A lot of the time" and "Almost all the time" ... which reminds me of the tricky issue of phrasing both the question stem and the answers. It would be easy to bias the responses, for example by using "Never" as the lowest response, or indeed "All of the time" as the upper response - which, I note, was evidently one of the choices in Forrester's survey. "All of the time" leaves respondents almost no wiggle-room. It's similar to "Always". It could be argued that it is not even a category but an end point of the notional scale.
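Here's a toy simulation (entirely mine, not the report's) of why that missing midpoint matters: when most respondents hold middling opinions, a five-point scale with a midpoint absorbs them, whereas a four-point scale forces them to pick a side:

```python
# Toy illustration: bin simulated latent opinions (clustered around
# the middle of a 0..1 scale) into a four-point forced-choice scale
# versus a five-point scale that offers a midpoint.
import random
from collections import Counter

random.seed(1)
opinions = [random.triangular(0, 1, 0.5) for _ in range(10_000)]

FOUR = ["Rarely", "Some of the time", "Most of the time", "All of the time"]
FIVE = ["Very rarely", "Occasionally", "Some of the time",
        "A lot of the time", "Almost all the time"]

def categorize(x, scale):
    # Divide the 0..1 range into len(scale) equal-width bins
    return scale[min(int(x * len(scale)), len(scale) - 1)]

print(Counter(categorize(x, FOUR) for x in opinions))
print(Counter(categorize(x, FIVE) for x in opinions))
```

On the five-point scale the midpoint soaks up over a third of the simulated respondents; on the four-point scale those same people are forced into the two middle categories, inflating both.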

Personally, I hate being shoe-horned into boxes. Sometimes I want to indicate a response that is at the upper or lower limit of a category, occasionally right on the boundary between categories, which isn't strictly possible with Likert-like scales. That's why I personally prefer continuous scales, percentage scales in particular. Measuring responses against a percentage scale generates more precise data, in my opinion, with hardly any extra effort on the part of subject or surveyor. The better automated survey tools allow the use of continuous scales, calculating percentage values from responses with no human effort at all (although I have yet to find a tool that allows responses that are below 0% or above 100% for those rare occasions when the respondent deems the scale range too limited!).
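As a sketch of what I mean (the responses below are invented purely for illustration): continuous percentage data supports proper summary statistics, and can always be binned down into Likert-like categories afterwards if need be, whereas the reverse is impossible:

```python
# Invented percentage-scale responses, just for illustration.
import statistics

responses = [12.0, 35.5, 48.0, 51.0, 54.5, 60.0, 71.5, 88.0]  # percent

print(f"mean   = {statistics.mean(responses):.1f}%")
print(f"median = {statistics.median(responses):.1f}%")
print(f"stdev  = {statistics.stdev(responses):.1f}%")

# Bin the same data into a coarse Likert-like view if you need one;
# going the other way (categories back to percentages) loses the detail.
bins = {"Rarely": (0, 25), "Some of the time": (25, 50),
        "Most of the time": (50, 75), "All of the time": (75, 101)}
for label, (lo, hi) in bins.items():
    print(f"{label}: {sum(1 for x in responses if lo <= x < hi)}")
```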

OK, enough already.  The take-home message from this rambling blog piece is to be aware of subtle and not-so-subtle biases in the way metrics are sampled, gathered, analyzed and presented. Bear this post in mind whenever you are giving or receiving statistical information. Better still, consult a trained statistician or survey engineer if the information is important, which it often is. My ramble has barely scratched the surface of an enormous topic.

Kind regards,
Gary Hinson  Gary@isect.com

PS  I have no ax to grind with Forrester or Blue Coat. The survey is worth reading, albeit with a hint of cynicism. I chose it simply as an example, typical of its kind, not a special case called out to embarrass anyone. Feel free to register for your own copy provided you don't mind disclosing your personal information ...
