A Review of the Elearnity 9-Grid

In the first of a series looking at published research findings and statistics in the e-learning space, Peter Phillips takes a look at the Elearnity 9-Grid™

True Grid?

Elearnity is a long-established and well-known firm of learning analysts and consultants, providing independent research in the UK and European Learning and Talent market, “drawing upon the insights and experience of our Corporate Research Network”.

You may have seen their recently published research results in the form of the Elearnity 9-Grid™. Here for example is their Grid for bespoke e-learning vendors.


Follow this link for the full report.

The report originally came out in July 2013, when Epic were in the process of an AIM listing. Here is a quote from their official Admission Document: “Epic was rated first for ‘Potential and Performance’ by Elearnity in July 2013.”

From a glance at the chart above, I suspect that is what most of us would have said if our bespoke e-learning services had appeared in the top right corner.

Elearnity retweeted the Epic statement without any qualification.


The 9-Grid™ is a serious and considered piece of research. But how meaningful is the information in the way it is presented? Does it really support Epic’s statement? Or is the medium obscuring the message? As a statistician I take a particular interest in the use and misuse of statistics, so let’s take a closer look.

“When I use a word,” Humpty Dumpty said, “it means just what I choose it to mean.”

My first query is with the terminology.

Here is another 9-Grid™, this time for LMS vendors. As you can see, Cornerstone is the only vendor in the “Strategic Leader” section, high in both Performance and Potential, while only Cornerstone and NetDimensions are rated as strong performers.


Naturally Cornerstone reported this accolade on their web site and on Twitter. Why wouldn’t they?


What does ‘Performance’ mean to you? Perhaps the build quality, range of features or ease of use of the LMS, or some measure of fitness for purpose?

In Elearnity’s model, ‘Performance’ is defined as a combined measure of:

a) “How often do vendors get shortlisted and how often do they win?” and

b) “Corporate customer advocacy”

My own view is that (a) rests on a rather sweeping assumption: that the amount of new business chased and won by competitive tender is a valid performance measure. Doesn’t this metric reflect business strategy and marketing spend more than performance? Many LMS providers will say only a small proportion of their annual revenue comes from new business won through competitive tender, and some avoid that route altogether.

Point (b), I believe, is more appropriate in principle. There should be some correlation between performance and customer satisfaction, but this is more difficult to measure, and the data are unlikely to be statistically valid.

The Elearnity model combines these two elements to arrive at a simple overall score of 1, 2 or 3 for Performance.
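To see why this kind of compression concerns me as a statistician, here is a toy sketch. The weights and thresholds below are entirely my own invention for illustration – Elearnity do not publish theirs – but any scheme of this shape has the same property: quite different vendor profiles collapse into the same band.

```python
# Hypothetical illustration only: the equal weighting and the band
# thresholds are invented, not Elearnity's actual (unpublished) method.

def performance_band(win_rate: float, advocacy: float) -> int:
    """Collapse two sub-measures (each scaled 0-1) into a single 1-3 band."""
    combined = 0.5 * win_rate + 0.5 * advocacy  # assumed equal weighting
    if combined >= 0.66:
        return 3
    if combined >= 0.33:
        return 2
    return 1

# Two very different vendors land in the same top band:
print(performance_band(0.9, 0.5))  # strong bidder, average advocacy -> 3
print(performance_band(0.5, 0.9))  # modest bidder, strong advocacy -> 3
```

Whatever the real weights, once the combined figure is reduced to three levels, the distinction between a vendor who wins lots of tenders and one whose existing customers love them is lost.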

Only Cornerstone and NetDimensions achieve a 3 in the latest published 9-Grid™. Does that make them the best performing LMSs in Europe, or does it suggest they have the biggest marketing budgets?

Equally, when investing in bespoke e-learning, many clients might be more interested in a provider’s instructional design skills, subject matter expertise, creativity, and delivery on time and within budget than in whether it gets onto lots of shortlists.


The other major axis on the 9-Grid™ is ‘Potential’. Again, what does that mean to you? Future growth prospects of the vendor perhaps, or their product development plans? Or is it the potential of the product or service to deliver added value features?

For bespoke e-learning, the Elearnity measure of ‘Potential’ is a combination of:

  • “the breadth of sector focus such as Finance, Retail, Telecoms etc.”
  • experience “developing for different learning contexts”
  • “sophistication” of learning created.

In my experience, most customers are at least as likely to be interested in whether the vendor understands their business and the subject matter, and may well prefer industry expertise to a broad spread across different industry sectors.

Welcome to the Fifth Dimension

There are another three dimensions to the 9-Grid™.

1) Cost of Ownership – a perfectly good measure, and there is no ambiguity in the Elearnity definition. But are the results over-simplified by being represented as a 1 to 3 score?

2) Presence – a measure of the size of the business, again represented as a 1 to 3 score. Could this be the place to include elements such as the number of tender lists and the spread across sectors?

3) Direction of travel – finally, in a valiant attempt to include five metrics in a two-dimensional Grid, Elearnity provide a measure of “our sense of their likely future direction of travel within the model”, shown by where they position the vendor’s blob within whichever square of the Grid it occupies. So if a product is in the top right corner of a box, that doesn’t actually imply it has more ‘Potential’ and ‘Performance’ than one in the centre or bottom left of the same box. Confused? Me too.

Other Dimensions?

There are likely to be other factors relevant to any buying decision that are not covered by the Grid.

Take financial stability, for example. Kallidus have an excellent record of eight years of consistent profitable growth. By contrast, in a quarterly report to the United States Securities and Exchange Commission (SEC) filed in November 2013, Cornerstone said, “we have a history of losses, and we cannot be certain that we will achieve or sustain profitability,” and that they “expect to continue to incur operating losses as a result of expenses associated with the continued development and expansion of our business.”

Do buyers take this sort of vendor information into their considerations?

Should quality of customer service and support also have more prominence in the model? Is this aspect adequately covered as an element of the 1 to 3 score within ‘Performance’?

With ‘Potential’ and ‘Performance’ as the two major axes, the natural conclusion is that the best place to be is top right of the Grid. That is the conclusion that Epic and Cornerstone came to, along with every vendor I personally have discussed the Grid with.

Opening up the debate

In conclusion, the 9-Grid™ is a well-intentioned attempt to present the results of Elearnity’s research in an eye-catching visual. They set out “to create a model that is focused on being accessible and understandable to Learning and Talent professionals”.

Have they succeeded? What’s your view?

3 responses to “A Review of the Elearnity 9-Grid”

  1. David Wilson says :

    Peter, firstly thanks for picking up on the Elearnity 9-Grid™ research; given this is a series, we appreciate you’ve chosen to talk about it first. I might conjecture that this is because it is really the only specific coverage of the UK/European Learning Technology space, and as such is hopefully the most relevant. Maybe despite your reservations, I would also hope it is the most insightful!

    But to pick up on your comments and criticisms – and we have discussed these offline to some degree – I think you need to more clearly differentiate between what vendors say about themselves and what we say about them. Both the Epic and Cornerstone examples reflect a marketing position they chose to take about themselves, and both were beyond our control. In fact, if you read Cornerstone’s press release, the “only Strategic Leader” line came from a quote from them, not from any analysis of ours.

    You query our terminology, but then rather selectively use it. Our view of performance is based on our corporate research; in particular, which vendors are being considered, which ones are chosen, and what customers say about them afterwards. That seems entirely reasonable. Understanding whether a vendor is successful in competing, winning and delivering is relevant regardless of whether it’s through a competitive or non-competitive selection process.

    There is no overall score of 1, 2 or 3 for Performance or any of the factors in the model. Nowhere in the reports or the explanatory paper does it say this. The assessment of each factor is summarised into one of three bands: Higher, Medium, and Lower, based on an aggregate assessment relative to our view of market norms. We believe this is perfectly sensible, and much better than the linear scales and false sense of precision used by the main IT analysts in their technology assessments.

    Yes, context is critical. The point of any market analysis tool is to help organisations make sensible decisions about which vendors to best consider. We believe the Elearnity 9-Grid™ enables organisations to do that in a more sophisticated and informed way. The Elearnity 9-Grid™ model is more topic-specific, more relevant to the UK/European market AND it is a richer model that better identifies the potential trade-offs between different options. By including factors such as Total Cost of Ownership and Market Presence, and our view of the vendors’ Performance/Potential Trajectory, we believe organisations are able to make better buying decisions, and better manage selected vendors to increase their impact and value. We also back all of this up through the Elearnity 9-Grid™ commentary, and through our Deep Perspectives, the detailed vendor reports.

    We tried many ways over many years to find a way of representing what is fundamentally a multi-dimensional problem in a meaningful way that the market – and in particular HR, Talent and Learning professionals – can relate to. Vendors don’t always like where we’ve positioned them, but privately they will generally accept the analysis is accurate and relevant. At the end of the day, what matters most to us is whether our clients find it useful. The Elearnity 9-Grid™ model might not be perfect, but we’ve had a lot of positive feedback from our corporate clients and from the market as a whole.
    I look forward to discussing this with you further.

    Best Regards,

    David Wilson
    Elearnity’s founder and managing director

    • peterphillips535 says :

      Thanks for such a comprehensive response David.

      I should emphasise again that my blog article was not a criticism of your underlying research, nor is it some sort of vendor sour grapes. Unicorn comes out pretty well in the Grid and you make a number of complimentary comments about our services. I also believe your Deep Perspectives are very valuable and relevant, and I don’t know of anything better in the European eLearning space.

      My critique is of the presentation.

      You will have to explain to me sometime how categorising your results into one of three bands (lower, middle, high, or traffic-light colours) differs from scoring them 1 to 3.
      In either case, I hold to my point that it is simplistic. Why you believe it is “much better” than a more refined scale for performance, cost of ownership or presence is not obvious to me. In fact I think it detracts from the quality of your underlying research.
      Of course the Epic and Cornerstone statements were generated by them for their marketing purposes, but that really just emphasises my point (and the view I expressed to you several months ago): as presented, the Grid is open to genuine misinterpretation. Those press releases were entirely predictable. Beyond your control perhaps, but then why retweet them without qualification?

      • David Wilson says :

        Peter – thanks for the positive comments re our research. We will continually strive to make it better, but the overall feedback has been very positive. We are aiming to deliver a more useful tool as well as something much more relevant to the UK/EMEA market. Hopefully your comments reinforce the value of doing that.

        At the end of the day, the Elearnity 9-Grid™ is a new model and will evolve and be refined, so it’s always good to have constructive feedback. There are also lessons to be learned at our end about managing vendor reaction, and especially, as you say, about retweeting comments of which we would not formally approve.

        Regarding your view of our banding, I don’t see why you think it would be the same as a 1-2-3 rating. This is how you have chosen to describe it, not how it is constructed or how we describe it. Each band is a range and not an absolute score. There is of course variation within the same band, but we have chosen to consider them as materially equivalent at the granularity of the model; in other words, you need more detailed and contextually specific analysis to really tell them apart. We don’t believe this is valid to do within a generic model.

        The alternative (as typified by the IT analysts) is a fixed linear scale with predefined weighting of sub-factors providing an absolute score which implies one solution is always better than another regardless of context. This is nonsense. We chose to use the banding to help clients focus on comparing similar solutions within zones, and bringing in other dimensions such as TCO to enable them to think through what is more appropriate to their context.

        Another important difference is that we can create custom 9-Grid™ models for individual clients, i.e. weighted to their specific context and needs, and comparing incumbent solutions with external options. The 9-Grid™ is also an action tool – just like a 9-box model for talent and succession. Each zone has a range of actions that can be used to manage and increase the value of suppliers within that zone. This is explained in the documentation – but predictably most market coverage of 9-Grid™ has ignored these highly valuable dimensions of the model and just focused on visual positioning.

        I don’t see this as a fault of the model. It’s a bit of bad learning from others’ tools. People like the graphic – great. But we need to also educate them on how to use it effectively as a decision or action tool to accelerate decisions and customer success. If we can achieve this, it will be a richer and more rewarding discussion for all!

        Many thanks for the opportunity to discuss this,
