BLOG: Tin Can & Creating A Continuous Improvement Culture

In the latest of his blogs from last month’s NextGen LMS conference, Unicorn’s Stuart Jones asks: how do we create a continuous improvement culture?

Aaron Silvers is formerly of ADL – the organisation responsible for SCORM and now steward of the Tin Can standard.

Aaron recently set up his own company called Making Better to help organisations improve their learning and development.

Aaron opened with a wonderful quote: “Perfect is the enemy of better” –Voltaire, La Bégueule

This works on a number of levels, not least that if you wait for perfection you will never deliver anything good. And secondly, without the ability to improve, nothing gets better.

Aaron’s entire talk mirrored a lot of the discussions we’ve been having at our Agile South Coast get-togethers over the last few months – that is, get something out there, test it, and improve it, based on Lean Startup and Lean UX principles.

It is interesting to me that the eLearning industry is catching up with thinking from the software development industry – assuming Aaron can make this stick.

Aaron did make some interesting points about using Tin Can statements to capture the analytics for testing eLearning content.

I’m a little conflicted by this.

Tin Can is about the learner’s experiences, and if we are starting lean, as Aaron’s talk introduced, then we should focus on the most important information we can use.

If we capture too much, we generate noise. If we start treating usability data, for example, as Tin Can data, we will generate a lot of noise, most of which won’t be useful to anyone other than a course builder. One could argue that the purpose of Tin Can recording experiences is the output we are interested in – what did they learn, what did they experience?

It seems tenuous to me that inputs such as where the user clicked, or how they clicked, are a good use of Tin Can data. That data is also temporal – it is redundant the next time the course is edited, so the portability of that information becomes irrelevant.
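
To make the distinction concrete, here is a rough sketch of the two kinds of statement (the IRIs, names and scores are made up for illustration): an experience-level statement worth keeping in a learner’s record, and a click-level statement that is really only of interest to the course builder.

    // Output: an experience worth keeping in the learner's record long term.
    const learningStatement = {
      actor: { mbox: "mailto:learner@example.com", name: "A Learner" },
      verb: { id: "http://adlnet.gov/expapi/verbs/completed", display: { "en-US": "completed" } },
      object: {
        id: "http://example.com/courses/fire-safety",
        definition: { name: { "en-US": "Fire Safety" } }
      },
      result: { success: true, score: { scaled: 0.9 } }
    };

    // Input: a click-level usability event. Useful while tuning this version of
    // the course, redundant as soon as the screen is redesigned.
    const clickStatement = {
      actor: { mbox: "mailto:learner@example.com", name: "A Learner" },
      verb: { id: "http://example.com/verbs/clicked", display: { "en-US": "clicked" } },
      object: { id: "http://example.com/courses/fire-safety/screen-3/next-button" },
      timestamp: "2014-03-12T23:41:07Z"
    };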

So Aaron, I have to disagree with these particular points right now, at least until there is a better way of classifying this data without hacking the spec as you suggested to me.

In terms of what Aaron’s clients need from a next gen learning management system, many recurring themes are on show:

• Analytics – using data in a way that drives positive change
• Managing competencies
• Badges and gamification
• Content management
• Mobile friendly and accessible content delivery
• Powerful search

This is a slightly different list from what they want:
• Tailored reporting
• Content authoring
• Suggestions and Recommendations
• Smart Offline Capability
• Bundle content (top down) and playlists (bottom up)
• Web and industry standards

Often the “want list” is phase two, enabling the clients to get to the MVP (minimum viable product) first.

Next Iteration of SCORM with Aaron Silvers – catch up on where xAPI is now, how we got there, and what’s next for xAPI.

Missed the rest of Stuart’s NextGen LMS blogs this week? Don’t worry – you can find them all at the UniChronicles here!

More from Stuart next week.


2 responses to “BLOG: Tin Can & Creating A Continuous Improvement Culture”

  1. Ben Betts says:

    Hi Stuart, thought-provoking blog. I’ve been thinking in this area a lot recently and I’ve come down on Aaron’s side of it. Case in point: we track down to the click level with Curatr & Tin Can. Recently I spotted a few users on a particular instance hitting an MCQ like 40 times in a row and failing. Looking at the timestamps, they were clearly just pointing and clicking. Now, the obvious conclusion is that these learners were just being arses – they weren’t doing it properly. But then I reflected on Aaron’s work – he says:

    “For many years we’ve been making up quizzes to “test” learners assuming we as designers were doing everything right. If they mess up on these tests, we assume the learner screwed up. The shift I believe we must embrace is that learners will do what they do, and we need to start testing if we’re designing well enough to influence the outcomes we seek.”

    This is my fault. I thought my learning experience was fun and engaging, but clearly it wasn’t for these learners. My design was wrong. And I’ve used Tin Can to understand this. It’s now the key driver behind how we build. We can A/B test our designs, understand where learners get hung up in the system, and track the flow of what a ‘successful’ learner does versus someone who is not successful. Marketers would use GA for exactly this; it is such common practice as to be seen as a fundamental element in refining user experiences.
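
    As a minimal sketch of that kind of analysis (hypothetical field names, not Curatr’s actual data model), flagging rapid repeated attempts from statement timestamps might look something like this:

      // Group question attempts per learner and flag anyone re-answering the same
      // question many times with only seconds between attempts.
      interface Attempt {
        actor: string;        // learner identifier
        questionId: string;   // activity id of the MCQ
        timestamp: string;    // ISO 8601, as in an xAPI statement
      }

      function flagRapidGuessers(attempts: Attempt[], maxGapSeconds = 10, minAttempts = 10): string[] {
        const grouped = new Map<string, Attempt[]>();
        for (const a of attempts) {
          const key = `${a.actor}|${a.questionId}`;
          const group = grouped.get(key) ?? [];
          group.push(a);
          grouped.set(key, group);
        }

        const flagged: string[] = [];
        for (const [key, group] of grouped) {
          group.sort((x, y) => Date.parse(x.timestamp) - Date.parse(y.timestamp));
          let rapidGaps = 0;
          for (let i = 1; i < group.length; i++) {
            const gap = (Date.parse(group[i].timestamp) - Date.parse(group[i - 1].timestamp)) / 1000;
            if (gap <= maxGapSeconds) rapidGaps++;
          }
          if (rapidGaps + 1 >= minAttempts) flagged.push(key);
        }
        return flagged;
      }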

    Right now L&D still has these huge projects come along, where you ‘spec’ things out Waterfall style, spend $1m on developing the solution and then ‘drop’ it into the client and walk away. If it goes well you write a case study and enter some awards. If it goes badly, you do what you can to put it right. But most of the time its indifferent – no one really complains, but no one jumps for joy. It’s not surprising; without this level of refinement and agility, you could never get it right first time without being hugely lucky. I think we could take a much more iterative approach, and use very low level tracking to try and piece together what works and what doesn’t. It is just the heart of the agile approach being brought to bear – Michael Allen alludes to this with SAM over ADDIE.

    In terms of specifying things, we’d use Profiles, which are an underused but fundamental part of the spec. See Rustici’s work for more info: https://registry.tincanapi.com/#home/profiles
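
    For illustration only (these IRIs are invented, not taken from an actual registry profile), a profile could give usability statements their own verbs and a category in the statement context, so an LRS report can include or exclude them without touching the learning-experience statements:

      // A click-level statement tagged with a hypothetical usability profile via
      // context.contextActivities.category, so reports can filter it in or out.
      const taggedClickStatement = {
        actor: { mbox: "mailto:learner@example.com" },
        verb: { id: "http://example.com/profiles/usability/verbs/clicked" },
        object: { id: "http://example.com/courses/fire-safety/screen-3/next-button" },
        context: {
          contextActivities: {
            category: [{ id: "http://example.com/profiles/usability/v1" }]
          }
        }
      };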

    • Stuart Jones says:

      I’ve not seen Profiles in action, so maybe this will provide a good way of distinguishing learning and usability tracking data. However, I still see the two as very different use cases – one is inputs (e.g. where I clicked), the other is outputs (e.g. what I learned). Only the latter has any meaningful longevity, and given that Tin Can data is immutable and permanent, usability tracking would just seem to generate a lot of data that I don’t need to keep long term. This is especially true if I am porting my personal learning record to another employer.

      If, as many people are suggesting, Big Data really is coming in this space, and there are some super-clever analytics around it, perhaps this is not such a problem and you could do amazing things with the data – “people who took this course late at night were more likely to click in the wrong place than those who took the course in the morning” – but I am sceptical this is any more than an academic’s pipe dream at the moment 🙂 Reporting is hard enough for people to understand as it is, and correlation does not prove causation anyway.

      Things like Google Analytics already provide really good libraries for tracking usage, like you say – perhaps a mashup between GA and Tin Can would be most useful in the short term, keeping the Tin Can data to just experiences.
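
      A rough sketch of that mashup (the endpoint, credentials and event names are placeholders; it assumes the standard analytics.js ga() command queue and an LRS statements endpoint): usability events go to GA, and only genuine experiences become Tin Can statements.

        // Google Analytics (analytics.js) command queue, assumed to be on the page.
        declare function ga(...args: unknown[]): void;

        // Short-lived usability tracking: disposable after the next redesign.
        function trackUsability(action: string, label: string): void {
          ga("send", "event", "course-ui", action, label);
        }

        // Long-lived, portable learning record: POST a statement to the LRS.
        async function recordExperience(statement: object): Promise<void> {
          await fetch("https://lrs.example.com/xapi/statements", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              "X-Experience-API-Version": "1.0.1",
              "Authorization": "Basic <credentials>" // placeholder
            },
            body: JSON.stringify(statement)
          });
        }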

      But overall, I agree with the principles of Lean UX – build it, test it, fix it. Personally I think any sort of big design up front is wasteful and doesn’t provide the necessary feedback loop to take something from being fairly useful to awesome.

