Shifting the Goal Posts of Adoption
There’s an old economics joke:
After writing the final exam paper for his economics class, the lecturer accidentally e-mails it to all the students. In a panic, he calls the professor and tells her what he’s done. The professor replies calmly “Don’t worry! Keep the questions the same. All you have to do is change the answers.”
The same can be said for technology adoption.
Why then do the results vary so widely? A commonly quoted statistic is that adoption won’t go beyond 10% without change management. Yet when you speak to a technology vendor, they talk about effortless adoption of 80% (or more)!
For the same question, under identical circumstances, we get two completely different answers.
Where did it all go so horribly wrong?
The IT Perspective
I have a sneaking suspicion that this issue has its roots in a time before the words “customer” and “centric” were ever used in the same sentence.
Well before polite debates about user experience and big data, the people who built technology had one main goal – make it work. And what did they mean by “work”? There were checklists for that. Each component of the system (whatever that system was) had to do what it was designed to do. Only then were the parts put together, and collectively tested again. Then someone (that used to be me, a while ago) manually pushed buttons according to a carefully prepared script and checked that the system did what it was supposed to.
And where did all this get us? A system that worked!
Can People Use It?
The next step seemed quite innocent. Not only should it work, but people should be able to use it. And by “able”, we meant that users should be able to log in and press all the buttons (just like I used to do during testing). Even if it worked in the factory, it would be useless if people got an Access Denied message before they even started.
Do People Use It?
Even if the shiny new system works perfectly, if people aren’t using it then what’s the point? This question primarily comes from the perspective of potential cost savings. If a system can be turned off because nobody was ever using it, nobody cares and we save money!
Following this logic, you can quickly and easily test whether a system is valuable. If people are using the system, then they must be getting value from it. Zero-value systems are not used.
This is almost the end of the road for technology adoption. All that’s left to do is count the number of people who log in and do something each month (let’s call them ‘monthly active users’) and divide that by the number of people who can potentially use the system. What you’re left with is a neat score between 0% and 100%.
This way of measuring things has clear advantages – higher means better. You’ll also notice that the best-practice score for adoption is, by definition, “100%”.
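The arithmetic behind that score is simple enough to sketch in a few lines. This is an illustrative snippet, not code from any particular analytics product, and the names are hypothetical:

```python
def adoption_rate(monthly_active_users: int, potential_users: int) -> float:
    """The 'who clicked buttons' adoption score: MAU divided by the
    number of people who could potentially use the system."""
    if potential_users == 0:
        return 0.0  # avoid division by zero for an unlaunched system
    return monthly_active_users / potential_users

# e.g. 800 monthly active users out of 1,000 licensed seats
print(f"{adoption_rate(800, 1000):.0%}")  # prints "80%"
```

The score is bounded between 0% and 100% by construction, which is exactly why it is so tempting as a headline metric.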
The Customer Perspective
When customers talk about adoption, they don’t want to have a conversation about how many people clicked on buttons or opened a mobile app.
What customers want to talk about is how that shiny new system contributed to people being successful in their roles. In other words, how the system created sales opportunities, improved the employee experience, eliminated inefficiency, or reduced cost (or whatever contributes to success in your role).
Having a working system is absolutely a great starting point for contributing to success but it’s nowhere near enough. As with anything new, people have to understand the technology and how they can find value from it. That process is not instantaneous, and often people don’t ever get to a stage where they’re comfortable with it and integrate it into their work practice. They get stuck in a change management vortex, continuing to use the technology but never unlocking value.
With this definition in mind, what does “10%” actually mean? One view is that it’s the proportion of people who have progressed beyond that tipping point and feel the technology is making their lives easier. Take sharing links to documents rather than sending attachments: someone who hasn’t adopted that way of working might only share a link for large documents (to avoid them being blocked), whereas someone who understands the benefit of keeping documents up-to-date might never send an attachment again.
For the data-driven decision makers, this is a much less exact science than the “who clicked buttons” metric. Which is precisely why it’s a better measure – humans are not so black-and-white.
This isn’t a problem with measurement – it’s a matter of fundamentally redefining what we mean by engagement. Sure, 80% looks and feels better than 15%, no matter how it’s measured. But let’s at least measure the thing we care about.
With both these definitions in mind, we can see where the gap occurs in all these adoption statistics. 80% of your people might be using the technology, but only 10% are comfortable with it and would describe it as beneficial to the work they do. For the other 70%, the technology is a burden, or at best more trouble than it’s worth.
With effort directed towards effective communication and a well-executed change management plan, it is possible to create high levels of adoption, even for complex products like Office 365. In any industry. With any type of workforce. But let’s not talk about numbers of 80% without recognising that behind that figure is an immense amount of effort to educate people about technology, how it impacts their work, and how they can be more successful with it.
When people understand how new technology will positively impact THEIR work, you get much better adoption outcomes.