The Top Tasks data helped focus the conversation at Toyota on quality and reliability. In a company obsessed with quality, it became natural to ask: What is a quality digital experience? How do you measure digital quality?
To answer these questions, Toyota went back to its roots. “The whole digital quality framework that we designed was very much in line with our thinking on vehicle quality,” Karen explains. “So I started to have meetings with the vehicle quality divisions in our team to understand: What is your definition of quality? How do you measure quality?”
If any change in metrics is going to have a chance of success, you need to bring the organization with you on the journey. There’s no point being way out in front with great new ideas if the rest of the organization doesn’t understand or buy into them.
Through extensive discussion and internal engagement, a consensus emerged that the old metrics based on traffic volume were insufficient. A whole range of new metrics was applied to measure digital quality and reliability: page loading time, JavaScript errors, spelling mistakes, missing images, broken links, task completion, time-on-task, and more.
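For illustration, several of these checks lend themselves to automation. The sketch below is a minimal, hypothetical example, not Toyota’s actual tooling: using only Python’s standard library, it times how long a page takes to load and flags broken links and missing images. The URL is a placeholder.

```python
import time
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkAndImageParser(HTMLParser):
    """Collects link hrefs and image srcs so they can be checked."""

    def __init__(self):
        super().__init__()
        self.targets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and attrs.get("href"):
            self.targets.append(attrs["href"])
        elif tag == "img" and attrs.get("src"):
            self.targets.append(attrs["src"])


def check_page(url):
    # Page loading time: wall-clock time to fetch the HTML.
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    load_seconds = time.monotonic() - start

    parser = LinkAndImageParser()
    parser.feed(html)

    # Broken links and missing images: any target that errors
    # or returns an HTTP error status.
    broken = []
    for target in parser.targets:
        full = urljoin(url, target)
        try:
            head = urllib.request.Request(full, method="HEAD")
            urllib.request.urlopen(head, timeout=10)
        except (urllib.error.URLError, ValueError):
            broken.append(full)

    return {"load_seconds": load_seconds, "broken": broken}


if __name__ == "__main__":
    # Placeholder URL; point this at a real page to try it.
    report = check_page("https://example.com/")
    print(f"Page load time: {report['load_seconds']:.2f}s")
    print(f"Broken links/images: {len(report['broken'])}")
```

In practice, checks like these would run continuously across a whole site, with the results tracked over time; that is what turns one-off tests into quality metrics.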
Interestingly, one of the things that really made Toyota pay attention to digital quality was the launch of its app. The app became a kind of bridge between the manufacturing product culture and the digital culture; it felt more like a ‘product’ than the website did. More executives began to think about “digital quality product alignment,” Karen explains. “We’re still, of course, building it step by step. I’m not saying we are there yet.” It’s a process that begins with lots of conversations across the entire organization to develop a unified understanding of quality: a quality car, quality service, quality app, quality website.
A key measure of quality is how quickly you can repair a problem once it’s identified. Eight times out of ten, when I identify a problem on a typical website, nothing happens. The search environment may be appalling, for example. The fix often doesn’t require new software, just improvements in metadata, tweaks to the search engine, and the like. Six months later I’ll test the search again and it will be exactly the same. Still appalling. And nobody inside the organization will have noticed. It’s not even that people don’t care. A poor-quality internal search environment just doesn’t seem relevant. People don’t think it has any impact, that it makes any difference one way or another.
Partly, it’s down to how digital is perceived as a series of projects. There was a project to install a new search engine. The new search engine has been installed. Job done. On to the next project. Budgets and teams for Web maintenance and continuous improvement are usually minuscule and often nonexistent.
Quality digital is more maintenance than creation. Yet digital has huge engines of creation and often nonexistent maintenance. And where maintenance and support do exist, they are generally seen as low-level positions with low-level respect. That must change.
Karen Peeters, ‘Discovering digital reliability and quality: the Toyota story’ (podcast)