Information is our vaccine
When it comes to the coronavirus / COVID-19, there is a lot we need to know, including information on:
- Vaccine development
- Social / physical distancing
- Financial support
- End date, new normal, etc. etc.
It’s never been more important for people to have fast and simple access to all the information they need. Until we have a vaccine, information is our vaccine. In democracies, transparency is our vaccine. Even when we have a vaccine, we will still need to provide lots of quality information. We will have to address fake news, antivaxxers, and those dark state actors waging misinformation wars.
The Web is a powerful way to quickly deliver information to a large audience. Yet it is the Web’s strengths that are also its weaknesses. The Web makes it easy and fast to publish, but often the easier and faster it is to publish, the harder it becomes to find the right information. The Web so often sacrifices quality control, structure and organization for speed. The Web has created a culture of speed publishing, where the imperative is to get as much as possible published as quickly as possible.
Organization, structure, editorial decision making, often even basic editing, are neglected. The format, the tool, the app, the chatbot: the latest gizmo and cool thing dominates thinking and decision making. In much of Web management it is more important to launch an app or dashboard, website, video or podcast, than to ask why, what and how. Why do we need this? What is it going to do? How is it going to be organized in a way that is usable? How are we going to measure success? Basic questions. Rarely asked. Rarely answered.
In 25 years working on websites, one problem dominates, year in, year out. A problem nearly nobody wants to address, except in the most trivial of ways. Why? Because it’s not seen as cool, as innovative. Because there are no bonuses for doing it well, no career progression for it. It’s a thankless, really hard job. It’s a vital job. The problem? Confusing menus and links.
Confusing menus and links
Confusing menus and links cause untold problems to people trying to use websites. Yet, in 9 out of 10 Web projects I’ve worked on, the menus and links are thrown together in a haphazard and wholly unprofessional manner. It’s sad how bad we are at organizing content. It’s sad how little management cares about information structure, about classification, about metadata. The Web suffers—we all suffer—because of it.
There are two essential characteristics that make the Web the Web: links and search. Without links and search all we have is digitized print, and a great many websites are still glorified brochures as they digitize print content and print thinking. Knowing how to create and organize links, and how to optimize findability through search, are critical skills that are in incredibly short supply in the majority of organizations I have dealt with over my career.
I have spoken to many people in government and health organizations during this pandemic. One phrase I kept hearing was “panic publishing”. In a pandemic, the last thing we want is to panic, to be chaotic. If we don’t want to panic, we have to plan. The first thing we should plan is what information is required and what is the best way to organize it so that it is easy and fast to find.
The World Health Organization (WHO) supported an initiative to do just that. We set out to discover what sort of information truly matters to the public and professionals when it comes to COVID-19, and how best to organize it. A multidisciplinary, cross-national effort was set in train involving collaboration from health agencies and experts in Ireland, Norway, Canada, the UK, Belgium and New Zealand. We gathered data on searches, support calls, requests and feedback. With representatives from these agencies and individual experts from many other countries, we sifted through this research using the Top Tasks information design framework, whose core purpose is to identify and then organize the information that truly matters to people.
Hundreds of experts and members of the public were involved. Almost 25,000 professionals and members of the public participated in a collaborative design process. It was a continuous-improvement, rapid, iterative refinement effort. We never lost sight of the people who need this information and the language that they use because we were designing with them at every step of the process.
If there’s one thing we learned from the coronavirus disease it’s that it doesn’t see nations, states or borders. It just sees the human family, and only as a human family can we overcome it, only working together can we overcome it, only by collaborating and cooperating can we ever get back to that place we call normal.
If there’s one thing we learned from analyzing data from more than 25,000 people from over 100 countries, it’s that we share a great many of the same top tasks. Everyone wants to know when there’s going to be a vaccine. People everywhere want to know about end dates and potential new outbreaks. These are common tasks. We have so many more things in common than we have things that make us different. We have the same fears, hopes and anxieties. We have the same impulses. We have the same needs for the same type of information.
Our objective was simple but hugely ambitious:
To create a universal classification—an information genome—for COVID-19. One that can be used in any country, by any health body, to organize and present COVID-19 information so that it is comprehensive and easy to find.
Another related objective was to:
Create a universal classification for virus pandemic information, so that in the future we are better prepared, and will have a well organized, well tested classification that allows us to hit the ground running.
A classification is a foundation
I’ve been doing this sort of work for more than 25 years. It still shocks me how amateur the Web industry generally is. How little true professionalism exists. How often Web decisions are driven by ego, vanity, casual conversations over coffee, a few rushed workshops, a quick scan through search analytics. The pressure is always to deliver a visual thing that looks pretty, whether that thing is a website or an app. It’s the pretty thing that matters, not what the thing does. The pretty thing matters because too many still measure success based on how something looks rather than how it works. Success is also measured on how quickly you can launch the pretty thing. It is the launch that becomes the thing.
It’s digital. It has to be fast. We don’t have time to do it right. We only have time to do it wrong. Fast. Fast. Fast. Speed. Speed. Speed. Pretty. Pretty. Pretty. Classification design is seen as too slow, too hard, too messy. It’s not pretty. It’s not design. Real design is making things pretty. And, of course, we now have AI and chatbots. Who needs classification? And anyway, we don’t have time. We have to get something up now because then we can prove that we’re doing something. Because it’s more important to be seen to be doing something fast and ‘innovative’ (anyone for some useless chatbots?) than to actually do something useful.
Not everyone behaves like this. But far too many do. Far too many.
Defining the information environment
One of the first steps towards developing a vaccine and drugs to treat COVID-19 was to map the genome of the virus. It is equally important, if we want to get a great classification, to map the information genome. Otherwise, we get ad hoc, panic-driven—and often ego-driven—publishing. Something is got up quickly, then someone says add this, and someone else says add that, and someone else says add the other, and before you know it, it’s confusing menus and links. Agencies and departments compete against each other for traffic. Even within the same organization, the social media team and the Web team and the app team compete against each other. The most important information—which was usually added at the beginning—is now often the hardest to find because on top of it have been piled the vast volumes of tiny task information.
Before we can design a comprehensive classification we must map the information environment. What is all the information that really matters when it comes to COVID-19? This is by no means an impossible task. There is no real excuse not to do it. It is the foundation of all good information design. Without this work we build on poor foundations and everyone will suffer as the environment grows in an uncontrolled, ad hoc manner.
Identifying the top tasks
Top Tasks is an information design framework that I have been developing over a twenty-year period. Its purpose is to identify what matters most (the top tasks) to people and—equally important—what matters less (the tiny tasks).
When a tiny task goes to sleep at night it dreams of being a top task. Without designing a robust classification you will be flooded by the tiny tasks because they are vastly more numerous than the top tasks. Typically, we find that there are no more than three-to-five core top tasks, and about 10 other quite important tasks, bringing the top tasks family up to about 15. Even in the most complex of environments we rarely find more than 20 in the top tasks family. However, there can be literally hundreds of tiny tasks. Not just that, these tiny tasks are often pet projects of key stakeholders. Tiny tasks, knowing that they are tiny tasks, believe that their route to top taskdom is through publishing as much content as possible. It’s a recipe for chaos, a recipe that a great many websites assiduously follow.
The Top Tasks approach:
- Identify the top tasks.
- Design by giving the top tasks priority, and by downplaying or removing tiny tasks.
- Measure the performance of the top tasks based on people’s ability to quickly find and complete these tasks.
- Continuously improve the performance of the top tasks.
The defining characteristic of Top Tasks is a unique survey question. Once the list of potential top tasks has been selected (typically there are between 50 and 80), a representative sample of people is asked to quickly scan the entire list and vote for no more than five of the tasks that are most important to them. This ‘crazy’ question (‘You can’t give people a list of 80 options and ask them to choose!’) has the vital function of giving you an ordered list, from the most important to the least important. We have carried out Top Tasks surveys in more than 100 countries. Typically, the top 5% of tasks will get as much of the vote as the bottom 50%. This is the linchpin of the approach:
- Focus on what matters.
- Remove focus on what matters much less.
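The survey mechanics are simple enough to sketch in a few lines. The following Python snippet is illustrative only: the ballots and task names are invented, and a real survey presents a randomized shortlist of 50–80 tasks to hundreds of voters.

```python
from collections import Counter

# Each ballot is one participant's selection of up to five tasks.
# These ballots and task names are invented for illustration.
ballots = [
    ["Vaccine", "Symptoms", "Latest news"],
    ["Vaccine", "End date", "Transmission", "Symptoms", "Testing"],
    ["Latest news", "Vaccine", "Travel rules"],
    ["Symptoms", "Vaccine", "End date"],
]

# Tally all votes. The ordered list, from most to least voted,
# is the core output of a Top Tasks survey.
votes = Counter(task for ballot in ballots for task in ballot)

for task, count in votes.most_common():
    print(task, count)
```

With real data, this ordered list is what reveals the steep drop-off from the handful of top tasks to the long tail of tiny tasks.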
Defining scope
Before we could define all the potential COVID-19 tasks, we had to define the scope. To get started you must answer these two questions:
- Who will be asked to vote?
- What is the task environment?
There are three major audiences for COVID-19 information:
- The general public
- Healthcare providers
- Academics, researchers
In a pandemic, all of them are critical, but the dominant audience by far is the general public, and they overwhelmingly access information through their smartphones. When facing a pandemic, a Mobile First design approach becomes even more important.
The second part of deciding the scope is deciding what the environment is:
- Is it just about what COVID-19 information is available on a particular website?
- Is it about all possible information about COVID-19 that is available on the Web in general?
- Is it about all possible information about COVID-19, whether that information currently exists or not, whether it is available on the Web or not?
If you have a choice, always go for the third option:
- Keep the scope as broad as possible.
- Do not limit it to the Web—include physical and digital.
In this way, you have the chance of designing a classification that will be robust and future-proofed because you have thought about the entirety of the environment, you have explored the entirety of the environment, you have drawn tasks from the entirety of the environment.
Here’s how we ultimately introduced the question:
In relation to the coronavirus disease (COVID-19) pandemic, select up to 5 things that are most important to you.
Be prepared to spend time deciding on the target audience and scope, because the decisions you make here will frame how you move forward. Involve as many stakeholders as possible. This is key. This is crucial. If you start making decisions about what the scope and tasks are from one department’s perspective, the result will be skewed according to that department’s perspective. As a result, it will be difficult to get other stakeholders to accept and act on the results.
We were incredibly lucky in that we had active engagement from the governments of Ireland, Canada, UK, Norway, New Zealand and Belgium, as well as contributions from hundreds of Web design experts and individual health experts. The broader the group you can pull together the messier it will be, but have no doubt, you will develop a better and more robust task list; you will map the information genome more comprehensively.
Gathering tasks
What is a task? A task is whatever your customer wants to do. Here are some examples of COVID-19 tasks:
- Symptoms
- Testing
- Virus mutation
- Original outbreak source, patient zero
When you’re designing a task for digital you want to strip away all that is superfluous and get to the essence of what the task is about in the fewest possible words. You don’t need “check symptoms”. Be brutal with words. Strip the language to the barest, most essential core. “Symptoms”. That’s all you need.
There are two steps involved in building your task list:
- Gathering potential tasks.
- Refining and reaching a final list of tasks.
80% of the tasks that you gather should come from the people: from doctors, nurses, academics, students, mothers, fathers. 20% should come from the organization: from executives, strategies, etc. You must cover both sources but always make sure that the tasks—and the language of the tasks—are dominated by the world of the people who will want to complete these tasks.
The following are sources of tasks from the world of the people who want to complete the tasks:
- Search data, both from external search engines and from your own site search engine.
- Website/app data analysis: most popular sections, pages, downloads, etc.
- Surveys and research on people’s needs going back two to three years.
- Support or other sorts of help requests.
- Social media, blogs, communities, traditional media.
- Other health organizations dealing with COVID-19.
Remember, this is all about being broad and thinking beyond digital. You’re trying to understand what information a mother in Ireland wants when it comes to COVID-19, what information a researcher wants, what a nurse wants, what a doctor in Brazil wants. It’s not about what content you have right now. It is about what they want right now, or what they might want as the pandemic progresses. You’re trying to get ahead of the virus when it comes to information needs. If you don’t want panic publishing, you have to plan, think ahead, think big, think broad, think open.
Editing tasks
This is the most difficult part of the entire Top Tasks approach. It is about getting from the initial longlist of tasks that have been collected, to the final shortlist of 50–80 tasks that you will ask people to vote on. Get this right and you will have solid foundations. Get it wrong and you’re building a digital sandcastle.
The more people you involve in the shortlisting process—and the more diverse they are—the greater the chances of long-term success. Keeping the shortlisting group small will get you the quickest initial result. However, long-term you are likely to fail.
Below is a sample of the tasks we had to deal with:
- Asymptomatic
- At-risk groups
- Avoiding infection
- Caring for someone
- Cocooning
- Community support services
- Coping with
- Coping with coronavirus
- Coronavirus waste disposal
- Home support services
- How long does it live?
- Looking after someone
- Precautions to take
- Prevention
- Quarantine, lockdown
- Quarantining
- Symptoms
- Types of symptoms to contact doctor / hospital
- Virus survival
In editing the longlist, key things you want to look out for are duplicates and overlaps. For example, is “Types of symptoms to contact doctor / hospital” a sub-task of “Symptoms”? Is “Prevention” an overlap with “Avoiding infection”? Is “Looking after someone” an overlap of “Caring for someone”? Some overlaps are obvious. Some are more difficult to identify and agree on. It’s complex, hard work that needs three or four weeks to do well. (If you’d like to learn more about the shortlisting process you can read my book, Top Tasks.)
Top Tasks results
The essence of the Top Tasks survey is to get people to vote on a randomly presented task shortlist. They are asked to select up to five of the tasks that are most important to them. Typically, you need about 400 voters to get statistically reliable results. In the COVID-19 survey for WHO we had almost 3,000 people voting from over 100 countries.
The top tasks that got the first 25% of the vote were:
- Vaccine (development, availability, safety)
- Latest news, latest research (alerts, directives, updates)
- Transmission, spread, epidemiology
- Immunity, antibody testing (criteria, availability, accuracy)
- WHO guidelines, standards, decisions
- Symptoms, signs
- Research papers, studies
- End date, new normal, safe again
- Virus survival / viability / persistence on surfaces, in air
We have done 500–600 Top Tasks surveys over the last 20 years. On average, we expect to find three-to-five top tasks in the first 25% of the vote. So, while we did discover top tasks for COVID-19, it is a more complex landscape than we would typically find.
In unusual circumstances we discover what we call a super task in a Top Tasks survey. An example of a super task is illustrated in the following chart, where the blue line represents the super task that has emerged in this particular survey, and the red line represents the average of all surveys.
In a situation where you have a super task, the design approach is best represented by the following example.
“Book a flight” is the super task and it dominates the design. With the COVID-19 Top Tasks results, we had the opposite of a super task occurrence. The blue line in the following chart represents the COVID-19 voting pattern. The black dotted line represents the average of all surveys. It is clear we have a much broader group of top tasks.
If we follow the evidence—if we follow the data—then we need to create a design like the one the European Commission chose after they discovered that they too had a large number of top tasks.
We now have the parameters of our design approach. However, the above examples have missed something important. They illustrate what the design looks like on a laptop or desktop. 70% of the 30 million people who visit WHO websites on a weekly basis are using smartphones and a great many of them are coming from countries with poor Internet access. If ever there was a scenario to take a Mobile First design approach, then WHO COVID-19 information is it. Here’s how the Irish health agency presents its COVID-19 information:
Brutally simple, clear and to the point. You can scroll down the entire page and you won’t find a single image: no unnecessary hero shots. What you will find is a series of headings and text links focused on the top tasks. It is the same approach as the UK, Canadian and New Zealand health agencies have taken, and the same no-frills approach as the US CDC COVID-19 pages. It’s an emergency, after all. Clear, precise, stripped-down, focused information with the purpose of helping you complete your task.
So, if this is how to do it in an emergency, why do we have to do it differently when it’s not an emergency? Clean, simple design delivers value in every environment.
While there may have been a lot of COVID-19 top tasks, there was tremendous consistency about what they were across demographics and other segments. The top tasks are the same:
- Whether people frequently or infrequently look for COVID-19 information
- Whether they are young or old
- Whether they are male or female
- Whether someone was voting in a personal capacity or whether they were concerned primarily about public health
- Whether someone considers themselves an essential worker or not
- Whether they live alone or with other people
- Whether they have children at home or not
- Whether they are a carer or not
- Whether they are employed or not
- Whether they have a disability or not
- Whether they are self-isolating or not.
What does this mean? It means we don’t need an audience-based navigation system at the top level of the classification. It means that we can have a single classification based on top tasks, and that, where appropriate at a deeper level of the classification, we can segment people based on their particular role.
Audience-based navigation is one of the most confusing classifications you can implement on the Web. I have been doing Web classification work for 25 years and I can’t remember a single successful implementation of an audience-based classification. I could write a book on the disasters. The only time you should use an audience-based navigation is when the top tasks of different audiences are totally and absolutely separate.
The Top Tasks approach also allows us to dig deeper into the connections between one task and other tasks, and between tasks and demographics and segments. We can discover such things as:
- Those interested in vaccines are also very interested in preventative drugs. They tend to be male healthcare providers with the general public as their primary concern.
- Those interested in the latest news are very concerned about fake news and are more likely to be from the media.
- Those interested in new outbreaks, second waves are also very interested in knowing about an end date, as well as when a person is no longer infectious. They tend to be employed and female.
- Those interested in mental health care a lot about money issues, financial support, government strategy, end date. They tend to be female and focused on family.
Sorting the tasks into classes
Design with people. Not for people. Before we are German, Irish or Canadian, we are human. And humans think the same way. Dream the same way. Organize the same way. There are mental maps out there in humanity. We just need to discover them. With the Web, we have the platform and the tools to understand these mental maps.
There were a total of 79 tasks in the final shortlist.
- The first 25% of the vote went to 9 tasks.
- The next 25% (25%–50% of the vote) went to 13 tasks.
- The next 25% (50%–75%) went to 19 tasks.
- The final 25% of the vote went to 38 tasks.
- In other words, the top 9 tasks got as much of the vote as the bottom 38.
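The quartile arithmetic can be checked mechanically: walk down the ranked vote counts and record how many tasks it takes to absorb each successive 25% of the vote. A minimal sketch, with invented vote counts:

```python
def tasks_per_quartile(sorted_votes):
    """Given per-task vote counts sorted in descending order, return how
    many tasks absorb each successive 25% of the total vote."""
    total = sum(sorted_votes)
    result, cum, seen = [], 0, 0
    for v in sorted_votes:
        cum += v
        seen += 1
        # A single task may push the running total past one or more
        # quartile boundaries (a super task); record each crossing.
        while len(result) < 3 and cum >= (len(result) + 1) * total / 4:
            result.append(seen - sum(result))
    result.append(len(sorted_votes) - sum(result))
    return result

# A steep, top-heavy distribution: one task per quartile at the top,
# with a long tail absorbing the final quartile.
print(tasks_per_quartile([40, 20, 15, 10, 5, 5, 3, 2]))  # → [1, 1, 1, 5]
```

Plotting such a distribution is what makes the top-heavy shape of a Top Tasks vote visible at a glance.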
We are designing a classification for the entire COVID-19 information genome. However, we will orient that classification towards the top tasks. The top tasks will be easiest to find and at the top level of the classification. The tiny tasks will be at lower levels of the classification.
We have taken the first step in classification design by identifying the top tasks and the tiny tasks. The second step is to go back out to the people (families, healthcare providers, academics) and ask them to sort and organize the top tasks.
We decided to ask them to sort the first 75% of the vote: the top 41 tasks. We generally consider the top task family as the first 50% of the vote, but in this instance, because we have such a complex environment, we decided to be more comprehensive. Remember, we are not forgetting about the bottom 38 tasks. They will fit under the classification that emerges.
Using Optimal Workshop, people were presented with the following exercise.
On the left of the screen were the 41 tasks. People were asked to drag them into the center of the screen and sort them into classes. 840 people did this, which is the largest and most diverse group of people I have ever had for a sort. (Typically, if we get 50 people sorting we see good solid patterns emerge.)
Classification patterns quickly began to emerge. There are always patterns. One of the most important gifts that a digital designer is given is the ability to peer into the mental maps of people. Digital tools such as Optimal Workshop allow us to do a “PET scan” of the human brain as it classifies things. There follows one of these PET scans—what is called a dendrogram.
Anything above 50% shows a strong connection. For example, at the very bottom of the dendrogram we see that “Transmission, spread, epidemiology” is strongly linked in people’s minds with “Virus survival / viability / persistence on surfaces”. For a digital designer, we are looking at a treasure trove of insight.
Remember how I wrote earlier that healthcare providers, academics and the general public all voted for the same top tasks? Well, we also found tremendous consistency between the mental models of how healthcare providers, academics and the general public classify these tasks. They organized things in the same way.
For 25 years, working in 40 or more countries, I have listened again and again to organizations tell me how different they are and how different each and every segment of their audience is. That is always the excuse for complexity, but it rarely stands up to the harsh light of good data. What I have learned is that when executives are telling me about the specialness of their audience they are really telling me about the specialness of themselves and their department and why they must do things their way and classify things their way and publish so much content their way. The sad reality is that most times it’s not about the audience, it’s about the executive. That must change.
The following similarity matrix includes data from everyone who sorted. It shows that there is an emerging class that includes symptoms and diagnosis. Transmission-type tasks may be part of this class, though there is a weaker linkage.
- 83% of people link “Symptoms, signs” with “Compare symptoms …”. That obviously makes sense.
- 72% link “Diagnosis” with “Symptoms, signs”.
- 48% link “Transmission, spread, epidemiology” with “Symptoms, signs” and only 42% link it with “Diagnosis”.
- 69% of people link “Incubation period” with “Diagnosis” and 73% link it with “Symptoms, signs”.
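These percentages come from simple pair counting: for each pair of tasks, the share of sorters who placed them in the same group. A minimal sketch, with three invented sorts over task names drawn from the survey:

```python
from collections import Counter
from itertools import combinations

# Each participant's sort is a list of groups; each group is a set of tasks.
# These three sorts are invented for illustration.
sorts = [
    [{"Symptoms, signs", "Diagnosis", "Incubation period"}, {"Vaccine"}],
    [{"Symptoms, signs", "Diagnosis"}, {"Incubation period", "Vaccine"}],
    [{"Symptoms, signs", "Diagnosis", "Incubation period", "Vaccine"}],
]

# Count how often each pair of tasks lands in the same group.
pair_counts = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

def similarity(a, b):
    """Percentage of sorters who placed tasks a and b in the same group."""
    return 100 * pair_counts[tuple(sorted((a, b)))] / len(sorts)

print(similarity("Symptoms, signs", "Diagnosis"))  # → 100.0
```

With hundreds of sorters, these pairwise percentages are exactly what a similarity matrix displays, and what the dendrogram clusters.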
The more you go above 50%, the stronger the link. Above 40% is not bad. The question is: Do we have one class here: “Symptoms, Diagnosis, Spread”? Or do we have two? We decided that we would test the hypothesis of one class because ideally we want to keep the top-level classification as small as possible.
How small? In a highly complex environment such as COVID-19 we could have as many as 15–16 classes. However, the more you can keep your top-level classification under 10, the better.
What we also discovered was that healthcare providers, academics and the general public were linking tasks together into the same classes. There follows a similarity matrix for healthcare providers.
We see the same basic patterns. Diagnosis is strongly linked with symptoms, as it is with incubation period. Transmission is reasonably well linked with diagnosis and symptoms.
Here’s the similarity matrix for academics:
We see similar patterns of linking symptoms, diagnosis, incubation period, transmission, etc.
Here’s the similarity matrix for the general public:
It’s the same basic patterns. Not only do people from all walks of life have the same tasks, they also link these tasks into the same classes. They have similar mental models. This gives us even more evidence that we can create a task-based classification and that this will work for everybody.
We also found that while “Myths, fake news” was grouped with “News”, the link was not strong. There were issues with “Infection hotspots” and “At-risk, vulnerable”; they didn’t clearly fit anywhere. These are what we call “orphan” tasks.
There will always be orphan tasks and there will never be a perfect classification. How you deal with exceptions, orphans and tiny tasks is really important. In too many classifications, the tiny task exceptions have far too much influence on the classification design because tiny tasks are often pet projects of senior managers.
Testing the hypothetical classification
From the sorting of tasks we get a hypothesis that must be tested. There are a couple of reasons for this. Firstly, you need to test how you have named the classes. Over years of doing this we have found that while people are geniuses at linking tasks, they are often hopeless at naming the classes. We got suggested class names like: “Other, A, B, C, D”. We need to use judgment to come up with the class names and then test.
There are grey areas between where one class ends and where another begins. Do we really have one class here or should we have two? We need to test, test, test. We will have a hypothesis about where we expect people to go and then we will ask them to select which class they would click on to find the answer.
Based on the analysis of the sort, here’s the initial classification hypothesis:
- Symptoms, Diagnosis, Spread
- Mental & Physical Wellbeing
- WHO, Government Guidance, Education, Training
- Research, Statistics
- Vaccine, Immunity, Treatment
- Avoiding Infection
- News
- End Date, New Outbreaks
You test a classification through use. Classification success is about sending people in the right direction. If someone wants to know about COVID-19 symptoms, the job of the classification is to point them in the right direction for symptom information. Ultimate success, however, is when they read the symptom information and can correctly decide whether they have symptoms or not. If they do have symptoms they then need to know whether they can get tested, where, and if there is a cost, how much it is. This is how we must judge success. By the outcome of the completion of the task.
Now that we have the hypothetical classification, we must create a set of task instructions. Here are some examples:
- When is a vaccine likely to be available?
- Find the latest updates for COVID-19.
- Can you get infected by COVID-19 through the air?
- If someone has had COVID-19 are they safe from getting it again?
We make a hypothesis about what class we expect people to choose. For the instruction “Can you get infected by COVID-19 through the air?” we expect people to choose the class “Symptoms, Diagnosis, Spread”. If a significant number don’t choose that class, we need to understand why. We may need to change the classification. That’s why we must test.
In order to properly test a classification you need to select between 15 and 35 task instructions. It’s very important to test at least 15 tasks to see the interactions between tasks and classes. By having a wide range, you may observe, for example, that while a particular class is working well for a certain type of task, it is not working well at all for another type.
The number you select will be guided by:
- Top task importance: The top 50% of the tasks should have at least one instruction each. For particularly important tasks, you might have two instructions.
- Orphan tasks: As you created the classification you may have found tasks that were particularly hard to classify. These can be good candidates for task instructions.
- Avoiding too much tiny task influence: Remember, as you add instructions for tiny tasks, you are likely influencing the core design of the classification. The more tiny task instructions you add, the more the design is likely to lean towards tiny tasks.
With the WHO COVID-19 classification testing, we chose 31 instructions. In writing a task instruction, keep the following in mind.
- Keep them relatively simple. You’re testing the first click, not the entire journey for a task.
- Avoid words in the task instructions that also appear in the hypothetical classification. However, you don’t want to deliberately choose language that’s obscure. For example, it’s hard not to call a vaccine a vaccine. By definition, the classification should make it super easy to find the top tasks.
- Keep the language neutral. Use “someone” instead of he, she or you.
- Keep instructions under 15 words and ideally under 10 words.
The target is to achieve an 80% success rate, ideally approaching 90%. It is impossible to achieve a 100% success rate for a complex classification. For example, some people, no matter what the task, will choose a class like “News”, but you can’t make “News” the correct class for everything.
Before calculating the success rate you need to clean up the data. We have found that there will always be a percentage of participants who abandon the classification exercise, race through it by clicking blindly, or simply try to skew the results by making random selections. For this reason, we have instituted a rule that participants who achieve a success rate of less than 30% are removed from the data.
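The cleanup rule is simple to apply in code. A hedged sketch, where the data structure and participant ids are invented for illustration:

```python
# Drop participants whose overall success rate falls below the 30% threshold
# described above (abandoners, blind clickers, random selectors).

def clean_participants(results, threshold=0.30):
    """Keep only participants at or above the success-rate threshold.

    `results` maps a participant id to a list of booleans, one per task
    instruction, True where the participant chose a correct class.
    """
    cleaned = {}
    for pid, answers in results.items():
        rate = sum(answers) / len(answers)
        if rate >= threshold:
            cleaned[pid] = answers
    return cleaned

# Illustrative raw data: p2 scores 25% and is removed.
raw = {
    "p1": [True, True, False, True],    # 75% success, kept
    "p2": [False, False, False, True],  # 25% success, removed
    "p3": [True, False, True, True],    # 75% success, kept
}
print(sorted(clean_participants(raw)))  # prints ['p1', 'p3']
```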
One of the ways of understanding how well your classification is working is by calculating a “magnetism score”, which is calculated as follows:
The average success rate a classification has for task instructions it is supposed to work for
minus
The average failure rate, when it is drawing clicks for task instructions it is not supposed to work for. (This is what I call a dirty magnet.)
This gives you a sense of which classes are performing well, and which might need some more work.
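The magnetism score calculation can be sketched as follows. The rates passed in are invented for demonstration; in practice they come from your cleaned test data:

```python
# Magnetism score for a class: average success rate on the task instructions
# it is supposed to attract, minus its average "dirty magnet" rate, i.e. how
# often it drew clicks for instructions it was not supposed to attract.

def magnetism_score(success_rates, dirty_rates):
    """success_rates: per-instruction rates where this class was correct.
    dirty_rates: rates at which it wrongly drew clicks on other tasks."""
    avg_success = sum(success_rates) / len(success_rates)
    avg_dirty = sum(dirty_rates) / len(dirty_rates) if dirty_rates else 0.0
    return avg_success - avg_dirty

# A class succeeding 80% on average on its own tasks, while pulling 10% of
# clicks on unrelated tasks, scores 0.70.
score = magnetism_score([0.85, 0.75], [0.12, 0.08])
print(f"Magnetism score: {score:.2f}")  # prints "Magnetism score: 0.70"
```

A high score means the class attracts the right clicks and few of the wrong ones; a low score flags either weak pull or a dirty magnet.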
Round 1 classification testing results
Over 2,000 people completed the first round, which was quite extraordinary. We normally need about 50 for reliable results. We had an overall success rate of 60%. In other words, the classes we expected to be selected for the instructions were selected 60% of the time. Typically, we find success rates between 40% and 60% for round one, so our hypothesis was looking promising.
To get it over 80% we needed to make some adjustments. The class causing the most issues was “Symptoms, Diagnosis, Spread,” as we expected. The sorting data showed strong links between symptoms and diagnosis-type tasks. There were some—though weaker—connections between these tasks and transmission-type tasks.
There were three tasks in the Transmission cluster:
- Transmission, spread, epidemiology: 62% success rate
- Virus survival / viability / persistence on surfaces, in air: 30% success rate
- Virus mutation, new strains: 19% success rate
It was decided that for round two we would break this class into two:
- Symptoms, Diagnosis
- Virus survival, Spread, Mutation
Tasks about testing to see if you have the virus were strongly connected to the “Symptoms, Diagnosis, Spread” class and the “Vaccine, Immunity, Treatment” class. It was decided that there should be links to testing from both these classes. This is what I call a Twins situation. It is quite typical to discover two dominant paths for a particular task.
The task “At-risk, vulnerable” was performing poorly. This, again, was expected. It was the most orphaned task in the sort, meaning that it was the least linked with any other tasks. The reason it was not placed at the top level as its own class was that it was a tiny task in the WHO vote, coming 29th. However, data we received from a COVID-19 Top Tasks survey in Norway showed it was a number 4 task there. On the national Irish health COVID-19 website and on the UK NHS website, it is at the top level of the classification. As we are seeking to design a universal classification for COVID-19, it was decided to make “At-Risk, Vulnerable” a top-level class.
Here is the classification for round two testing:
WHO, Government Guidance, Education, Training
Mental, Physical Wellbeing
Vaccine, Immunity, Treatment
Research, Statistics
Virus Survival, Spread, Mutation
Avoiding Infection
Symptoms, Diagnosis
News
End Date, New Outbreaks
At-Risk, Vulnerable
Round 2 classification testing results
After round two testing we had a success rate of 77%, which was 17 percentage points higher than round one. Typically, after round two we see a success rate somewhere between 60% and 70%, so 77% was an excellent result. Our target was to get over 80%.
The classification changes we made worked. The three transmission-related tasks had an average success rate of 37% in round one. To address this problem, a new class was created: “Virus Survival, Spread, Mutation”. In round two the success rate almost doubled to 68%. There were no major indications that the new class was behaving as a dirty magnet (drawing clicks for tasks it was not supposed to draw clicks for). The other new class, “At-Risk, Vulnerable”, worked even better with a success rate of 82%. Its dirty magnet impact was negligible.
Behavior was very consistent between rounds. For example, in round one, 80% selected “Vaccine, Immunity, Treatment” for the task instruction, “Have any drugs been approved for use on COVID-19?” In round two, it was 81%. In round one, 79% selected “Research, Statistics” for the task, “Find a chart that shows the trend / curve of coronavirus cases over time in Brazil.” In round two, it was 81%.
Specific changes we made after analyzing round two results included:
- The task “Likely course of illness, outcomes, prognosis” had the instruction, “What is a typical prognosis, likely course for COVID-19?” “Symptoms, Diagnosis” was designated as the correct path. However, in rounds one and two, 17% of people selected “Vaccine, Immunity, Treatment”. Since prognosis is part of the treatment cycle, it was decided to make this path also correct.
- The task “Transmission, spread, epidemiology” had the instruction, “Can you get infected by COVID-19 through the air?” “Virus Survival, Spread, Mutation” was designated as the correct path. The class “Avoiding Infection” was selected 27% of the time in round one and 33% of the time in round two. It was decided that it was logical to also make it a correct path.
- The task “Virus survival / viability / persistence on surfaces, in air” had the instruction, “How long can COVID-19 last on cardboard?” “Virus Survival, Spread, Mutation” was designated as the correct path. (In round one it was “Symptoms, Diagnosis, Spread”.) The class “Avoiding Infection” was selected 41% of the time in round one and 22% of the time in round two. It was decided that it was logical to also make it a correct path.
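Re-scoring with multiple correct paths (the Twins situation) means a first click counts as a success if it lands on any designated class. A sketch, with class names taken from the article and participant choices invented:

```python
# Scoring a task instruction that has more than one correct path: a click
# succeeds if it lands in the set of designated classes.

correct_paths = {
    "Can you get infected by COVID-19 through the air?": {
        "Virus Survival, Spread, Mutation",
        "Avoiding Infection",
    },
}

def rescore(instruction, choices, correct_paths):
    """Fraction of first clicks landing on any correct class."""
    targets = correct_paths[instruction]
    return sum(1 for choice in choices if choice in targets) / len(choices)

# Illustrative choices: three of four land on a correct path.
choices = [
    "Virus Survival, Spread, Mutation",
    "Avoiding Infection",
    "News",
    "Avoiding Infection",
]
rate = rescore("Can you get infected by COVID-19 through the air?",
               choices, correct_paths)
print(f"{rate:.0%}")  # prints "75%"
```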
Once we made these changes, we re-ran the data and found that the success rate had climbed to 80%. As we were making these changes, Top Tasks results data came in from Canada, where over 6,000 people responded to the survey. As you can see from the following chart, the top two tasks are about financial support and money issues.
Financial support and money issues were tiny tasks in the WHO and Norwegian COVID-19 Top Tasks studies. However, other data sources also indicated that financial support was a critically important task in certain countries. After consultation, it was decided to add the class “Financial Support” to the top level of the classification. In an ideal world, we would have gone to another round of testing. However, after consideration, it was decided that “Financial Support” was a sufficiently unique task and that it would fit into the top level without being disruptive.
While we will always have top tasks that are unique to a particular geography or environment, we did find that there was roughly a 70% alignment between the WHO, Canadian, and Norwegian top tasks, which is pretty strong. We found interesting alignments throughout the data. For example, the top task of those 17 and younger in Canada was “End dates, new normal, safe again.” It was also the top task of Norwegians 17 and younger.
I’ve been doing this Top Tasks work for almost 20 years now. I’ve seen a lot of data from a lot of countries. If there’s one thing I’ve learned it’s this: We humans have so much more in common than that which separates us. And one of the most universal things we have in common is the intense belief that we are special, unique and different. If only we were more excited by what we share in common.
Anyway, here is the final classification we came up with for COVID-19. Feel free to use it. We hope it’s useful.
Coronavirus disease (COVID-19) pandemic classification
Links
COVID-19 Top Tasks WHO (PowerPoint)
Summary of the Top Tasks study carried out in April 2020 for WHO to identify COVID-19 top tasks for healthcare providers, academia and the general public.
Recording: Sorting of top WHO COVID-19 tasks into groups, classes
May 7, 2020: A summary of the top tasks survey results of healthcare providers, academia and public. Analysis of how people grouped tasks. Initial classification hypothesis for COVID-19, which will then go through multiple rounds of testing. Objective: to help WHO and healthcare providers create a comprehensive, intuitive classification so that people can more quickly and easily find what they are looking for in relation to COVID-19.
PowerPoint: Sorting Coronavirus WHO presentation
Raw Data from COVID-19 sorting of tasks (Excel)
Similarity Matrix COVID-19 sorting of tasks (Excel)
Recording: Round 1 Classification Results Top Tasks WHO COVID-19
May 14, 2020: Discussion of the results from the first round of hypothetical classification testing for coronavirus disease / COVID-19 information. Adapting classification based on results and preparation for round two testing. Objective: to help WHO and healthcare providers create a comprehensive, intuitive classification so that people can more quickly and easily find what they are looking for in relation to COVID-19.
PowerPoint: Round 1 WHO COVID-19 Classification Test presentation
Recording: Final results WHO COVID-19 top level classification
May 21, 2020: Summary of how the information architecture design evolved through testing and evidence-based research into a top-level classification.
PowerPoint: Final Classification results COVID-19 Top Tasks
Top Tasks results, classification results, testing task instructions (Excel)