I firmly believe that observing real users doing real tasks is the ‘gold standard’ for usability testing, particularly when the designers watch the sessions themselves and see the problems that only real users can uncover.

However, full user testing sometimes falls outside the budget, and the project manager decides to use an expert usability assessment instead.

This works well for websites, where an expert usability consultant can put themselves in the shoes of the user and work through typical tasks, identifying critical usability issues.

But what if the system supports far more complex tasks, which users take years to learn?

Usability checklist

One solution is to provide the expert users with a checklist based on International Standard ISO 9241-110, to help them judge how well their system meets best-practice usability principles.

The seven principles which need to be checked are:

  1. Fit with user’s task. The interface should match the way users perform their tasks.
  2. Signposting. The users should be able to tell from the screens where they are in the task and what they can do next.
  3. Intuitiveness. The software should respond to the task context and follow accepted conventions for the target users.
  4. Learnability. The software should support learning.
  5. User control. The user should be able to control the interaction.
  6. Error tolerance. The software should minimise the likelihood and severity of errors.
  7. Customisation. Users should be able to customise the software to suit their skills and preferences.

For each principle, the checklist needs several specific component questions. For example, under principle two, Signposting, the user is asked ‘Is it clear what input is required?’, to which they answer ‘always’, ‘most of the time’, ‘sometimes’ or ‘never’.

When users identify critical issues (for example, a screen where it is not at all clear what input is required), it is a good idea to take a screenshot to capture and illustrate the problem.

At the end of each section, the user should be asked to give an overall judgement of the system (as a percentage) for that principle.
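
To make the mechanics concrete, here is a minimal sketch in Python of how such a checklist might be represented and scored. Note that the article asks users for a direct overall percentage per principle; deriving one from the component ratings, as below, is an alternative or a useful cross-check. The second question and the rating-to-score weights are illustrative assumptions, not taken from ISO 9241-110.

```python
# Illustrative sketch only: the second component question and the
# rating-to-score weights are assumptions, not from ISO 9241-110.
RATING_SCORES = {"always": 1.0, "most of the time": 2 / 3,
                 "sometimes": 1 / 3, "never": 0.0}

checklist = {
    "Signposting": [
        "Is it clear what input is required?",
        "Is it clear where you are in the task?",  # hypothetical question
    ],
    # ... one entry per principle
}

def principle_score(answers):
    """Average one user's ratings for one principle, as a percentage."""
    return 100 * sum(RATING_SCORES[a] for a in answers) / len(answers)

# One user's answers to the two 'Signposting' questions above:
print(principle_score(["always", "sometimes"]))  # -> roughly 66.7
```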

The results

The results are usually presented graphically and are particularly useful for benchmarking and for exploring why different users have different experiences.

For example, the diagram below shows three users’ judgements for a decision support system.

[Figure: usability spider diagram]
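
If you want to produce this kind of chart yourself, a radar (spider) plot takes only a few lines of matplotlib. The sketch below uses invented placeholder scores to show the shape of the code; they are not the data behind the diagram above.

```python
# Sketch of a spider diagram like the one described above, in matplotlib.
# The principle labels follow the article; the scores are invented
# placeholders, not the data behind the original chart.
import numpy as np
import matplotlib.pyplot as plt

principles = ["Fit with task", "Signposting", "Intuitiveness",
              "Learnability", "User control", "Error tolerance",
              "Customisation"]

# One list of percentage judgements per user (illustrative only)
users = {
    "User A": [80, 60, 70, 75, 65, 50, 40],
    "User B": [70, 75, 60, 80, 70, 65, 55],
    "User C": [60, 50, 80, 70, 60, 70, 45],
}

# Evenly spaced angles, one per principle, plus a repeat of the
# first angle so each user's outline closes back on itself
angles = np.linspace(0, 2 * np.pi, len(principles), endpoint=False).tolist()
angles += angles[:1]

ax = plt.subplot(polar=True)
for name, scores in users.items():
    ax.plot(angles, scores + scores[:1], label=name)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(principles)
ax.set_ylim(0, 100)
ax.legend(loc="lower right")
plt.show()
```

Repeating the first point is the one non-obvious step: it closes each user's polygon so there is no gap between ‘Customisation’ and ‘Fit with task’.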

The end result is a structured record of expert users’ judgements on key usability issues, together with plenty of specific, screenshot-backed problems for developers to fix.

It is not quite as powerful as regular usability testing, but it is practical, quick, efficient and revealing.