Heuristics are a well-established and accepted set of UX principles used to assess how well a user interface has been designed for its intended purpose.
UX practitioners have relied on them for many years, and they were more formally authored in the ‘90s by Jakob Nielsen and Rolf Molich. They have recently been updated with minor changes and are as relevant now as they were then.
A heuristic evaluation is what happens when an expert, or group of experts, reviews a product against a set of heuristics, with the likely result being a list of usability issues that need to be addressed.
Such an evaluation should support, rather than replace, user testing as part of the Human-Centred Design process.
Both are equally important and each will unearth crucial information for the UX designer and product owner.
An expert review can follow a heuristic evaluation once heuristics have already been established; as a result, expert reviews may be less formal than a heuristic evaluation.
Things to consider
Not just anyone can do an evaluation. It requires skill, knowledge and experience from one or more experts in the field.
Expert reviews tend to be less formal and structured than a heuristic evaluation, so think about what you are after and what you need feedback on.
There are pros and cons to holding heuristic evaluations. On the plus side:
The evaluation process is technical and based on a specific, well-established set of criteria
Several heads are better than one, so to speak, making it more likely that all issues will be identified.
The team of evaluators can focus attention on specific issues.
Issues can be identified early in development and their impact on the final UX can be determined.
Evaluations avoid the ethical, practical, logistical and financial issues associated with other forms of analysis that require target users.
They can, however, be used in parallel with usability testing.
Assuming the correct heuristics have been identified, then evaluators can help to reach a more optimised design solution.
On the downside:
The quality of the output strongly correlates to the quality of the people doing the evaluation. Therefore time, effort and resource are required to locate and recruit those people.
You may find that evaluators raise many items beyond those specific to usability, which is why it is important to define the scope of the evaluation.
It is important to define the correct number and type of heuristics at the outset.
Evaluators need to be skilled and diligent enough to find all of the matching usability issues for those heuristics.
Depending on the industry you are working in, it may well be difficult to locate and/or pay for experts in that area - finance and banking are notable examples.
To get the best results, several evaluators may be needed, and it may therefore be more cost-effective to do usability testing instead.
Although based on science and experience, some of the output may still be considered subjective and prone to bias.
Conducting a Heuristic Evaluation
To conduct a thorough and successful heuristic evaluation, take into account the following steps:
1. Set precise objectives
Be clear about what you are going to assess, be it the whole experience or a subset of it.
When you have decided what you are testing, you can then think about how you are going to test it.
2. Understand your users
This is a fundamental part of any UX activity. Gain a full understanding of who the users of the system you are evaluating are.
Ensure you are aware of their motives for using the system and of their objectives.
3. Select your evaluators
Ideally, you are looking for 3-5 people to conduct the evaluation, though you can do it with fewer.
Having more gives more breadth of expertise and provides more objectivity.
The people selected should have experience in usability and/or the industry at which the system is targeted.
4. Define your set of heuristics
It’s good practice to apply between 5 and 10 heuristics to evaluate with.
These can be from the Nielsen-Norman heuristics that I talked about in my first post.
However, they are not exclusive; you can swap in others that may be more appropriate for the system you are evaluating.
5. Set your expectations
Before the evaluation begins, have a discussion with your evaluators on what your expectations are.
Are there specific tasks or elements of the experience you would like to be assessed?
Is there a specific measure you would like them to use, e.g. High / Medium / Low?
6. First stage evaluation
Evaluators interact with the system without any constraints applied.
They pinpoint for themselves what they consider to be issues that require more in-depth analysis.
7. Second stage evaluation
Issues identified in the first stage are then interrogated in more detail.
Ask specific questions:
Is the issue local to that part of the experience?
How does it affect the overall experience?
How severe is the issue?
What are potential solutions to the issue?
What are the implications of not fixing the issue?
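For teams that track findings in a script or spreadsheet export, the questions above can be captured as a simple issue record and triaged by severity. This is purely an illustrative sketch - the field names and severity labels here are my own, not part of any formal heuristic evaluation standard.

```python
from dataclasses import dataclass

# Map the High / Medium / Low measure to a sort order (most severe first).
SEVERITY_ORDER = {"High": 0, "Medium": 1, "Low": 2}

@dataclass
class Issue:
    heuristic: str      # which heuristic the issue relates to
    description: str    # what the evaluator observed
    severity: str       # "High", "Medium" or "Low"
    local: bool         # is the issue local to one part of the experience?
    proposed_fix: str = ""

def triage(issues):
    """Order issues so the most severe are addressed first."""
    return sorted(issues, key=lambda issue: SEVERITY_ORDER[issue.severity])

# Hypothetical findings from a second-stage evaluation.
issues = [
    Issue("Visibility of system status", "No progress indicator on upload", "Medium", True),
    Issue("Error prevention", "Destructive delete has no confirmation step", "High", True),
]

for issue in triage(issues):
    print(f"[{issue.severity}] {issue.heuristic}: {issue.description}")
```

Even a lightweight record like this makes the feedback session easier to run, because every issue arrives with its severity and scope already noted.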
8. Hold a feedback session
Get back together with your evaluators to talk about what they have found.
Ensure you understand what the issues are, how severe they are, and what is needed to resolve them.
Consider having a re-review, or a follow-up expert review, once fixes have been applied.
This will ensure they have been successful and have not caused any additional issues.
Think about what time and budget you have available, then choose between an expert review and a full heuristic evaluation.
Do your research and select the best people you can and who have expertise in usability and the relevant industry.
Carefully consider your assessment criteria and what aspect of your product offering you want to target.
Take the feedback seriously and do whatever you can to implement any recommendations.
Get in touch with the author
Darren Wilson, Managing Director at UXcentric
07854 781 908