How to Conduct a Heuristic Evaluation

In this week’s post I will be examining Jakob Nielsen’s online writings on heuristic evaluation, and looking at how the information in these short articles can be used to create more user-friendly interfaces and products.

We’re going to start off with the article entitled “How to Conduct a Heuristic Evaluation”.

In this article Nielsen describes heuristic evaluation as a method for discovering the usability problems in a user interface design so they can be fixed before the product is released to the public. His approach involves a small group of evaluators examining an interface and judging it against a set of guidelines known as the Ten Usability Heuristics. These guidelines will be described and discussed in the next post.

Nielsen recommends a minimum of three evaluators for the most reliable results, since a single evaluator is unlikely to identify all of the usability problems, regardless of how good she is. He also addresses the question of using many evaluators: results show little difference between the findings of a large group of evaluators and those of a smaller one. Averaging over six of his projects, Nielsen found that a single evaluator identified only 35% of the usability problems, compared with approximately 75% when five evaluators were used. The general recommendation is therefore three to five evaluators. Nielsen also recommends that evaluators go through the interface at least twice: the first pass to get a feel for the interface, and the second to focus on specific elements of it.
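The diminishing returns behind these percentages are often described with a simple discovery model (1 − (1 − λ)ⁿ, from Nielsen and Landauer's work on problem-finding), where λ is the fraction of problems a single evaluator finds and n is the number of evaluators. A minimal sketch, taking λ = 0.35 from the single-evaluator figure above; note the idealized model predicts somewhat more than the ~75% Nielsen observed with five evaluators, since real problems differ in how easy they are to detect:

```python
def proportion_found(n: int, lam: float = 0.35) -> float:
    """Expected fraction of usability problems found by n evaluators,
    assuming each independently finds a fraction lam of the problems."""
    return 1 - (1 - lam) ** n

# Print the curve of diminishing returns for 1 through 6 evaluators.
for n in range(1, 7):
    print(f"{n} evaluator(s): ~{proportion_found(n):.0%} of problems found")
```

Each additional evaluator adds less than the previous one, which is why Nielsen argues that a handful of evaluators captures most of the benefit of a large group.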

Unlike many other usability tests, where participants are not allowed assistance, heuristic evaluations allow evaluators to receive support when needed. In many instances the evaluators are unfamiliar with the interface and require assistance in order to use it and thus complete the evaluation. In summarizing their evaluations, participants are expected to explain why they disliked certain parts of the interface with respect to the set of heuristic guidelines they followed.

I’ll end this post with a question: why bother to conduct a heuristic evaluation of an interface at all? Because it could mean the difference between people enjoying and recommending your product, or criticizing it and branding it a failure.

