First off, I learnt a new word! Our first learning objective for today’s lesson was to ‘explain usability heuristics’… A heuristic is any approach to problem solving, learning, or discovery that employs a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals. In the context of usability it means answering:
“when is it a good idea to do user testing”
We are also going to define each type of usability test, determine when each is appropriate, and discuss common outputs and results, i.e.
“Why is the user having problems, why are they getting stuck, what is the reason for the usability test”
10 Usability Heuristics (Nielsen):
His principles for interaction design are called “heuristics” because they are broad rules of thumb rather than specific ‘usability guidelines’. Even though devices have changed over time, these 10 principles remain the same.
1 – Visibility of system status
- System should always keep users informed about status
- Provide appropriate feedback within reasonable time
Systems do this by using progress bars to show how much is left to do (e.g. when filling in a form) or how long you have to wait for something to load. They reassure the user that something is actually happening; people are more likely to wait for something if there is some representation of ‘how long’.
An interesting point was brought up about ‘who’ users blame based on the design of the progress indicator: apparently when Facebook used a bespoke loading wheel, users blamed the app for a long wait, whereas with the native progress wheel they blamed the device or the internet connection.
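As a toy sketch of this heuristic (a hypothetical text-mode progress bar, not something from the lesson), the key idea is simply: show both that something is happening and roughly how much is left.

```python
def progress_bar(step: int, total: int, width: int = 20) -> str:
    """Render system status as a bar plus a percentage, so the user can
    see both that something is happening and roughly how long is left."""
    percent = int(step / total * 100)
    filled = percent * width // 100
    return f"[{'#' * filled}{'-' * (width - filled)}] {percent}%"

# e.g. halfway through a 10-step checkout:
print(progress_bar(5, 10))   # [##########----------] 50%
```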
2 – System word match
- System should speak the users’ language
i.e. don’t use overly technical language or system-oriented terms. Don’t confuse people: follow real-world conventions and make information appear in a natural and logical order.
3 – User control and freedom
- Support undo and redo
- Allow users to easily exit when mistakes are made
For example, being able to retrieve a deleted email in Gmail. Users need a clearly marked “emergency exit” to leave an unwanted state without having to go through an extended dialogue.
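Under the hood, undo/redo is often just two stacks of previous states. A minimal sketch (hypothetical, not how Gmail actually implements it):

```python
class TextBuffer:
    """Toy editor buffer supporting undo and redo via two history stacks."""

    def __init__(self) -> None:
        self.text = ""
        self._undo: list[str] = []
        self._redo: list[str] = []

    def type(self, s: str) -> None:
        self._undo.append(self.text)   # snapshot state before the change
        self._redo.clear()             # a new action invalidates the redo path
        self.text += s

    def undo(self) -> None:
        if self._undo:                 # the "emergency exit": back out safely
            self._redo.append(self.text)
            self.text = self._undo.pop()

    def redo(self) -> None:
        if self._redo:
            self._undo.append(self.text)
            self.text = self._redo.pop()

buf = TextBuffer()
buf.type("Dear Matt")
buf.type(", please ignore this")
buf.undo()   # back to "Dear Matt"
buf.redo()   # forward to "Dear Matt, please ignore this"
```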
4 – Error prevention
- Eliminate error-prone situations
- Enable confirmations before completing critical actions
There is a good discussion about the language to use in ‘do you want to’ pop-up boxes in this blog by ‘8 Answers’ (this harks back to the ‘system word match’ heuristic).
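The confirmation pattern itself is simple: a destructive action only fires after an explicit second step. A hypothetical sketch (the names and flow here are invented, not from any real service):

```python
class AccountService:
    """Toy example of requiring confirmation before a destructive action."""

    def __init__(self) -> None:
        self.deleted = False
        self._pending = False

    def request_delete(self) -> str:
        self._pending = True
        # Plain, user-language wording (see the 'system word match' heuristic):
        return "Delete your account? This cannot be undone."

    def confirm_delete(self) -> None:
        if self._pending:          # deletion only happens after an explicit yes
            self.deleted = True

    def cancel(self) -> None:
        self._pending = False

svc = AccountService()
svc.request_delete()
svc.cancel()
print(svc.deleted)   # False — no accidental deletion
```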
5 – Consistency and standards
- System should use consistent language
- Similar events should look and sound similar
Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions. A style guide is both an enormous time saver for dev/design and essential for creating consistency across a product or a service. The copy and the design should look and sound similar. Matt mentioned that at One Fine Stay, before they settled on a style guide, they literally had 50 shades of grey in use throughout the website! This heuristic can step into the world of branding.
6 – Recognition over recall
- Minimise cognitive load by making options visible
- Provide clear instructions when necessary
- Don’t make users remember things
It is all about recognition over recall and the use of smart defaults. A good design learns from the user (for example through machine learning) to make things easier for them.
7 – Flexibility and efficiency of use
- Support novice and expert users (you can see a doodle I made about this during class below!)
- Allow users to tailor certain actions
Accelerators (which are unseen by the novice user) can be used to speed up the interaction for the expert user, but they must not come at the expense of the novice user. An example of an accelerator is the ‘Buy Now’ option on Amazon; there is also a step-by-step e-commerce user journey for the novice user.
8 – Minimal design
- Design should not contain more information than necessary
It is better to nail the design for a small subset of people. Minimal design can be hard to achieve during product/service creation as everything can seem so important, but it is essential to distil things down: every ‘extra’ unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
9 – Errors
- Communicate errors in plain language
- Indicate the problem and suggest a solution (or users will disappear!)
How products/services deal with error presentation is unsexy but very important. Showcasing a solution can not only ‘save’ a user experience but also improve it. Receiving a 404 message can be really annoying to a user, and there is a trend to turn this experience from frustration into something fun, useful and reassuring! It allows developers to display a bit of personality as well as point the user in the right direction.
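A 404 handler that both states the problem in plain language and suggests a fix might look like this (hypothetical sketch; the page list is invented, and the suggestion uses Python’s difflib):

```python
import difflib

KNOWN_PAGES = ["/courses", "/workshops", "/contact"]   # hypothetical site map

def friendly_404(path: str) -> str:
    """State the problem in plain language and suggest a likely fix,
    instead of just saying 'Error 404'."""
    message = f"We couldn't find the page '{path}'."
    matches = difflib.get_close_matches(path, KNOWN_PAGES, n=1)
    if matches:
        message += f" Did you mean '{matches[0]}'?"
    else:
        message += " Try starting again from the home page."
    return message

print(friendly_404("/worshops"))
# We couldn't find the page '/worshops'. Did you mean '/workshops'?
```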
10 – Help and documentation
- Provide specific instruction when necessary
Some good pointers on how to do this well can be found on this website; User Onboarding.
Task: Pick a website and evaluate the site according to heuristics and provide a rating (1-5) for each heuristic.
My table partner and I looked at lights4fun, as I had been on it recently to order some fairy lights. We gave it:
- 4/5 for system status – the checkout process was clearly shown
- 2/5 for minimal design – there was lots of conflicting information and a busy design
- 3/5 for error prevention – they automatically added batteries to the checkout to match the number of lights ordered (a bit cheeky), but it was fairly easy to delete them
The components of a test are:
- Problem: something in need of explanation
- Theory: an idea about how to explain it
- Testable propositions: means of obtaining information
- Test: action that elicits information
- Selection: interpretation of results
We then discussed why it is important to consider usability heuristics and carry out user testing. Some examples included:
- To inform a re-design
- To understand why there is an issue
- To gather opinions from fresh eyes to check that there isn’t anything which has been overlooked
- Learn user motivations and satisfactions
- Improve the business
- To possibly get positive feedback about the product/ service directly from users
- To gather all negative feedback so that it can be problem solved
- Quantitative data alone is not enough
There are 5 different ways of carrying out user testing:
Method 1 – Formal “old school”
Requires the use of a facility and facilitators. It can be quite costly and produce a lab atmosphere in which the tester could feel intimidated or uncomfortable. However, it can produce really good results; as with anything, it just depends on what you are doing. For example, Thomson Reuters revolutionised their offering through formal testing, using eye tracking and motion cameras in their high-tech design labs over the last 4 years, to the point where they now compete with Bloomberg. You can watch Rachel Jackson, Head of Design at Thomson Reuters, talk about their change of approach here.
It is worth considering that, as mobile use increases, we are moving into a world with a lot more context, and possible distraction, during the use of a product or service. You may get better results when this is taken into consideration.
Method 2 – Cafe Test “Guerrilla”
This follows a similar principle to user interviews in the research process. It is a great way of getting some quick-fire feedback: all you have to do is go to a public place and recruit participants for short, casual tests. This method will validate simple prototypes informally. My tutor Irene mentioned that she completed 11 interviews in 50 minutes at Kings Cross, armed with 5 main questions (each with a few sub-questions) about a prototype she was working on. She said it is better to have help when doing this, but it is perfectly possible to complete alone. The recruitment demographic needs to be considered. This method can often be seen as risky when working with big brands.
Method 3 – Remote Testing
Remote testing runs like a normal test, except that you share a prototype or live site with a participant online over screen-sharing software. It is useful when you want a wide geographic spread or when customers are far away. Matt prefers to use the screen-sharing functionality in Skype, as you can still see the person’s face while they complete the test, which can often indicate more about their actual opinion; you could also use something like Silverback.
Method 4 – Unmoderated
This method produces very quick results in a useful video format. You work with a company like ‘usertesting‘ to create a flow a user can complete on their own; you then get videos of their mouse movements, with audio picked up from their webcam. It can cost about $50 per user, which is cheap compared with traditional methods. A downside, however, is that you will not be there to nudge the user back on course or pick up any physical cues about what they are actually thinking/feeling. Matt mentioned that a ‘show-reel’ of these types of videos is a really good way of showing management the ‘user voice’, as well as giving evidence of what you have been up to! It is also a good way of reaching demographics without travelling halfway across the globe…
Method 5 – Face to Face
This is generally the recommended method and lies somewhere between guerrilla and formal. It is also the method we will be using in our personal projects. It provides users with tasks to complete on a prototype. You encourage them to talk out loud as they move through interactions and observe their actions. It is important to give them only the minimal context required. It is totally fine to correct the user and put them back on course if they go off on too wide a tangent, although tangents can also sometimes be useful. You need to make the user feel comfortable by reminding them that nothing they say will be incorrect; the test is not a test of them. When carrying out the test, use the user’s own language to help make them feel comfortable and sure of themselves. The skill is in not giving them any leading questions or initial impressions while still making them feel supported as they go through the task.
Task: Illustrate the testing process.
We worked in pairs again to take it in turns to run a fake test on the GA website. I came up with the following test tasks:
- Can you find out if there are any workshops to go to this weekend?
- Who is running the next UX course?
- What is General Assembly?
My partner had some difficulty finding the answer to the first question, but she got there in the end; there is no calendar on the GA website, so searching by date doesn’t really work very well… It was pretty easy to see who was running the next UX course. The tag line on the home page – “Learn technology, design and business skills from industry professional in our global community” – answered the last question easily enough. Irene watched as I asked the questions and gave a few pointers to my partner:
- You have to be tough in keeping the user on task (I was being quite playful and perhaps not staying on task)
- The questions should present separate tasks to be completed and not ways of getting somewhere on the website, that is up to the user.