CSC 151-02, Fall 2006 : Schedule : Reading 33
Summary: We consider the concept of usability and different approaches for evaluating the usability of computer software or other tools.
"How does this thing work? What is that supposed to mean? What am I supposed to do now? There must be a better way to do this."
If you've ever found yourself saying these things while using a web site, computer software, or some other tool, you may have experienced poor usability. Usability is a property of tools that roughly corresponds to the idea of being "user-friendly" or "easy to use." Jakob Nielsen, a well-known advocate for web site usability, identifies five components of usability, paraphrased below (Nielsen, 2003):

- Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
- Efficiency: Once users have learned the design, how quickly can they perform tasks?
- Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?
- Errors: How many errors do users make, how severe are these errors, and how easily can users recover from them?
- Satisfaction: How pleasant is it to use the design?
All of these aspects of usability are relative to some population of users: For example, a web site might be very usable for adults, but boring or difficult for kids without the same patience and reading ability. A camera might be very usable for a trained photographer, but problematic for novices. A voting system might be easy for many adults to use, but full of errors and frustration for the "vertically challenged" or those with limited vision or dexterity. Moreover, different aspects of usability matter most in different contexts. A system to be used by cashiers at the grocery store must be very efficient, but a web-based shopping cart must be easy to learn how to use, or frustrated customers will take their business elsewhere.
November 14, 2006 is World Usability Day, an event intended to raise awareness of the importance of usability in our everyday lives and in supporting values such as democracy, fairness, and safety (UPA, 2006). Of course, more usable tools can make our lives more pleasant as well. In honor of World Usability Day, we'll do a laboratory exercise in which you will evaluate the usability of a common household tool.
There are many different ways of evaluating usability, depending on the designers' goals for the tool.
If our focus is on the efficiency of the tool, one approach is to recruit a number of representative users (such as expert or novice photographers), give them some tasks to complete with the tool, and measure how long it takes the users to complete the tasks. The less time, the more efficient the design. Another approach is to use psychological models to predict how long it will take users to complete low-level tasks. For example, one model predicts the amount of time required to enter a command given the exact sequence of keystrokes; another model predicts how long it takes to use a mouse to click on a button or icon, depending on how large and how far away it is. A common goal is for frequently-performed tasks (such as changing the style of some text in a word processor) to be faster than tasks that are performed less often (such as changing the page margins).
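The mouse-pointing model mentioned above is commonly formalized as Fitts's law: pointing time grows with the distance to the target and shrinks as the target gets wider. Here is a minimal sketch in Python; the constants a and b are device-dependent values that must be fitted empirically, and the numbers used here are invented for illustration.

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predict pointing time (seconds) using Fitts's law.

    distance: distance from the cursor to the target's center, in pixels
    width: width of the target along the line of motion, in pixels
    a, b: empirically fitted device constants (illustrative values only)
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, far-away icon takes longer to hit than a large, nearby button.
print(f"small/far target:  {fitts_time(distance=800, width=16):.2f} s")
print(f"large/near target: {fitts_time(distance=100, width=100):.2f} s")
```

Models like this let designers estimate the cost of a layout (say, a tiny toolbar icon in a screen corner) before recruiting a single user.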
If our focus is on satisfaction, then of course we should ask users how they like using the tool. But if we are concerned about learnability or errors, then it is not enough to ask users whether the tool seems easy to learn or whether they think they make many mistakes. People have a difficult time remembering and verbalizing what they have done in the past, and an even harder time predicting what they will do in the future!
As with efficiency, we have both theoretical and experimental approaches to evaluating learnability and errors. In one more theoretically-oriented approach, called heuristic evaluation, usability experts assess the tool with respect to some rules of thumb, or heuristics. Heuristics such as "make system status visible" and "recognition rather than recall" (Nielsen, 1994) are intended to capture some properties of systems that make them easier to learn and less error-prone. Several experts review the tool looking for violations of these heuristics, which are potential usability problems, and rank these violations according to their severity.
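To make the ranking step concrete, here is one way the findings from several experts might be tallied. The violations and severity scores below are invented; severity is rated on Nielsen's 0 (not a problem) to 4 (usability catastrophe) scale.

```python
from collections import defaultdict

# Each expert independently records (heuristic violated, severity 0-4).
# These findings are invented for illustration.
findings = [
    ("visibility of system status", 3),
    ("recognition rather than recall", 2),
    ("visibility of system status", 4),
    ("recognition rather than recall", 2),
]

# Collect all severity ratings given to each problem.
ratings = defaultdict(list)
for heuristic, severity in findings:
    ratings[heuristic].append(severity)

# Report the most severe problems first (highest mean severity).
for heuristic, scores in sorted(ratings.items(),
                                key=lambda item: -sum(item[1]) / len(item[1])):
    print(f"{heuristic}: mean severity {sum(scores) / len(scores):.1f}")
```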
Usability testing is a more experimental approach. As in measuring efficiency, we recruit representative users and give them tasks to complete. However, rather than timing the tasks, we observe the users' behavior. In some studies---particularly studies of web sites---computer software is used to record the sequence of mouse clicks and keystrokes made by the user. Later, usability experts analyze this data for errors and moments of hesitation in order to identify aspects of the user interface that are particularly problematic. In other studies, representative users are asked to think aloud as they perform the tasks. Think-aloud studies are useful for understanding why users hesitate or make mistakes, and therefore are particularly good for understanding the causes of usability problems.
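As a sketch of the log analysis described above, the snippet below scans a list of timestamped events for long pauses, which may mark moments of hesitation. The event data and the 5-second threshold are assumptions for illustration; real analysis tools are far more sophisticated.

```python
# Timestamped user-interface events, as logging software might record them.
# Times are seconds since the task began; the data are invented.
events = [
    (0.0, "click", "Search"),
    (1.2, "keystroke", "c"),
    (1.4, "keystroke", "a"),
    (9.8, "click", "Advanced options"),  # long pause before this click
    (10.3, "click", "Submit"),
]

HESITATION_THRESHOLD = 5.0  # seconds of inactivity worth investigating

# Report any event preceded by an unusually long pause.
for (t_prev, _, _), (t, kind, target) in zip(events, events[1:]):
    if t - t_prev >= HESITATION_THRESHOLD:
        print(f"hesitation of {t - t_prev:.1f} s before {kind} on {target!r}")
```

A long pause before clicking "Advanced options" would prompt the evaluators to ask: what was the user looking for, and why was it hard to find?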
Think-aloud studies are often the most helpful for improving a design, and the easiest to plan and perform. A remarkable amount can be learned from conducting think-aloud studies with just 3-5 participants (Nielsen, 2000).
In planning a usability study, we must ask ourselves several questions: What is the tool or system being studied? What criteria will we use to identify representative users? (Is there more than one type of representative user?) How will we recruit representative users? What tasks should the study address? Once a system, representative users, and tasks are identified, it is important to practice the study to work out software bugs, as well as problems with the tasks or the study procedure.
In conducting the usability study, there are several key roles; typically a facilitator guides the participant through the tasks while one or more observers take notes (McCracken and Wolfe, 2004).
After the study is over, the evaluators compile a list of all the usability problems, particularly noting problems encountered by more than one participant. They often rank the severity of the problems, as in a heuristic evaluation, and may make recommendations for prioritizing the problems or how to fix them.
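In outline, that compilation step amounts to counting how many distinct participants hit each problem and flagging those seen more than once. The participant IDs and problem descriptions below are invented for illustration.

```python
from collections import defaultdict

# Problems each participant encountered, as noted by the observers.
# All names and problem descriptions are invented.
observations = {
    "P1": {"missed the Save button", "confused by icon labels"},
    "P2": {"confused by icon labels", "couldn't find Undo"},
    "P3": {"confused by icon labels", "missed the Save button"},
}

# For each problem, record which participants encountered it.
seen_by = defaultdict(set)
for participant, problems in observations.items():
    for problem in problems:
        seen_by[problem].add(participant)

# Problems hit by more than one participant (marked *) deserve the
# most attention.
for problem, participants in sorted(seen_by.items(),
                                    key=lambda item: -len(item[1])):
    marker = "*" if len(participants) > 1 else " "
    print(f"{marker} {problem} ({len(participants)} participants)")
```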
Designing software and web sites is an iterative process: We come up with a design, test it to find out what the problems are, fix those problems in a new design, and so forth until we are satisfied (or run out of time or money). Think-aloud studies and other forms of evaluation can be used at many different stages of design, from early sketches and paper prototypes to the finished product.
Several of these approaches to evaluating usability involve human beings as research subjects. The Belmont Report defines principles and guidelines for the ethical treatment of human subjects. An important issue for every study is informed consent: participants must comprehend the benefits and risks of the study, and must be allowed to agree or decline to participate, or to end their participation once the study is underway, without coercion. Usually, each participant is informed of the risks and benefits of the research through an Informed Consent form, which the participant signs to indicate consent. Other important ethical issues include the anonymity and confidentiality of participants, any deception of participants as part of the study, the nature of physical and psychological risks for participants, efforts made to guard against those risks, and the tradeoff of those risks with anticipated benefits. Institutions that conduct research, including Grinnell College, must have an Institutional Review Board to supervise human subjects research with respect to these ethical concerns.
Usability studies typically do not involve deception and have minimal risks. However, informed consent and confidentiality are still significant concerns.
Since our laboratory exercises have education rather than research as their purpose, you do not need to sign informed consent forms, and we do not need approval for our exercises from the Institutional Review Board. However, we should respect confidentiality by handling the data we generate carefully and not identifying specific problems we encounter with specific people in the class.
Daniel D. McCracken and Rosalee J. Wolfe (2004). User-Centered Web Site Development: A Human-Computer Interaction Approach. Pearson Education, Upper Saddle River, NJ.
The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (1979). The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects. Accessed Sunday, 12 November 2006 at http://ohsr.od.nih.gov/guidelines/belmont.html
Jakob Nielsen (2003). Usability 101: Introduction to Usability. Alertbox, 25 August 2003. Accessed Sunday, 12 November 2006 at http://www.useit.com/alertbox/20030825.html
Jakob Nielsen (2000). Why You Only Need to Test with 5 Users. Alertbox, 19 March 2000. Accessed Sunday, 12 November 2006 at http://www.useit.com/alertbox/20000319.html
Jakob Nielsen (1994). Ten Usability Heuristics. Accessed Sunday, 12 November 2006 at http://www.useit.com/papers/heuristic/heuristic_list.html
Usability Professionals' Association (2006). About World Usability Day. Accessed Sunday, 12 November 2006 at http://www.worldusabilityday.org/about
Janet Davis (email@example.com). Created November 12, 2006.