Nielsen (1994). Chapter 5 – Usability Heuristics.

Nielsen, J. (1994). Chapter 5 – Usability Heuristics. In Usability Engineering (2nd ed., pp. 115-163). San Diego, California: Academic Press.

“… mumble screens …” (p 117) [Like ‘lorem ipsum’ -oki ]

“Screen layouts should use the gestalt rules for human perception [Rock and Palmer 1990] to increase the users’ understanding of relationships between the dialogue elements. These rules say that things are seen as belonging together, as a group, or as a unit, if they are close together, are enclosed by lines or boxes, move or change together, or look alike with respect to shape, color, size, or typography.” (p 117)

“Most systems doing this have only two levels of interface complexity: novice mode and expert mode, but in principle it might be possible to provide multiple nested levels of increased complexity. This nested design strategy is sometimes referred to as training wheels [Carroll 1990a].” (p 122)
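[A minimal sketch of what nested complexity levels might look like in code; the levels and menu items below are invented for illustration, not taken from the chapter.]

```typescript
// Illustrative only: menu items are tagged with the minimum "complexity level"
// at which they appear, so a training-wheels user sees a reduced interface.
type Level = 1 | 2 | 3; // 1 = training wheels, 3 = full system

interface MenuItem { label: string; minLevel: Level; }

const menu: MenuItem[] = [
  { label: "Open", minLevel: 1 },
  { label: "Print", minLevel: 1 },
  { label: "Mail merge", minLevel: 2 },
  { label: "Macro recorder", minLevel: 3 },
];

function visibleMenu(userLevel: Level): MenuItem[] {
  return menu.filter(item => item.minLevel <= userLevel);
}
```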

“Not only did the training wheels users get started faster, but they also liked the system better and scored slightly higher on a comprehension exam after the study. Even better, the initial use of the training wheels interface did not impair users when they later graduated to the full system. On the contrary, users who had learned the basics of the system with the training wheels interface learned advanced features faster than users who had been using the full system all the time [Catrambone and Carroll 1987].” (p 122-123)

[Heading: “Response time” (p 135) …]

“… [Miller 1968; Card et al. 1991]: 0.1 second is about the limit for having the user feel that the system is reacting instantaneously … 1.0 second is about the limit for the user’s flow of thought to stay uninterrupted … 10 seconds is about the limit for keeping the user’s attention focused on the dialogue.” (p 135)
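[A rough sketch of how these three limits might drive feedback choices in an application; the mapping of ranges to feedback styles is my own reading of the guideline, not prescribed verbatim in the chapter.]

```typescript
// Sketch only: choose a feedback style from an estimated operation duration,
// using the 0.1 s / 1.0 s / 10 s limits quoted above.
type Feedback = "none" | "busy-indicator" | "progress-with-percent-done";

function feedbackFor(estimatedMs: number): Feedback {
  // Up to ~0.1 s the response feels instantaneous; up to ~1 s the flow of
  // thought stays uninterrupted, so no special feedback is needed yet.
  if (estimatedMs <= 1000) return "none";
  // Up to ~10 s the user's attention can stay on the dialogue.
  if (estimatedMs <= 10000) return "busy-indicator";
  // Longer waits: show progress so the user can decide whether to wait.
  return "progress-with-percent-done";
}

console.log(feedbackFor(80));     // "none"
console.log(feedbackFor(4000));   // "busy-indicator"
console.log(feedbackFor(30000));  // "progress-with-percent-done"
```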

“Given that undo and escape facilities are generally available, users will feel encouraged to rely on exploratory learning since they can always try out unknown options, trusting in their ability to get out of any trouble without ill effects.” (p 138)
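[The "get out of any trouble" point is, in implementation terms, an undo stack; a bare-bones sketch below, with names of my own choosing.]

```typescript
// Bare-bones undo support (illustrative): every action knows how to reverse itself,
// so exploratory use never leaves the user stuck.
interface Command {
  redo(): void;
  undo(): void;
}

class History {
  private done: Command[] = [];

  perform(cmd: Command): void {
    cmd.redo();
    this.done.push(cmd);
  }

  undo(): void {
    const last = this.done.pop();
    if (last) last.undo(); // nothing to undo: silently do nothing
  }
}

// Example: a reversible rename, tried out "without ill effects".
let title = "draft";
const history = new History();
history.perform({ redo: () => (title = "final"), undo: () => (title = "draft") });
history.undo(); // title is "draft" again
```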

“… paying attention to the user’s new actions should get a higher priority than finishing the user’s old actions.” (p 139)
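[One way this priority rule shows up in current code: cancel in-flight work when a new user action arrives. The fetch-based search endpoint below is assumed purely for illustration.]

```typescript
// Sketch: a new query aborts the previous one, so the user's newest action wins
// over finishing the old one.
let inFlight: AbortController | null = null;

async function search(query: string): Promise<unknown> {
  inFlight?.abort(); // stop finishing the old action
  inFlight = new AbortController();
  const res = await fetch("/search?q=" + encodeURIComponent(query), {
    signal: inFlight.signal, // "/search" is a made-up endpoint
  });
  return res.json();
}
```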

“Often, error messages can be worded such as to suggest that the problem is really the computer’s fault — as indeed it is since the interface in principle ought to have been designed to have made the error impossible.” (p 143-144)

“The problem turned out to be that the system required the commands to be typed in lower case and the user had typed them in upper case. This difficulty is known as ‘description error,’ [Norman 1983] since the descriptions of the two situations are almost identical and therefore likely to be confused.” (p 148)
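[The specific trap described here (upper- versus lower-case commands) can be designed away by normalizing input before matching; a hypothetical sketch:]

```typescript
// Hypothetical command table: matching is case-insensitive, so "PRINT" and
// "print" can never be confused as two different commands.
const commands = new Map<string, () => void>([
  ["print", () => console.log("printing…")],
  ["quit", () => console.log("quitting…")],
]);

function run(input: string): void {
  const handler = commands.get(input.trim().toLowerCase());
  if (handler) handler();
  else console.log(`Unknown command: ${input}`);
}

run("PRINT"); // works despite the upper case
```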

“‘It is all explained in the manual’ should never be the system designer’s excuse when users complain that an interface is too difficult.” (p 149)

“Two main search tools are the index and the overview map of the structure of the information space. Overview maps in books are normally in the form of a table of contents, but a diagram may sometimes be helpful too. Indexes are so obviously useful that there should be no need to mention them, except that many manuals are still published without an index. The index to a printed manual should contain not only the system’s own terminology but also user-oriented task terminology as well as a large number of synonyms, including the terms commonly used by competing vendors for the same concepts, since some users will have also used those other vendors’ systems. Online documentation should also have a rich index with synonyms which can furthermore offer the user full-text search capabilities and hypertext linking between related issues.” (p 152)
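[A toy sketch of a synonym-aware index lookup; the terms in the table are invented, including the "competing vendor" vocabulary.]

```typescript
// Toy index: user-oriented terms, synonyms, and competitors' terms all resolve
// to the manual's own entry.
const indexEntries: Record<string, string> = {
  "directory": "directory",
  "folder": "directory",             // another vendor's term for the same concept
  "keyboard shortcut": "keyboard shortcut",
  "hotkey": "keyboard shortcut",     // common synonym
};

function lookUp(term: string): string | undefined {
  return indexEntries[term.toLowerCase()];
}

console.log(lookUp("Folder")); // "directory"
```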

[Heading: “Heuristic evaluation” (p 155) …]

“Heuristic evaluation is done by looking at an interface and trying to come up with an opinion about what is good and bad about the interface.” (p 155)

“The goal of heuristic evaluation is to find the usability problems in a user interface design so that they can be attended to as part of an iterative design process.” (p 155)

“… recognized usability principles (the ‘heuristics’).” (p 155)

“The figure clearly shows that there is a nice payoff from using more than one evaluator, and it would seem reasonable to recommend the use of about five evaluators, and certainly at least three.” (p 156)
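[The shape of that payoff curve is usually modeled (Nielsen and Landauer 1993) as below, where N is the total number of usability problems in the interface and λ is the probability that a single evaluator finds a given problem, around 0.3 in their data; the numbers are from that paper, not from this chapter.]

```latex
\[
  \mathrm{Found}(i) \;=\; N\bigl(1 - (1 - \lambda)^{i}\bigr)
\]
% With lambda around 0.3, one evaluator finds about 30% of the problems,
% while five evaluators find about 1 - 0.7^5, roughly 83%,
% hence the recommendation of about five evaluators and at least three.
```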

“Heuristic evaluation is explicitly intended as a ‘discount usability engineering’ method [Nielsen 1989b, 1990a]. Independent research [Jeffries et al. 1991] has indeed confirmed that heuristic evaluation is a very efficient usability engineering method, and one recent case study found a benefit-cost ratio for a heuristic evaluation project of 48, with the cost of using the method being about $10,500 and the expected benefits being about $500,000 [Nielsen 1994c]. As a discount usability engineering method, heuristic evaluation is not guaranteed to provide ‘perfect’ results or to find every last usability problem in an interface.” (p 160)

“… ‘double experts’ with expertise in both usability in general and the kind of interface being evaluated.” (p 161)

“Another way of utilizing different kinds of expertise is the pluralistic usability walkthrough technique [Bias 1991], … he advocates evaluating a single screen design at a time, having the full group discuss each screen before the evaluators move on to the next screen.” (p 162)
