Reading Reflection 3

Cooper, A., Reimann, R., & Cronin, D. (2007). Implementation models and mental models. In About face: The essentials of interaction design (pp. 27-40). Indianapolis, IN: Wiley Publishing, Inc.

In Chapter 2, Cooper (2007) distinguishes between an Implementation Model and a Mental Model. For designers, there is also a third model, the Represented Model: the way they choose to represent how the working program functions. Cooper says that “One of the most important goals of the designer should be to make the represented model match the mental model of users as closely as possible” (p. 30). This makes sense alongside other principles I’ve learned in design classes. After all, as a designer, you should design for your audience. If the mental model represents the user’s vision, then designers should create representations closer to that vision.

Cooper also discusses Mechanical-Age and Information-Age objects, noting that interfaces are sometimes limited by what we know and expect from the past. He suggests that any Mechanical-Age representations be improved upon with Information-Age enhancements. It hadn’t really occurred to me, but I can see how designers can easily restrict themselves to what is already known. It seems that Information-Age enhancements require some innovation and thinking outside the box.

Fu, W., & Pirolli, P. (2007). SNIF-ACT: A cognitive model of user navigation on the World Wide Web. Human-Computer Interaction, 22, 355-412.

The authors of this paper created a model aimed at predicting user navigation on the Web and at understanding users’ cognitive approach when navigating. They discovered that information scent cues (the relevance of link text to a user’s information goal) were a better predictor of navigation than link position. Because their first model was based solely on information scent cues, they created a second one that also took position into account, which ended up being the better predictor of the two.

The greatest concept I took from this was that users tend to “satisfice” when navigating. That is, after a brief time scanning a page, users will choose the link with the greatest information scent from only the select few that have been evaluated, rather than putting extra effort into finding the best link on the whole page.

I had known about satisficing before, but I think this was a good example of the concept, and the authors’ findings really put it into perspective.

Nielsen, J. (2005). Jakob Nielsen’s online writings on heuristic evaluation. Retrieved from

Nielsen covers several points on usability testing, focusing on heuristic evaluation and user testing. Heuristic evaluation involves having a small set of evaluators (3-5 people recommended) examine an interface in terms of usability. User testing involves an observer/experimenter who interprets user actions related to usability issues. He suggests using both, since some problems can only be identified by one method or the other.

Nielsen also discusses the severity of usability problems, which he defines as a combination of frequency, impact, and persistence. I didn’t know about severity ratings before reading this, but he suggests that all problems should be found first, and then evaluators should rate them in terms of severity before the scores are averaged.

The webpage also included an article on technology transfer that looked at the usability of usability methods. Nielsen found that user testing and heuristic evaluation were rated most useful because of the quality of data they generated. Newer methods also tended to be rated lower. Aside from the findings, I was glad to see it mentioned that companies realize the need for increased usability.

Vorvoreanu, M. (2010). Understanding NSF investments: Heuristic evaluation.

The evaluation gave a great example of what Nielsen explained on his page, including usability principles and severity ratings. I personally haven’t seen any other heuristic evaluation reports, but this looked well organized and it was easy for me to follow along, especially with the snapshots. I noticed Dr. V went with her signature colors for the layout, too. 😉

Questions I have: Is this report what you show to clients? Or was this assembled for our class or a portfolio of some sort? I’m curious how the evaluation is used or what process you go through once the report is completed.