RAA 5: User-Centered Design and Usability Testing of a Web Site

Corry, M., Frick, T., & Hansen, L. (1997). User-centered design and usability testing of a Web site: An illustrative case study. Educational Technology Research and Development, 45(4), 65-76. doi:10.1007/BF02299683

The authors of this article were given several tasks by administrators at Indiana University: determine how useful the existing university website was through a needs analysis and usability testing, and then develop a new site that would better meet users' information needs.

A needs analysis was conducted first. The authors interviewed 35 campus departments to determine the most frequently asked questions. These questions were put onto index cards and sorted by frequency, which revealed over 30 categories. These findings were used to create the first paper prototype.

Usability testing was then conducted with 21 people, using paper versions of both the original website and the new prototype. Participants could view only one page at a time and were asked to think aloud while they answered 15-20 questions for each website.

A second phase of usability testing was then conducted with 16 participants, focusing only on the newer website. Changes made before this round included renaming links, condensing multipage nodes into single pages, and organizing university departments into one long alphabetized list of links.

Once usability testing with the paper prototypes was completed, the authors conducted another usability test with an online version of the new website, using 11 participants. You can tell this article is dated: every participant tested the website in the Lynx, Mosaic, and Netscape browsers.

Lastly, a second round of testing with the computer prototype was conducted to evaluate the changes made to fix the problems identified in the previous phase.

Main Findings
The first round of paper prototyping and usability testing revealed that the proposed website was more usable than the existing one when participants searched for the most frequently requested information. In general, participants were faster and more successful when completing tasks with the new prototype.

Results of the second usability testing helped identify more links that were confusing and/or misleading.

As for the usability testing of the computer prototype, several problems were identified, including navigation that required too many key presses and too much scrolling. These problems often stemmed from the browsers participants were using.

In the second phase of testing the computer prototype, success rates were higher than in the previous phase due to clearer navigation and terminology, fewer required keystrokes, and a more breadth-based navigation structure.

I thought this article had a lot in common with what our Computer Interaction Design class is doing right now. The authors essentially used an iterative process to clarify and reorganize the information architecture of the university's website. Similarly, our class is taking the content from nanoHUB.org and using card sorting and usability testing to validate our own information architecture. This was a helpful reading for understanding the process we will be going through in class.

I would also like to mention that this article did a good job of putting the IA concepts we learned into context: for example, using breadth rather than depth in navigation structures, and limiting information to one page because users will often 'satisfice' and not bother looking at the next page. Overall this reading was a very good supplement to our current course content, despite being dated. But then again, I guess that shows how some design guidelines tend to be timeless.
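To illustrate why breadth beats depth, here is a rough sketch (the page counts and links-per-page numbers are my own made-up examples, not figures from the article): with a fixed number of target pages, fewer links per page means more levels, and therefore more clicks, to reach any given page.

```python
import math

def clicks_needed(num_pages, links_per_page):
    """Minimum clicks to reach one of num_pages pages when every
    page offers links_per_page choices (a balanced navigation tree)."""
    return math.ceil(math.log(num_pages, links_per_page))

# Reaching one of 1000 pages:
print(clicks_needed(1000, 4))   # deep, narrow menus: 5 clicks
print(clicks_needed(1000, 32))  # broad menus: 2 clicks
```

The broad structure asks more of each page but gets users to their goal in far fewer steps, which matches the article's finding that a breadth-based structure improved success rates.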


Reading Reflection 3

Cooper, A., Reimann, R., & Cronin, D. (2007). Implementation models and mental models. In About face: The essentials of interaction design (pp. 27-40). Indianapolis, IN: Wiley Publishing, Inc.

In Chapter 2, Cooper (2007) distinguishes between an Implementation Model and a Mental Model. For designers there is a third model, the Represented Model: how the designer chooses to represent the working program to the user. Cooper says that "One of the most important goals of the designer should be to make the represented model match the mental model of users as closely as possible" (p. 30). This makes sense given other principles I've learned in design classes. After all, as a designer, you should design for your audience. If the mental model represents the user's vision, then designers should create representations that come closer to that vision.

Cooper also talks about Mechanical-Age and Information-Age objects, where interfaces are sometimes limited by what we know and expect from the past. He suggests that Mechanical-Age representations be improved with Information-Age enhancements. It hadn't really occurred to me, but I can see how designers can easily restrict themselves to what is already known. It seems that Information-Age enhancements require a bit of outside-the-box thinking and innovation.

Fu, W., & Pirolli, P. (2007). SNIF-ACT: A cognitive model of user navigation on the World Wide Web. Human-Computer Interaction, 22, 355-412.

The authors created a model aimed at predicting user navigation on the Web and understanding users' cognitive approach when navigating. They discovered that information scent cues (the relevance of link text to a user's information goal) were a better predictor of navigation than link position. Because their first model was based solely on information scent cues, they created a second one that also took position into account, which ended up being the best predictor of the set.

The greatest concept I took from this was that users tend to “satisfice” when navigating. That is, after a brief time scanning a page, users will choose the link with the greatest information scent from only the select few that have been evaluated, rather than putting extra effort into finding the best link on the whole page.

I had known about satisficing before, but I think this was a good example of the concept, and the authors' findings really put it into perspective.

Nielsen, J. (2005). Jakob Nielsen’s online writings on heuristic evaluation. Retrieved from http://www.useit.com/papers/heuristic.

Nielsen covers several points on usability evaluation, focusing on heuristic evaluation and user testing. Heuristic evaluation has a small set of evaluators (3-5 people recommended) examine an interface against usability principles. User testing involves an observer/experimenter who interprets user actions in terms of usability issues. He suggests using both methods, since some problems can only be identified by one of them.

Nielsen also discusses the severity of usability problems, which he defines as a combination of frequency, impact, and persistence. I didn't know about severity ratings before reading this; he suggests that all problems be found first, and that evaluators then rate each one for severity before the scores are averaged.
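The averaging step is simple enough to sketch. This is a hedged illustration of the aggregation only; the 0-4 scale is the one Nielsen uses, but the example problem names and individual ratings are invented for demonstration.

```python
def mean_severity(ratings):
    """Average independent evaluators' severity ratings for one problem
    (0 = not a problem ... 4 = usability catastrophe)."""
    return sum(ratings) / len(ratings)

# Each evaluator rates every problem found; scores are then averaged
# and problems can be prioritized by mean severity.
problems = {
    "Misleading link label": [3, 4, 3],  # hypothetical ratings from 3 evaluators
    "Excessive scrolling":   [2, 2, 3],
}
for name, ratings in sorted(problems.items(),
                            key=lambda item: mean_severity(item[1]),
                            reverse=True):
    print(f"{name}: {mean_severity(ratings):.2f}")
```

Averaging across several independent evaluators matters because, as Nielsen notes, single-evaluator severity judgments are unreliable on their own.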

The webpage also included an article on technology transfer that looked at the usability of usability methods. Nielsen found that user testing and heuristic evaluation were rated most useful because of the quality of data they generated. Newer methods also tended to be rated lower. Aside from the findings, I was glad to see it mentioned that companies recognize the need for improved usability.

Vorvoreanu, M. (2010). Understanding NSF investments: Heuristic evaluation.

The evaluation was a great example of what Nielsen explained on his page, including usability principles and severity ratings. I personally haven't seen any other heuristic evaluation reports, but this one looked well organized and was easy to follow, especially with the screenshots. I noticed Dr. V went with her signature colors for the layout, too. 😉

Questions I have: Is this report what you show to clients, or was it assembled for our class or a portfolio of some sort? I'm curious how the evaluation is used and what process you go through once the report is completed.