Good/Bad Design 10: MAMP

Last night, I wasted four hours trying to figure out how to connect to the MySQL server.

I recently downloaded MAMP, a bundle of Apache, MySQL, and PHP for the Mac. Everything worked fine and dandy; I had green lights for both servers:

MAMP - Green Light

But on the start page, whenever I clicked on “phpMyAdmin”, I got an error saying it couldn’t connect. I then googled for hours; many people have had the same problem, but the solutions I could actually understand didn’t work. After about four hours, I had my roommate look at it. We deleted the program (for the second time) and reinstalled it, but no luck.

Finally, my roommate happened to click on this little number:


And for some reason, it worked. I don’t know why that link overcame a faulty connection to the MySQL server, but it did. It frustrates me that I wasted four hours clicking the main link above it and racking my brain when the solution was just a few pixels below. Well, I certainly feel dumb.

I don’t know much about programming, but it’s certainly poor usability when similar links work in different ways. Also, what was that popular design saying by Krug? Oh right, “Don’t make me think!”

Good/Bad Design 9: AmazonLocal

If you don’t know what AmazonLocal is, the easiest way to describe it would probably be to relate it to services like Groupon or LivingSocial. Basically, you can sign up to get notifications on deals in your area, and “save up to 75% on local restaurants, spas, entertainment, and more.”

I sometimes get these emails, although I’m not sure why because I don’t ever recall signing up for it. I didn’t bother unsubscribing though; I usually just ignore and delete them. What I found interesting was that apparently Amazon noticed! One day I received this in an email:

AmazonLocal Notification

I’m pretty sure my eyebrows rose upon reading this. They’ll stop sending me emails of their own accord? That’s the first time I’ve seen a company do that.

Anyway, perhaps this is a better example of good public relations than of design, but the fact that AmazonLocal realized its emails didn’t interest me and acted accordingly made me want to applaud them a bit. Their attention to my needs and wants definitely improved my experience as a user. Nice.

Good/Bad Design 8: Apple Help Menu

I was working in InDesign the other day when I needed to spell-check my work, yet didn’t know where to find the command. Rather than hunting aimlessly through the menu structure, I went to the Help menu to type in my search. Using the Help menu is something I rarely do; I usually know what I’m looking for, or I don’t trust the application to give me a straight answer. A reasonable reaction, I think. After all, Cooper says that Help menus are often poorly made and historically known to not be very helpful.

But what I found through my search was that the menu not only narrowed its results according to my input, but also highlighted and pointed to the menu item I was looking for. I thought it might have been an Adobe feature, but later I discovered that it was just my iMac. 😛

Help Menu

So from a usability standpoint, the Help menu not only helps users find what they’re looking for, but also shows them where it is, highlighting the item and displaying a blue arrow that moves slightly to catch your attention. Cooper states that Help menus should aid the user in understanding the program, and I would certainly say this one does a good job of that.

Help Menu 2

RAA 5: User-Centered Design and Usability Testing of a Web Site

Corry, M., Frick, T., & Hansen, L. (1997). User-centered design and usability testing of a Web site: An illustrative case study. Educational Technology Research and Development, 45(4), 65-76. doi:10.1007/BF02299683

The authors of this article were given several tasks from administrators at Indiana University. They were to determine how useful the current university website was through needs analysis and usability tests, and then develop a new site that would better meet the information needs of users.

A needs analysis was conducted first. The authors interviewed 35 campus departments to determine their most frequently asked questions. The questions were put onto index cards and sorted by frequency, which revealed over 30 categories. These findings were used to create a first paper prototype.
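As an aside, the frequency-based grouping step the authors describe can be sketched in a few lines of Python. The questions and category labels below are invented for illustration; the article doesn’t list them:

```python
from collections import Counter

# Hypothetical FAQ cards gathered from department interviews, each
# tagged with the category it was sorted into during card sorting.
cards = [
    ("How do I register for classes?", "Registration"),
    ("How do I drop a class?", "Registration"),
    ("When is tuition due?", "Bursar"),
    ("Where can I park on campus?", "Parking"),
]

# Tally category frequency so the most-asked topics can be given
# the most prominent placement in the paper prototype.
category_counts = Counter(category for _, category in cards)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```

The idea is simply that the categories asked about most often should surface first in the prototype’s information architecture.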

Usability testing was then conducted with 21 people, using paper versions of both the original website and the new prototype. Participants could view only one page at a time and were asked to think aloud while they answered 15–20 questions for each website.

A second phase of usability testing was then conducted with 16 participants, focusing only on the newer website. Changes made before this round included renaming links, reducing multipage nodes to single pages, and organizing university departments into a long alphabetized list of links.

Once the usability testing with paper prototypes was completed, the authors conducted another usability test with an online version of the newer website, using 11 participants. You can tell that this article is dated: every participant tested the website in the Lynx, Mosaic, and Netscape browsers.

Lastly, a second testing with the computer prototype was conducted to look at the changes that were made to fix the problems identified in the previous phase.

Main Findings
The first round of paper prototyping and usability testing revealed that the proposed website was more usable than the existing one for finding the most frequently asked information. In general, participants were faster and more successful when completing tasks with the new prototype.

Results of the second usability testing helped identify more links that were confusing and/or misleading.

As for the usability testing of the computer prototype, several problems were identified, including too many key presses and too much scrolling to navigate. These problems often had to do with the browsers being used.

In the second phase of testing the computer prototype, success rates were higher than in the previous phase thanks to clearer navigation and terminology, fewer required keystrokes, and a more breadth-based navigation structure.

I thought this article had a lot in common with what our Computer Interaction Design class is doing right now. The authors essentially used an iterative process to clarify and reorganize the information architecture of the university’s website. Similarly, our class is using card sorting and usability testing to validate our own information architecture. That being said, this was a helpful reading for further understanding the process we will go through in class.

I would also like to mention that this article did well in putting what we learned about IA into context: for example, using breadth rather than depth in navigation structures, and limiting information to one page because users will often ‘satisfice’ and not even bother looking at the next page. Overall, this reading was a very good supplement to our current course content, despite being dated. But then again, I guess that shows how some design guidelines tend to be timeless.
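The breadth-over-depth point can be illustrated with a toy calculation (my own sketch, not from the article): the broader each menu level is, the fewer clicks a user needs to reach any given page.

```python
def clicks_to_reach(n_links: int, branching: int) -> int:
    """Levels (clicks) needed for a menu tree with the given
    branching factor to expose at least n_links leaf pages."""
    clicks, reachable = 0, 1
    while reachable < n_links:
        reachable *= branching
        clicks += 1
    return clicks

# 64 pages behind narrow four-item menus vs. one broad 64-link page.
print(clicks_to_reach(64, branching=4))   # deep, narrow structure
print(clicks_to_reach(64, branching=64))  # shallow, broad structure
```

Of course, real navigation design also has to balance scanability and page length, which is exactly the kind of trade-off the usability testing in the article was probing.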

RAA 4: The Use of Guidelines in Menu Interface Design

Souza, F., & Bevan, N. (1990). The use of guidelines in menu interface design: Evaluation of a draft standard. Proceedings of the IFIP TC13 Third International Conference on Human-Computer Interaction, 435-440.

This article reports that only a few designers religiously follow design guidelines. For this reason, the authors evaluated the extent to which designers are able to use such guidelines, with an eye toward improving the guidelines’ clarity and efficiency.

Developing the guidelines further would improve their accuracy and present the information in a way that makes it more usable. However, refinement doesn’t mean that designers would necessarily use the guidelines in their interface design processes.

Three interface designers were given a set of 87 guidelines and marked any difficulties or terms they found unclear. Later, they were observed in a study in which they identified problems and redesigned an interface on a whiteboard; they were encouraged, but not at first obligated, to use the guidelines, and were asked to think aloud about their reasoning. Afterwards, they were told to revise the new interface by applying all the guidelines one by one.

Main Findings
Ninety-one percent of the guidelines caused errors for at least one designer, yet only 11% of the guidelines were actually violated by the new designs. The authors found that the designers tended to misinterpret the guidelines and to rely mainly on prior design experience. The examples the paper provides show a lack of clarity about the conditions and nature of the guidelines, as well as difficulties with certain terms.

As a designer, I was not surprised by the results. Personally, I often rely on past experience rather than on the guidelines, however clear they are. I also think that guidelines are just that: guidelines. You’re encouraged to follow them, but you should know when they can and should be broken.

The difficulties the designers had were also relatable. For example, reading about a design process for class and applying it later in class are completely different matters. I often find myself not knowing how to apply a process effectively until I’ve experienced it firsthand.

I would say that clarifying guidelines is a good practice and should be done, but this study revealed that following them isn’t strictly necessary to produce an exceptional design.

Good/Bad Design 7: Instructions

I spoke to two of my friends today and asked them what they thought were examples of good or bad design: things they use every day that either frustrate them or make them say, “Huh, that’s convenient.”

Their answers were mostly focused on bad designs as they spouted heated indignations about many things. These ideas strangely ranged from technology in general and how it makes us lazy, to the design of strapless bras. (Don’t ask me how we ended up on that subject because I’m not so sure I even know.)

crib instructions

Regardless, one large tangent we traversed was the poor design of instructions. Paper instructions, specifically. My friends had recently encountered the problem of trying to put together a baby crib and could not understand how it was so difficult to “put together four posts of wood.” They said they felt that the instructions were impossible to follow and had several opinions on the matter:

  • Picture instructions are good, except when the images are so difficult to decipher that you have no clue what you’re looking at.
  • It helps when different angles are portrayed.
  • Some people need textual instructions and therefore it should always be included.
  • It helps when the size of small parts, like screws, in the instructions match the ones in real life.
  • Descriptions need to be simplified and made easier to understand.

All very good points. This is obviously a list of frustrations, which can easily be broken down into usability heuristics:

  • Picture instructions are good, except when the images are so difficult to decipher that you have no clue what you’re looking at.
    Match between system and real world: Create a clear connection between the system (instructions) and what the user understands and knows (or in this case, has to work with).
  • It helps when different angles are portrayed.
    Flexibility and efficiency of use: Showing different angles could reduce the time the task requires.
  • Some people need textual instructions and therefore it should always be included.
    Help: Even when a system is better without documentation, you should still provide that information to help those that need it.
  • It helps when the size of small parts, like screws, in the instructions match the ones in real life.
    Recognition rather than recall: The system should use concepts familiar to the user and make objects, actions, and options visible to minimize the user’s memory load.
  • Descriptions need to be simplified and made easier to understand.
    Match between system and real world: The system should use words and phrases that make sense and are natural to the user.

Not every one of Jakob Nielsen’s usability heuristics is touched on in this list, but I would definitely say the ones above still apply.

Reading Reflection 7

Cooper. (2007). About Face 3.
Chapter 7

In Chapter 7 of his book, Cooper talks about taking the requirements from scenarios and using them to design. The designer needs to decide what form the design will take, how it will be used, the users’ input methods, and which elements and functions to include. This is done by using information from previous stages and applying design principles to create low-fidelity models. It makes sense that detailed designs are to be avoided at this time, and I liked Cooper’s suggestion of using whiteboards to sketch and cameras to capture ideas for reference.

In general, the Framework phase is about defining the tone and types of interactions that will be in the design. The line between what you should focus on and the detail you should not include was different from what I had guessed, but Cooper does a decent job of defining it. I had thought something such as “visual language studies” would be saved for the refinement phase, but if this phase is focusing on the overall tone, then I suppose it would be included.

Sharp, Rogers, & Preece. (2007). Interaction Design.
Chapter 11: Design, prototyping, and construction

Apart from the overall topic, this reading was similar to Cooper’s in various ways. Both discuss speaking with stakeholders about your ideas, understanding the interactions and functions you will include before designing, and considering interfaces that set the tone and suggest possible behaviors. One similarity that really stood out to me was the advantage of using low-fidelity prototypes: they are not only cheap and quick, but they also push the designer to focus more on functions and user goals than on pixels and widget design.

The chapter described low-fidelity prototypes as representations that don’t use any of the actual materials that would be in the final product. This reminded me of the Art and Design course I took last year, where my partner and I built a prototype washing machine from cardboard, Styrofoam, paper, tape, and a yoga ball. It was not at all what we intended the final product to be, but it allowed us to test the dimensions of our design with actual people and target its problems.

Interesting Blog 5: Boxes and Arrows

boxes and arrows

Boxes and Arrows is an online journal of peer-written design articles by contributors who tend to have industry experience. Anyone can suggest a topic, and readers can comment on it or even write an article on what was suggested. That being said, Boxes and Arrows has many pieces worth reading, and there is a lot of information to absorb from this site: articles, stories, case studies, ideas, and more. I especially enjoyed the article “Are your users S.T.U.P.I.D?”, in which the author offers an acronym to help designers consider their audience and design. Pretty creative, if you ask me.

boxes and arrows homepage

I have yet to fully explore this blog, seeing as there is so much to read, but thankfully it is well organized. If you have any questions about design, from graphic design to information design, this looks like the site to go to!

Interesting Blog 4: Usability Blog


Usability Blog is written by Paul Sherman, founder of the user experience consulting firm ShermanUX. Sherman has been in the usability industry for the past 12 years and fills his blog with numerous posts of good and bad design examples. They include snapshots of various websites, objects, infographics, and more, along with a brief blurb giving his opinion.

I suggest my classmates take a look at this blog, not only for design tips but also for ideas for Good/Bad Design examples to post. He touches on a few things I never really thought about, like repetitive “My”s in a menu or physical obstructions to an interface. Maybe something mentioned in Sherman’s blog will remind you of another site that fails or succeeds at the same thing.

On another note, be sure to look at Sherman’s explanation of severity ratings. I think it applies to us all very well, since we have a few more usability reports coming up.

RAA 3: The effects of cognitive ageing on use of complex interfaces.

Reddy, G., Blackler, A., Mahar, D., & Popovic, V. (2010). The effects of cognitive ageing on use of complex interfaces. Proceedings of the 22nd Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction, 180-183. doi:10.1145/1952222.1952259

Many older adults find it difficult to use modern products because of their functions and interface design. Past research shows that the decline in cognitive functioning as you age affects your speed and accuracy when using complex technological products, but another study suggests that the ability to use a product effectively is not generation-specific.

Therefore, the authors of this paper wanted to look more deeply at how cognitive aging and prior technology experience affect the use of such complex interfaces.

Thirty-seven subjects between the ages of 18 and 83 participated in the experiment. First, they were given an information package, a consent form, an eye acuity test, and a questionnaire on their technology experience. They then completed trials involving tasks with a body fat analyzer, during which task time, errors, body language, and facial expressions were recorded. Afterwards, the subjects were given a post-test on their technology experience and used an application to measure “different aspects of cognitive function” (p. 181).

Main Findings
The researchers found a significant negative correlation between experience with technology and task time: younger people were more likely to be tech-savvy and to take less time.

There were also strong negative correlations between sustained attention and both task time and errors; older adults were less able to sustain attention.

Lastly, older adults tended to make more errors than the younger participants and took longer to recover from them. The authors state that this opposes the Processing-speed Theory proposed by Salthouse (1996), which suggests there would be minimal differences between older and younger performance given an unlimited amount of time. The experiment had no time restrictions, and the findings therefore contradict this theory.

Again, because its subject is older adults and technology, this article relates to my thesis. I thought the contradiction of the Processing-speed Theory was an interesting finding, and that the experiment laid good groundwork on how complex systems are more difficult for older adults because of cognitive aging. I was also surprised that some of the older subjects (aged 60-83) had decent experience with technology, although the graph the authors supplied made it obvious that their task times were still higher than those of the younger participants. Maybe the older participants just felt the need to take their time, and the younger ones just wanted to get out of there? A possible threat to internal validity, I suppose.