Good/Bad Design 8: Apple Help Menu

I was working in InDesign the other day when I needed to run Spell Check on my work but didn’t know where to find it. Rather than hunting aimlessly through the menu structure, I went to the Help menu to type in my search. I rarely use the Help menu; I usually know what I’m looking for, or I don’t trust the application to give me a straight answer. A reasonable reaction, I think. After all, Cooper says that Help menus are often poorly made and have historically not been very helpful.

But what I found through my search was that the menu not only filtered the results according to my input, but also highlighted and pointed to the menu item I was looking for. I thought it might have been an Adobe feature, but I later discovered it was just my iMac. 😛

[Image: Help Menu]

So from a usability standpoint, the Help menu not only helps users find what they’re looking for, but also shows them where it is by highlighting the item and displaying a blue arrow that moves slightly to catch your attention. Cooper states that Help menus should aid the user in understanding the program, and I’d say this one does a good job of that.

[Image: Help Menu 2]

Breakthrough Idea 1: Artificial Intelligence

Human-computer interaction encompasses and overlaps many fields, one being artificial intelligence. Research in AI is very much HCI-related, but it focuses more on human cognition and problem-solving/decision-making than on design and interfaces. AI isn’t all about futuristic robots, either; AI technologies can save money and help businesses become more efficient by taking over certain tasks, whether that’s data mining, training, or helping an organization make a decision.

I think one great example of this was the Microsoft article we read in class, “Models of Attention in Computing and Communication.” The paper described various applications that use different approaches to decide when, how, and whether a message or notice should be sent to a user. In other words, the application would receive a message and make a decision based on what the user was currently doing and the content/importance of the message. Because we’re talking about a computer understanding attention patterns in people, I would definitely consider this a form of AI.
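Just to make the idea concrete: the paper’s actual systems use much richer probabilistic models, but the core trade-off can be sketched in a few lines of code. Everything here (the state names, costs, and importance scores) is made up by me for illustration; the sketch simply weighs a message’s importance against the cost of interrupting whatever the user is doing:

```python
from dataclasses import dataclass

# Toy costs of interrupting a user in a given activity state.
# These states and numbers are my own invention, not from the paper.
INTERRUPTION_COST = {
    "idle": 0.1,
    "reading": 0.4,
    "presenting": 0.9,
}

@dataclass
class Message:
    sender: str
    body: str
    importance: float  # 0.0 (junk) .. 1.0 (critical)

def should_notify(msg: Message, user_state: str) -> bool:
    """Deliver the message now only if its importance outweighs
    the cost of interrupting the user's current activity."""
    cost = INTERRUPTION_COST.get(user_state, 0.5)  # unknown state: moderate cost
    return msg.importance > cost

# A critical message breaks through even during a presentation;
# a routine one gets held until the user is free.
alert = Message("boss", "Server is down!", importance=0.95)
digest = Message("list", "Weekly newsletter", importance=0.2)

print(should_notify(alert, "presenting"))   # True  -> show now
print(should_notify(digest, "presenting"))  # False -> hold for later
```

A real system would obviously infer the user’s state from sensors, calendars, and activity instead of a hard-coded table, but even this toy version captures the “receive, weigh, then decide” flow the paper describes.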

I don’t really know too much about AI technology, but I would say that it indeed has its ups and downs. I think it would be great if computers could make our lives easier by predicting what we want, but only to an extent. Humans are fickle creatures, and I don’t know if technology will ever reach the point where it could mimic a human being completely and flawlessly (after all, we’re pretty flawed ourselves), but if it could, I don’t know how accepted it would be.

Even with the applications presented in the Microsoft paper, I don’t think I would feel completely comfortable with a computer choosing when and what I should or shouldn’t see. But perhaps that’s a reaction to it being unusual and different from what I’m used to. Change and new technology are always met with some kind of resistance, but I don’t think that should keep us from researching and trying to improve them. Maybe someday robots will walk among us and we won’t think twice.

Check out this video of a robot adjusting to rough terrain. Definitely cool, but the way it moves is almost uncanny because it’s so lifelike, yet obviously not: