Breakthrough Idea 2: LED Contacts

Whoa. Just, whoa.

An old high school classmate of mine recently posted this surprising link on Facebook: LED Lights Make Augmented Vision a Reality. The article dates back to January, so I may be behind the times, but this is the first I’ve heard of any breakthrough like this.

My first reaction: It’s like Minority Report! Soon we’ll be watching TV with contact lenses and later we’ll be interacting with interfaces with just our eyeballs. Nifty.

The article mentions augmented reality, but aside from that, just think of the implications this will have in HCI. Interfaces floating in front of our eyes… The design of such interfaces will likely change as the way we use them changes. No longer inhibited by monitors and their constraints, designers would have to accommodate this. However, I imagine design principles will remain the same since they are based on human perception, though they would likely be applied differently.

The only concern I have is that floating interactive interfaces, such as the ones in Minority Report, seem to require more effort to use. Grabbing a window physically by extending your arm seems to ask a lot of the user. Perhaps this won’t be the case, though, and the technology will only require minuscule movements to function – a twitch of a finger, perhaps.


Breakthrough Idea 1: Artificial Intelligence

Human–computer interaction encompasses and overlaps many fields, one being artificial intelligence. Research in AI is very much HCI-based, but it focuses more on human cognition and problem-solving / decision-making skills than it does on design and interfaces. AI isn’t all about futuristic robots either; AI technologies can save money and help businesses become more efficient by doing certain tasks, whether that’s data mining, training, or helping an organization make a decision.

I think one great example of this was the Microsoft article we had to read in class about Models of Attention in Computing and Communication. The paper mentioned various applications that use different approaches to decide when, how, and whether a message or notice should be sent to a user. In other words, the application would receive a message and make a decision based on what the user was currently doing and the content/importance of the message. Because we’re talking about a computer understanding attention patterns in people, I would definitely consider this to be a form of AI.
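If I had to sketch that decision in code, it might look something like the toy example below. To be clear, this is just my own illustration of the general idea (weigh a message’s importance against the cost of interrupting the user), not how the actual Microsoft systems work – all the names and numbers here are made up.

```python
# Toy sketch of attention-aware notification filtering:
# deliver a message only when its importance outweighs the
# estimated cost of interrupting the user's current activity.
# All function names, activities, and thresholds are illustrative.

def should_notify(importance: float, interruption_cost: float) -> bool:
    """Deliver only if the message is worth more than the interruption."""
    return importance > interruption_cost

# Rough (made-up) interruption costs for a few user activities.
ACTIVITY_COST = {
    "presenting": 0.9,  # almost never interrupt
    "coding": 0.6,
    "browsing": 0.3,
    "idle": 0.1,        # almost always fine to interrupt
}

def handle_message(importance: float, activity: str) -> str:
    cost = ACTIVITY_COST.get(activity, 0.5)  # default for unknown activities
    return "deliver now" if should_notify(importance, cost) else "defer"

print(handle_message(0.8, "coding"))      # important enough -> deliver now
print(handle_message(0.2, "presenting"))  # cheap message, busy user -> defer
```

A real system would obviously need to *infer* the user’s activity and the message’s importance rather than being handed them, which is where the AI comes in.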

I don’t really know too much about AI technology, but I would say that it indeed has its ups and downs. I think it would be great if computers could make our lives easier by predicting what we want, but only to an extent. Humans are fickle creatures, and I don’t know if technology in the future will ever reach the point where it could mimic a human being completely and flawlessly (after all, we’re pretty flawed ourselves), but if it could, I don’t know how accepted it would be.

Even with the applications presented in the Microsoft paper, I don’t think I would feel completely comfortable with a computer choosing when and what I should or shouldn’t see. But perhaps that’s just a reaction to it being unusual and different from what I’m used to. New technology is always met with some kind of resistance, but I don’t think that should keep us from researching and trying to improve it. Maybe someday robots will walk among us and we won’t think twice.

Check out this video of a robot adjusting to rough terrain. Definitely cool, but the way it moves is almost uncanny because it’s so life-like, yet obviously not: