The Wearable Computing Conundrum

Wearable computing seems to be the buzzword of the year, with everyone and their dog contemplating the next watch, eyewear, sensor, or implant to come from Apple, Samsung, Google, or Microsoft. It's an intriguing notion, having a computer as part of your attire, no longer weighing down your pocket. But how would it work, really?

When Google Glass was released, the tech industry pundits went wild. Here was the "next big thing": Google had beaten Apple to the punch, and everyone would want Glass on their faces within a few years. But since then, early adopters have started to realize something: wearing glasses all the time isn't fun. Not only that, but Glass can be distracting, particularly when the wearer is overloaded with information and their attention is focused on Glass instead of the world around them.

For example, let's say Glass prompts you to view a video or social media update while you're waiting for your food and your wife/girlfriend/significant other is trying to talk to you. Suppose instead you are in a meeting? In class? Teaching a class? Given how pervasive mobile computing already is, and its impact on attention, I doubt people will be any more responsible with wearable computing like Glass. Don't believe me? How many accidents happen because someone is texting while driving? Shouldn't they know better? Don't they still do it?

Don Norman had a great article in Technology Review about wearable computing and the dangers of poor design. He points out the flaws of doing something just because it's possible, without exploring all the ramifications beforehand. For instance, Glass in the hands of developers put full email on the heads-up display, immediately pulling attention in that direction. The Glass engineers originally wanted to limit email to titles and small snippets, thereby limiting the distraction. Developers, on the other hand, didn't see the problem, and full email made its way in.

But he also points out the potential benefits, such as an augmented-reality view where notes were displayed at roughly the same focal distance as the lecturer's blackboard, helping students stay focused on the lecture while taking notes. The heads-up display had only a minor impact on the experience, augmenting it instead of taking it over.

So how do I see wearable computing being used? I see three roles for it: sensors, heads-up displays, and discreet user input.

First, discreet input. There has to be a way to create a user interface that provides access when you want it and is easy to use. But the question this raises is: what would it look like? It would have to be something you can wear, something you can put away and pull out when you need it. Ideas I can see are:
1. Keyboard tie: The tie could be a keyboard, and even have tactile feedback if need be. The problem is, fingerprints on the tie would be obvious, and it would encourage fiddling with a tie. That, and many in the tech industry are very anti-tie.
2. Keyboard in your Pocket: Laughable, as it would be very awkward in any company.
3. Trackable lights/reflectors/sensors on the fingers: What if you lose them when eating your bucket of chicken?
4. Tracking finger movements: Possible, but how do you track them, and how much computing power do you need to translate finger movements into typing?
5. Light Pen, HUD, and Handwriting Recognition: This I think is perhaps the most likely, as the technology is mostly already here. A light pen would track your movements as handwriting, and so that you know what you are writing, it would be displayed on your heads-up display (HUD). Again, you would need the combined computing power and a HUD to make this work.
6. Something Else: Obviously there are lots of other ideas, many of which I could never think of on my own. I applaud those who are trying to work out other methods of input, and look forward to seeing their ideas.
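To make idea 4 concrete, here is a minimal sketch of one way finger movements could become typing: map each tracked fingertip "tap" to the nearest key on a virtual keyboard layout. Everything here is a made-up illustration; the coordinates, the layout, and the idea that some tracker hands us (x, y) positions are all assumptions, not any real product's API.

```python
# Hypothetical sketch: translating tracked fingertip positions into
# keystrokes via nearest-key lookup on a virtual keyboard layout.
# The layout, coordinates, and tracker input are assumed for illustration.
import math

# A toy virtual layout: key -> (x, y) center, in arbitrary tracker units.
KEY_CENTERS = {
    "q": (0.0, 0.0), "w": (1.0, 0.0), "e": (2.0, 0.0),
    "a": (0.3, 1.0), "s": (1.3, 1.0), "d": (2.3, 1.0),
}

def nearest_key(x, y, max_dist=0.75):
    """Return the key whose center is closest to the fingertip,
    or None if the tap lands too far from every key."""
    best_key, best_dist = None, float("inf")
    for key, (kx, ky) in KEY_CENTERS.items():
        dist = math.hypot(x - kx, y - ky)
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist <= max_dist else None

def decode_taps(taps):
    """Turn a sequence of (x, y) fingertip taps into text,
    dropping taps that don't land near any key."""
    return "".join(k for k in (nearest_key(x, y) for x, y in taps) if k)

# A slightly noisy tap sequence that should decode to "sad".
print(decode_taps([(1.2, 1.1), (0.4, 0.9), (2.2, 1.05)]))  # sad
```

This is where the computing-power question bites: the lookup itself is trivial, but reliably producing those (x, y) taps from camera or sensor data in real time is the hard part.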

Heads-up displays are problematic. The HUD is your window into the computing power: your display, your source of information, both distracting and augmenting. How do you implement it? It would have to be in your field of vision somehow, and Google bet on a single screen slightly out of your line of sight. Here are my ideas:
1. Glasses: Teleprompters use glass that shows text on one side and nothing on the other, much like the heads-up displays we see in futuristic war movies, tracking enemy fighters in front of you. For this to work, the display would need to augment what you see, not throw up irrelevant information that distracts you from the task at hand. Very difficult, as often we want to be distracted (Facebook at work, anyone?).
2. Integrated displays: We sit in front of many different screens and potential displays all over the place: computer screens at work, the television at home, even windows and windshields. This would require something like AirPlay or Chromecast as an interface, would need to know when you want to see the display, and would preferably overlay what you are currently seeing to augment the task at hand. It wouldn't demand as much focus as glasses: because you have to sit in front of a screen to use it, you are already dedicating your attention to it. For on the go, instead of carrying a heavy phone, you could carry a small screen that is just a display.
3. Contact lenses: This would be very difficult to do with our current technology, but it would solve the problem of needing glasses for your display.

Sensors, to me, could mean a lot of things: pulse and blood-pressure sensors in your watch, heart monitors in your shirt, temperature sensors in your coat, and so on. It could also mean heat-transfer technology in your clothes that powers your wearable computers, or electronic fabric that builds circuit boards into the clothing itself to take care of processing. Antennas could be distributed throughout your clothing, improving reception.
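As a small illustration of what the software side of such sensors might look like, here is a sketch of smoothing a stream of pulse readings with a rolling average and flagging a sustained spike, the kind of processing a watch-based heart monitor might do. The class name, window size, sample values, and alert threshold are all invented for the example.

```python
# Hypothetical sketch: smoothing pulse-sensor readings with a rolling
# average, roughly what a watch-based heart monitor might do.
# All names, values, and thresholds here are made up for illustration.
from collections import deque

class PulseMonitor:
    def __init__(self, window=5, alert_bpm=120):
        self.samples = deque(maxlen=window)  # keep only the last N readings
        self.alert_bpm = alert_bpm

    def add_sample(self, bpm):
        """Record a reading and return the smoothed (windowed-average) rate."""
        self.samples.append(bpm)
        return sum(self.samples) / len(self.samples)

    def alert(self):
        """True once the smoothed rate exceeds the alert threshold."""
        if not self.samples:
            return False
        return (sum(self.samples) / len(self.samples)) > self.alert_bpm

monitor = PulseMonitor(window=3, alert_bpm=120)
for reading in [70, 72, 71, 140, 150, 145]:
    smoothed = monitor.add_sample(reading)
print(round(smoothed))  # average of the last 3 readings: 145
```

Averaging over a window rather than reacting to single readings is the point: a lone noisy sample from a shirt or watch sensor shouldn't trigger an alarm.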

The problem is, while this technology exists, it's nowhere near where it needs to be to work as wearable computing. Not only that, but we don't have enough research to know whether having Wi-Fi antennas, cellular antennas, and the like constantly on and enveloping us would even work, let alone be safe.

Wearable computing is very much in its infancy, and there are so many ideas and directions it could go. Are we ready to declare who got it right and who didn't? Remember that Microsoft released the Tablet PC first, and it failed to gain traction; it was Apple's later iPad that succeeded. Early releases and proofs of concept are often not the next big thing, but the thing that leads to a refined next big thing.