From Keyboard-Driven UI to Google Glass: What’s the Next Metaphor?

On the eve of the release of Google Glass we're continuing to hear tech pundits go on about how nobody except wanna-be hipsters is going to wear Google Glass. Just for the record, many of these folks are the same ones who went on record in 2007 saying that no one would want to listen to music on their cell phones. Oops. Then there's all the noise about Apple's unannounced iWatch, with hours of podcast time spent speculating on what the interface will look like and what revolutionary thing Apple will have to do to have another hit on its hands. Ugh. What we're experiencing is the confusion and slight panic of continuing to transition away from the desktop metaphor toward … well, toward something else.

I used Google Glass – Joshua Topolsky/The Verge

The touch revolution that exploded with iOS and then Android broke us free from the constraints of the mouse/keyboard/menu user interface, but what the next metaphor is going to be is not yet completely clear. While everyone focuses on whether we will continue to swipe our devices, wave at them mid-air or talk to them, the real danger is that I'm simply taking my desktop interface and trying to do all that swiping and talking without the benefit of a mouse and keyboard. Clearly that won't really work. Again, what's the metaphor?

Having experimented with "portable computing" since the mid-80s Kaypro/TRS-80 Model 100 days, I observed that the challenges we needed to overcome were based on the technology limitations in visual feedback (monitor sizes and screen resolution) and user input (mouse, keyboard, voice recognition and gesture-based user interfaces). Oh, and then there were the battery/power limitations of running those screens and devices. In those early days of 20-pound transportable computing, that meant a command-line interface, a 10-inch green-screen CRT attached to a full-sized IBM-style keyboard and a large sewing-machine-sized metal case with a really long extension cord plugged into the wall. The metaphor was that your computer was an electronic typewriter: we typed our work into it with a keyboard, then sent our work to a printer to create a "real world" copy. Period. Then Apple's Macintosh, with its bit-mapped CRT screen, mouse and keyboard, stole the desktop metaphor from Xerox PARC. So we extended the desktop metaphor: taking our electronic paper, sticking it in our electronic typewriter, writing the paper and then saving it in an electronic filing cabinet. Interestingly, Microsoft successfully stole the interface/metaphor from Apple and dominated what was then called micro-computing, but then stumbled when it tried to shoehorn the desktop metaphor into its tablet-class devices in the late 1990s. Microsoft's error wasn't just that the devices were underpowered and overly expensive, but that the metaphor didn't work.

Apple's brilliance with the iPhone and then the iPad was not to port its desktop metaphor to the tablet but to remove all of the unnecessary layers between the user and the device. So instead of manipulating a pointing device on a literal table to move the cursor on the screen, why not point directly at the screen with your finger to do the same thing? And instead of always having a menu bar at the top of the screen and a keyboard at the bottom, why not show only those items needed to accomplish the task at hand? It might not seem like much, but composing on an iPad, without a keyboard stuck in between oneself and one's work, is a different experience than using even a MacBook Air. When I first started using computing devices to write in the 1980s, I could only dream of the day when I could take a small 10-inch device to the coffee shop and compose to my heart's content for hours and hours.

So the limitations remain: how big does the screen need to be to be useful to me, and, if I'm doing input, am I using a virtual keyboard and touch or some other method such as a mid-air swipe or voice? When I first saw the Google Glass announcement last year, I wondered if Google had suddenly made Apple's pursuit of retina displays and Samsung's "bigger is better" approach moot. Why compress more pixels onto a screen if you can do the same thing with a virtual heads-up device? This would certainly kill the phablet devices, because who wants to carry around a bigger device when you can do the same thing virtually? Truth be told, Google Glass isn't trying to package a high-resolution desktop screen into its heads-up display, but it certainly points to the potential, if one were able to project such a full interface into the user's eyes instead of using a slab of plastic and circuitry. As for the size limitation based on how small is too small for a usable virtual keyboard, that becomes moot for whoever cracks out-of-the-box voice recognition. Voice dictation built into iOS and Google Voice is gaining in usefulness day by day. Our resistance to wearing Google Glass on our heads and talking to our technology takes us back to the problem of what the next user-interface metaphor is going to be.

PDA – Personal Digital Assistant – When PDAs appeared in the late 1990s, they were often little more than glorified calculators with rudimentary contacts and a calendar thrown in, while the best of them functioned as digital Franklin Day Planners. But perhaps the next real metaphor will be to have your wearable technology function as a virtual personal assistant with whom you talk and interact. This would be generations beyond simple voice recognition (which isn't simple at all), approaching the Intelligent Agents hinted at in the Apple Knowledge Navigator video of the late 1980s. While that video stumbles over its desktop heritage with its trash can icon, printing function and saving files to external storage, what we're really looking at is technology with personality that can interact with humans and understand inexact human speech patterns. Apple's Siri and Google Voice are rudimentary first steps in this direction. We're obviously not there yet, but technology that removes the mechanical layers between us and the tasks we're trying to accomplish points to the direction of the next technological metaphor: the virtual personal assistant.

Pushing the question even further, I don't know that I'm looking forward to a future where we're all talking to our devices. What if we could remove the need to talk to our devices and communicate with them without speaking out loud? The following is an old Geek Brief video in which neural interfaces were explored with a company called Emotiv.

GBTV #411 | GeekBrief.TV – HID

It's difficult to say how many years or decades we may be from this technology being mass-produced, but the implications of breaking down the layers between ourselves and our technology point to huge changes in how we will interact with it. Ray Kurzweil, inventor, futurist and author of The Singularity Is Near, just posted on his blog, "Brain-computer interfaces inch closer to mainstream, raising questions." Instead of surrounding myself with four computer screens, quietly typing on a keyboard at my special sit/stand desk, maybe my future self will sit in some comfortable hideaway with a big smile on my face, eyes closed and arms folded, while I silently compose my thoughts onto a device in my pocket, with no obvious devices hanging about my person beyond my unremarkable-looking reading glasses. That would be amazing.

This next video is the one that inspired my future thinking. It's from a TED talk by John Underkoffler, who was early on the scene exploring gesture-based user interfaces. Back in the day this required several high-quality cameras, special gloves and standing in an exact spot within the array of cameras.

John Underkoffler points to the future of UI – TED Video

It may seem impractical to have run all of these experiments. I mean, why fix the keyboard/mouse interface if it isn't broken? I think these explorations into possible future interfaces point to the fact that what we're trying to accomplish is not "sit at a desk/type at a keyboard" data input. What we're trying to accomplish is making sense of the data that's already there and manipulating it in a way that wouldn't work with keyboard-driven input. Voice control and gesture-based interfaces are intermediary steps. But the point actually isn't the interface; it's what we're trying to accomplish. If it's just raw data input with no higher function, then why move away from the keyboard/mouse/big screen? But because we want to remove the layers and just do the thing, we are heading toward thought-driven interfaces that call upon higher functions beyond calculations and uncurated data.
