Last week I talked a bit about the future of UI design. I criticized Microsoft for the lack of vision, but I never really tried to counter it with my own. Alas, I am not a UI designer. I do not ride the bleeding edge of technology, and I don’t get paid for imagining the interfaces of the future. I’m just a devilishly handsome and humble back-end programmer cum sysadmin. But if you asked me what my vision of the future user interface was, I would probably say: none. By that I mean there would be no discernible interface to talk about. The whole point of UI design is to make technology intuitive and easy to use. So the ultimate UI would simply be almost imperceptible to the user. Let me give you an example.
Most mortals have a very specific vision of “the future” as a place where we basically just have conversations with our computers. A common future tech scenario pushed by many UI imagineers looks a little bit like this.
The Man of the Future, let’s call him Bob, comes back from work. His house intelligently detects his presence, turns on the lights, and greets him with a pleasant female voice. Our hero makes a beeline for the fridge to fix himself an evening snack while chatting conversationally with the house AI.
“Anything interesting in the news?” he asks while pulling milk out of the fridge. The friendly female voice gives him a brief summary. The fridge door temporarily becomes a screen displaying an array of articles and video clips. Bob lazily slides his hand over the display, scrolling through the data, and then waves it away. The fridge door becomes a door again – complete with digital sticky notes and a holographic garbage pickup schedule.
“How are my appointments for tomorrow?” he queries as he starts to assemble a sandwich. The house dutifully reminds him about a big meeting he has in the city at 8 am sharp. Bob nods.
Instantly a holographic display pops up in his field of vision, showing him a map with the best routes. He grabs his sandwich, inspects the map for a few seconds, then waves it away.
“Load it into my GPS,” he orders while carrying his sandwich into the living room. As he sits down on the couch, he politely asks the house to put on the TV. The far wall dissolves into a huge panoramic 3D display. Bob finishes his sandwich watching some random movie or TV show.
“Call my wife.”
A few seconds later the face of his significant other appears as an insert in the corner of the huge TV display.
Sounds vaguely familiar, right? You have probably seen some version of this vision somewhere before – perhaps in an SF movie or novel. But it is a short-sighted crock of shit. It is a pedestrian vision of the future which assumes that voice commands and gestures are the “most natural” and intuitive interface methods. It’s almost like having no UI at all. But alas, it is still there. Bob still has to order his house around and explicitly state his commands. In fact, using voice commands as input is painfully slow, and it has no “speed dial” feature. If I had a talking house, as a power user I would train it to let me trigger specific tasks with subtle gestures – snapping my fingers, pointing, or clicking my tongue – because all this talking to a semi-Turing-grade machine would totally annoy me. So in effect, I would be circumventing the very UI that was supposed to be imperceptible.
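If I were to mock up that gesture “speed dial” in today’s terms, it might look something like the sketch below. Everything in it – the HouseAI class, the gesture names, the bound tasks – is invented purely for illustration:

```python
# Hypothetical sketch: a "speed dial" layer mapping subtle gestures
# to pre-trained house tasks, bypassing slow spoken commands.

class HouseAI:
    """Stand-in for the talking house; only the task runner matters here."""
    def run(self, task: str) -> None:
        print(f"house: executing '{task}'")

# Gestures the power user has trained, each bound to a specific task.
GESTURE_MACROS = {
    "finger_snap": "read me the news summary",
    "tongue_click": "load the route into my GPS",
    "point_at_wall": "turn the far wall into a TV",
}

def on_gesture(house: HouseAI, gesture: str) -> None:
    """Fire the bound task the instant a gesture is recognized."""
    task = GESTURE_MACROS.get(gesture)
    if task is None:
        return  # unknown gesture: fall back to talking, alas
    house.run(task)

on_gesture(HouseAI(), "finger_snap")
```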
Here is my alternate vision of the future, in which the UI layer simply goes away. There are no screens, no overlays, no HUDs, no swiping, no gestures, and no voice commands. The UI is made redundant. Granted, this comes from the transhumanist, progressive worldview of mine that has been fueled by a steady diet of science fiction. But hey – this is my vision.
In my version of the far future, Bob does not need to ask the house for the news. He simply becomes aware of it. His augmented mind reaches out to the internet and grabs news feeds from his favorite sources. They are not textual or video streams, though. Bob is no longer forced to consume his media via the five analog senses humans were born with. He gets his news modeled as complex data packages that are side-loaded directly into his exo-memory. He does not need to read or view them like we low-fi humans do – once they are downloaded and cached, he recalls them just like any other memory – only more vivid and detailed. He can consume gigabytes of data in mere seconds. And since most of Bob’s private memory exists in the cloud, he does not really have to worry about storage.
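In today’s terms, this side-loading is really just a cloud-backed cache, with recall as cheap as any other lookup. Here is a toy model – every name in it is, of course, made up:

```python
# Toy model of "side-loading": a local cache backed by cloud storage,
# so recall costs no more than any ordinary memory lookup.

class ExoMemory:
    def __init__(self, cloud: dict):
        self.cloud = cloud  # stand-in for Bob's cloud-resident memory
        self.cache = {}     # locally cached, instantly recallable

    def side_load(self, key: str) -> None:
        """Download a data package and cache it for instant recall."""
        self.cache[key] = self.cloud[key]

    def recall(self, key: str):
        """Recalled like any other memory -- no reading, no viewing."""
        if key not in self.cache:
            self.side_load(key)  # cache miss: fetch from the cloud
        return self.cache[key]

news_feeds = {"morning_digest": "<gigabytes of modeled world events>"}
bob = ExoMemory(cloud=news_feeds)
print(bob.recall("morning_digest"))
```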
The new Bob does not need to check his appointments. He knows about them. As the time nears, the appointment becomes a focus point within his awareness. The neural wetware he was born with can sometimes be fuzzy and forget important things – especially when it gets preoccupied with personal hobbies, obsessions, and the like. However, the digital part of his mind never forgets. So the appointment is there – impossible to overlook or forget, unless he wills it to be forgotten. It is not an overlay, not a blinking icon. It is just the awareness of what, when, and where.
Similarly, Bob does not need a watch. His exo-cortex has a built-in time sense. Bob simply knows what time it is – always. Just like birds can always tell where north is, Bob can tell the time. When he travels, his time sense automatically adjusts to the correct time zone. His circadian rhythms may lag behind, but the digital part of his consciousness constantly syncs up with time servers. After all, time keeping is such an integral part of our lives. We surround ourselves with clocks now, so it would be foolish not to build them into ourselves.
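The funny thing is that this particular bit needs no future tech at all – syncing with time servers and adjusting for time zones is exactly what NTP and a timezone database do today. Here is a minimal version of Bob’s time sense; the server and the timezone are placeholders you would swap for your own:

```python
# Bob's "time sense" in today's terms: ask a time server for the
# current moment, then render it in whatever zone he happens to be in.
import socket
import struct
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

NTP_EPOCH_OFFSET = 2208988800  # seconds between NTP epoch (1900) and Unix epoch (1970)

def ntp_time(server: str = "pool.ntp.org") -> datetime:
    """Minimal SNTP query: returns the server's transmit time in UTC."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, version=3, mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    seconds = struct.unpack("!I", data[40:44])[0]  # transmit timestamp field
    return datetime.fromtimestamp(seconds - NTP_EPOCH_OFFSET, tz=timezone.utc)

def time_sense(tz_name: str = "America/New_York") -> datetime:
    """Synced time, auto-adjusted to wherever Bob currently is."""
    return ntp_time().astimezone(ZoneInfo(tz_name))

print(time_sense())
```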
Directions would be pointless for the neo-Bob, as he can track the route in real time. Imagine Google Maps, but modeled just like the newscasts I mentioned earlier: data not made to be consumed via the sense of sight, but destined for direct upload into the transhuman mind. Bob not only knows the road, he also knows the shortcuts, and he becomes aware of traffic jams while en route. All without distractions such as overlays that could affect his driving.
Finally, he does not need to call his wife. He simply reaches out with his mind, and the connection is established between them. She becomes aware of his desire to communicate, and can accept or deny it by simply willing it to happen. When they do establish a link, it is not a voice or video chat, but a direct point-to-point conversation facilitated by their exo-cortices. They can subvocalize if they want to, but they can also share images, smells, emotions, or entire libraries of ideas. Their minds may touch this way for only a brief second or two, but the amount of data exchanged in the process may amount to a long, heartfelt face-to-face conversation.
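Strip away the telepathy, and what remains is an ordinary call-setup handshake: signal intent, wait for the other side to will it open, then exchange arbitrary payloads. A hypothetical sketch, with every class and state invented for illustration:

```python
# Hypothetical mind-link modeled as a call-setup handshake:
# the caller signals intent; the callee wills accept or deny.
from dataclasses import dataclass, field
from enum import Enum, auto

class LinkState(Enum):
    REQUESTED = auto()
    ACCEPTED = auto()
    DENIED = auto()

@dataclass
class MindLink:
    caller: str
    callee: str
    state: LinkState = LinkState.REQUESTED
    exchanged: list = field(default_factory=list)

    def accept(self) -> None:
        self.state = LinkState.ACCEPTED  # willed open by the callee

    def deny(self) -> None:
        self.state = LinkState.DENIED

    def share(self, payload: dict) -> None:
        """Payloads can be subvocalized words, images, smells, emotions..."""
        if self.state is not LinkState.ACCEPTED:
            raise RuntimeError("the other mind has not willed the link open")
        self.exchanged.append(payload)

link = MindLink(caller="Bob", callee="Bob's wife")
link.accept()
link.share({"kind": "emotion", "data": "a heartfelt conversation's worth"})
```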
Artificial intelligence plays a large part in all this, but it stays hidden. It works behind the scenes to facilitate, catalog, and route information. But the goal is seamless integration of the human mind with the data-rich environment it creates. It is about data absorption and rapid exchange, without being slowed down by clunky interfaces operated via analog appendages better suited to biological tasks such as preparing and consuming food, or holding a loved one’s hand for comfort.
Granted, all of this is strictly science fiction right now. Whether or not any of it is possible is beside the point. The point is to have a vision of the future that goes beyond what we have now. The Microsoft showcase video I linked to last week is an example of a limited, short-sighted vision that aims to solve all our problems with “pictures under glass” interfaces only slightly more advanced than the ones we have right now. My vision goes the opposite way – far above and beyond, to the edges of imagination. It is my attempt at creating a polar opposite to that video and to the common notion that a talking AI is the be-all and end-all of UI. The ultimate UI is no UI at all – it is a seamless meld of man and machine.