Two weeks ago I talked about a really shallow vision of future UI paradigms. Last week, I talked about my own vision, in which UI ceases to exist. Sadly, there is a big gap between these two extremes, and in the next few decades we will be very busy bridging it. UI design is a fascinating field right now, because we are nearing a point where we will need to make some radical decisions about the way we interact with our machines. As our computing devices become smaller and more mobile, and our processing needs increase, the current interaction paradigms will soon turn into productivity bottlenecks. One of the interesting research areas that might bloom into something useful in the near future is augmented reality.
At its simplest, AR lets us superimpose data onto real-world spaces in real time. Right now there is a limited amount we can do with it, because the technology is tied to our phones, and we hardly ever look at the world through them unless we are about to take a picture. To get more use out of it, we would need to start wearing HUD glasses or contact lenses – kind of like the characters in Halting State or Rainbow’s End. And we would only wear those until we figured out how to pipe data directly into the optic nerve, or interface with the brain directly.
A small research team at the University of Tokyo is trying to disentangle augmented reality from mobile devices, goggles, and displays. AR is supposed to deliver data and interfaces onto real-world spaces, so they do exactly that with motion sensors and projectors:
The idea is astonishing in its simplicity. The system promises to deliver a usable user interface onto real-life objects. Every available flat surface can become a multi-touch display. Every household object can become a part of the user interface. In my previous posts we talked about the importance of haptic feedback, but Invoked Computing sidesteps the whole issue. You want a real keyboard? Just grab one. Grab an old Model M, snip the cord, and use it anywhere in the house.
The best part is that we already have cheap, mass-market motion sensors that could be hacked to do this. Granted, the input fidelity is probably not as fine-grained as the Invoked Computing creators would like, but a Kinect could be rigged to capture enough gestures to make this sort of interface usable.
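To give you a feel for how little it would take to get started: a first pass at surface touch detection with a depth camera is basically background subtraction against a captured depth map of the room. Here is a minimal sketch; `read_depth_frame()` is a hypothetical placeholder for whichever Kinect driver you end up using (libfreenect, OpenNI, the official SDK), and the thresholds are guesses, not values from the Invoked Computing paper:

```python
import numpy as np

TOUCH_MM = 15          # fingertip counts as "touching" within 15 mm of the surface
MIN_BLOB_PIXELS = 40   # ignore speckle noise smaller than this

def read_depth_frame():
    """Placeholder: return an HxW array of depth values in millimetres
    from whatever Kinect driver you have hooked up."""
    raise NotImplementedError

def capture_surface(frames=30):
    """Average a few frames of the empty scene to build the reference surface."""
    return np.mean([read_depth_frame().astype(float) for _ in range(frames)], axis=0)

def find_touches(frame, surface):
    """Return a mask of pixels where something hovers within TOUCH_MM of the surface."""
    diff = surface - frame.astype(float)          # positive where something sits above the surface
    touching = (diff > 0) & (diff < TOUCH_MM)
    return touching if touching.sum() >= MIN_BLOB_PIXELS else np.zeros_like(touching)

if __name__ == "__main__":
    surface = capture_surface()
    while True:
        mask = find_touches(read_depth_frame(), surface)
        if mask.any():
            ys, xs = np.nonzero(mask)
            print("touch near pixel", int(xs.mean()), int(ys.mean()))
```

That is obviously a toy – no hand segmentation, no multi-touch tracking – but it shows why a hacked consumer sensor is a plausible starting point rather than a research fantasy.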
Granted, Invoked Computing has one significant problem – it is tied to stationary hardware, and it does not fit our increasingly mobile lifestyle. We are slowly phasing out desktop computers and figuring out how to move hard-core gaming to mobile platforms. So creating a UI that has to rely on servo-driven projectors and ceiling-mounted sensor arrays seems like a step back. Augmented reality delivered via goggles is not as intuitive and does not give you that sort of haptic feedback, but it does not trap you in your room.
So I don’t see it taking off. But I think they do have an interesting idea: incorporating real-world objects into your UI. There is no reason why the sensor array in your HUD goggles could not interpret you picking up a real-world pen as a command to kick-start handwriting recognition software in the background. Or interpret a typing-like motion as a cue to search for a keyboard-like shape under your fingers, and to project one if there is none.
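The dispatch logic behind that could be as dumb as a lookup from recognized objects and gestures to the affordance you invoke. A rough sketch of what I mean – none of these function names come from the Invoked Computing project, they are just stubs standing in for the recognizers and the projector:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Affordance:
    name: str
    action: Callable[[], None]

def start_handwriting_capture():
    print("handwriting recognition running; tracking the pen tip")

def bind_physical_keyboard(obj: str):
    print(f"projecting key labels onto the {obj} and tracking keystrokes")

def project_virtual_keyboard():
    print("no keyboard-shaped object found; projecting one onto the nearest flat surface")

def on_object_picked_up(label: str) -> Optional[Affordance]:
    # Picking up a pen is the cue to start handwriting recognition.
    if label == "pen":
        return Affordance("handwriting", start_handwriting_capture)
    return None

def on_gesture(gesture: str, nearby_objects: list[str]) -> Optional[Affordance]:
    # A typing-like motion means: find something keyboard-shaped, or project one.
    if gesture == "typing_motion":
        keyboardish = next((o for o in nearby_objects if o in ("keyboard", "book", "tray")), None)
        if keyboardish:
            return Affordance("keyboard", lambda: bind_physical_keyboard(keyboardish))
        return Affordance("keyboard", project_virtual_keyboard)
    return None

if __name__ == "__main__":
    for aff in (on_object_picked_up("pen"),
                on_gesture("typing_motion", ["mug", "book"])):
        if aff:
            aff.action()
```

The hard part is the recognizers, not the dispatch – but the point is that the objects themselves become the menu.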
As a side note, I find it highly amusing that the very first demonstrated use of this new UI paradigm is to emulate a banana phone.
And now the obligatory banana phone flash classic.
Sorry, I had to.