Invoked Computing: augmented reality with bananas

Two weeks ago I talked about a really shallow vision of future UI paradigms. Last week, I talked about my own vision in which UI ceases to exist. Sadly, there is a big gap between these two extremes, and in the next few decades we will be really busy bridging it. UI design is a fascinating field right now, because we are nearing a point in time when we will need to make some radical decisions with regard to the way we interact with our machines. As our computing devices become smaller and more mobile, and our processing needs increase, the current interaction paradigms will soon turn into productivity bottlenecks. One of the interesting research areas that might bloom into something useful in the near future is augmented reality.

At its simplest, AR lets us superimpose data onto real-world spaces in real time. Right now there is a limited number of things we can do with it, because the technology is tied to our phones, and we hardly ever look at the world through them unless we are about to take a picture. To get more use out of it, we would need to start wearing HUD glasses or contact lenses – kinda like the characters of Halting State or Rainbow’s End. And we would only wear those until we figured out how to pipe data directly into the optic nerves, or interface with the brain directly.

A small research team at the University of Tokyo is trying to disentangle augmented reality from mobile devices, goggles and displays. AR is supposed to deliver data and interfaces onto real-world spaces, so they do exactly that with motion sensors and projectors:

The idea is astonishing in its simplicity. The system promises to deliver a usable user interface onto real-life objects. Every available flat surface can become a multi-touch display. Every household object can become a part of the user interface. In my previous posts we talked about the importance of haptic feedback, but Invoked Computing sidesteps the whole issue. You want a real keyboard? Just grab one. Grab an old Model M, snip the cord and use it anywhere in the house.

Best part is that we already have cheap, mass-market motion sensors that could be hacked to do this. Granted, the input fidelity is probably not as fine-grained as the Invoked Computing creators would like, but a Kinect could be rigged to capture enough gestures to make this sort of interface usable.
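Just to illustrate how low the barrier to entry is, here is a toy sketch using the libfreenect (OpenKinect) Python bindings. The sync_get_depth call is the real API; everything else is my own naive stand-in for actual gesture recognition – it simply treats anything hovering just above a pre-captured background surface as a touch candidate:

import numpy as np
import freenect  # Python bindings for OpenKinect's libfreenect

def capture_background():
    # Grab a depth frame of the empty tabletop to compare against later.
    depth, _ = freenect.sync_get_depth()
    return depth.astype(np.int32)

def detect_touches(background, near=5, far=40):
    # Raw Kinect depth values are 11-bit sensor units, not millimeters,
    # so these thresholds are eyeballed, not calibrated.
    depth, _ = freenect.sync_get_depth()
    diff = background - depth.astype(np.int32)
    # Pixels slightly closer to the sensor than the bare surface are
    # probably fingertips resting on (or hovering just above) it.
    touching = (diff > near) & (diff < far)
    return np.argwhere(touching)

if __name__ == "__main__":
    bg = capture_background()
    while True:
        points = detect_touches(bg)
        if len(points):
            print("touch candidates:", points[:5])

Obviously real multi-touch would need blob tracking, de-noising and calibration on top of this, but the raw sensor data is sitting right there.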

Granted, Invoked Computing has one significant problem – it is tied to stationary hardware, and it does not fit our increasingly mobile lifestyle. We are slowly phasing out desktop computers, and we are figuring out how to move hard-core gaming to mobile platforms. So creating a UI that has to rely on servo-driven projectors and ceiling-mounted sensor arrays seems like a step back. Augmented reality delivered via goggles is not as intuitive and does not give you that sort of haptic feedback, but it does not trap you in your room.

So I don’t see it taking off. But I think they do have an interesting idea. Incorporating real-world objects as part of your UI is a good idea. There is no reason why the sensor array in your HUD goggles could not identify you picking up a real-world pen as a command to kick-start the handwriting recognition software in the background. Or interpret a typing-like motion as a cue to search for a keyboard-like shape under your fingers, and, if there is none, to project one.
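The dispatch logic for that would be something like the sketch below – and to be clear, recognize_object, recognize_gesture and the handwriting/projector back-ends are all made-up names standing in for computer vision heavy lifting that does not exist yet:

def on_frame(frame, system):
    # Hypothetical recognizers -- the hard part, waved away here.
    obj = system.recognize_object(frame)        # e.g. "pen", "keyboard"
    gesture = system.recognize_gesture(frame)   # e.g. "writing", "typing"

    if obj == "pen" and gesture == "writing":
        # Picking up a real pen kick-starts handwriting recognition.
        system.start_handwriting_recognition()
    elif gesture == "typing":
        # Typing motion: look for a keyboard-like shape under the hands...
        if system.find_shape_under_hands(frame, "keyboard") is None:
            # ...and if there is none, project a virtual one there.
            system.project_virtual_keyboard(system.hand_position(frame))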

As a side note, I find it highly amusing that the very first demonstrated use of this new UI paradigm is to emulate a banana-phone.

And now the obligatory banana phone flash classic.

Sorry, I had to.




5 Responses to Invoked Computing: augmented reality with bananas

  1. ” UI design is a fascinating field right now, because we are nearing a point in time when we will need to make some radical decisions with regards to the way we interact with out machines.”

    Did you mean with our or without? >.> I am hoping “with our”, unless you know something I don’t. >.>

    If you have ever read NetForce, that’s kinda how I view our future of design: we will eventually move out of this dangerous physical world… and into a second life (sorry) which can be very much based on our thought process.

    I *wish* our UI would go back to something command-line like.

  2. Jereme Kramer says:

    Something much closer to the interface you envisioned is the developing field of brain-computer interfaces. I had the opportunity to work in the Leuthardt lab mentioned in the partially-invasive section, although my project was completely unrelated to BCIs. Even if implanted ECoG grids aren’t your taste, I witnessed some highly functional hat-like devices that, with some calibration, allowed lab members to play a somewhat complicated skiing game. Something interesting to check out is this page where the author hacked a sensor to fire a Nerf gun based on his thoughts.

    Dr. Leuthardt is generally a really cool guy. Due to his work I’ve heard several people refer to him as the world authority on mind reading. If I remember correctly, he’s in 5th place for the number of patents held in the US. He recently finished a sci-fi book due out next year called RedDevil_4, which promises to be a great read. You may be interested in his blog Brains and Machines.

  3. Luke Maciak says:

    @ Travis McCrea:

    Yes, it was supposed to be “with our” :P

    Also, what do you mean you wish? Command line UI is doing perfectly fine on the server side right now, and it is not going anywhere for a while. While the consumer-facing UI paradigms are evolving and in constant flux, the state of the art for system administration has been the same for a while now. Mostly because it just works. CLI is clean, lean and fast. It has little to no memory footprint, it lends itself to scripting, and it demands at least borderline competence from the user.

  4. Luke Maciak says:

    @ Jereme Kramer:

    Wow, I haven’t really been paying attention to the brain wave interfaces, but it seems that we are further along than I thought. This is fascinating stuff.

    Also, Leuthardt seems like an awesome guy! :)

  5. I think most things would be better if we abandoned the GUI (except for things like games). I mean web browsing:
    >Google “What is Love?”

    0. Top Result
    1. Next Result
    2. Next Result

    >Terminally-Incoherent.com
    —Terminally Incoherent—-
    —site tag here——————
    Latest Entry

    >Travelocity YVR > SEA
    ….
    1. $200 Alaska Airlines
    2. $400 Delta
    3. $300 Air Canada
    >1
    How would you like to pay?
    …..

    I mean, the internet would be so much faster, there would be fewer risks… you would still be able to get all the content you want to get. Sure, this would never work today, but it would be good to start pushing it on our children so that it doesn’t seem so weird to them.
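    Even as a toy, the loop is trivial to mock up – something like this sketch, where search() is just a canned stand-in for whatever real search API you would wire in:

    def search(query):
        # Canned stand-in; a real client would query an actual search API.
        return ["Top Result", "Next Result", "Next Result"]

    def main():
        while True:
            command = input("> ").strip()
            if command.lower().startswith("google "):
                for i, title in enumerate(search(command[7:])):
                    print(f"{i}. {title}")
            elif command.lower() == "quit":
                break
            else:
                print("unknown command")

    if __name__ == "__main__":
        main()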

