Future of UI Design

Let me start this off by sharing a cute “futuristic” video about a possible future of mobile technology. Please keep in mind that this was created by folks at Microsoft, so you won’t actually see any innovative ideas or earth-shattering paradigm shifts in there. Microsoft basically created a vision of the future which is safe – one which it understands.

Let me ask you a question: did anything in that video impress you? Did anything in there make you go “wow, this is cool”? I personally was mostly shaking my head. The only neat thing in the whole video was the way separate devices paired up and exchanged data without any hassle. But it worked that way because it was a proof-of-concept demo. In reality this sort of thing is a bit of a pipe dream – primarily because different devices from different vendors simply do not interface like that. I don’t see that changing anytime soon, so this grand unified vision of swiping data from your phone onto your desk is unrealistic. Even if both devices were made by Microsoft and were running the same type of OS, data transfer would still have to be more complex than that because of security concerns, authentication and practical issues. For example, today it is much easier to transfer data via cloud-based sync than via a direct device-to-device connection, which may seem counterintuitive to a layman, but makes perfect sense to us programmers.

In the cloud scenario both devices are individually authenticated against the cloud and can access the data directly – there is no transfer necessary. In the direct transfer scenario, they would have to locate each other, authenticate, establish encryption, etc. So the whole swiping-your-graph-onto-someone’s-screen thing is basically just pointless eye candy. Yes, it looks cool, but in real life the cloud approach would actually be much, much more efficient. One engineer edits the graph on his tablet, and the other one sees the changes being applied in real time on his desktop, because they are both using cloud-based collaboration software. There is no futuristic swiping – it just happens. And we already have this technology right now. It’s called Google Docs, Office Live, etc.
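To make the contrast concrete, here is a minimal sketch of that cloud-mediated pattern. The `CloudDoc` class, its methods and the token check are hypothetical stand-ins invented for illustration – not any real service’s API. The point is just that once both devices are authenticated against the same backend, “sharing” an edit reduces to publishing it; nothing ever travels device-to-device:

```python
class CloudDoc:
    """Toy stand-in for a cloud collaboration backend (hypothetical)."""

    def __init__(self):
        self._subscribers = []
        self._state = {}

    def subscribe(self, client_id, token, callback):
        # A real service would verify the auth token; we just check it exists.
        if not token:
            raise PermissionError(f"{client_id} is not authenticated")
        self._subscribers.append((client_id, callback))

    def apply_edit(self, client_id, key, value):
        self._state[key] = value
        # Push the change to every other subscriber in "real time".
        for sid, callback in self._subscribers:
            if sid != client_id:
                callback(key, value)


doc = CloudDoc()
doc.subscribe("tablet", token="tablet-token", callback=lambda k, v: None)
doc.subscribe("desktop", token="desktop-token",
              callback=lambda k, v: print(f"desktop sees update: {k} = {v}"))

# One engineer edits the graph on his tablet...
doc.apply_edit("tablet", "q3_revenue", 42)
# ...and the desktop sees it immediately. No swiping required.
```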

My problem with this video is that it basically shows us our current technology, throws in some absolutely pointless visual effects and pretends that it is the future. No, it is not. What is the utility of having 3D holographic overlays hovering over your phone/tablet display? They look cool, but how do they improve your workflow? How do they make your life easier? They serve no purpose other than aesthetics. And while aesthetics are important, they must go hand in hand with functionality. This is something Apple has nailed in recent years – their UIs are pretty, but also very solid. Microsoft, in its fervor to catch up, has completely missed this point and so far has been building pointlessly flashy overlays on top of its traditionally clunky interfaces. This video is proof of that. If this is the future, then we have actually gone backwards in terms of productivity.

There is absolutely nothing inspiring in this video. It is dull, boring and needlessly flashy. It does nothing more than showcase technology we already have right now. We can already do everything shown in that video with our current generation of mobile devices – it is just not as effortless. Why? Because we are using multi-function, real-world operating systems and not clinically clean mock-ups designed for a TV spot. It is obvious that in a real-world scenario, getting things done on your phone will take a few more clicks. It is also obvious that finding a recipe for a cake is not going to involve flipping through a virtual, touch-driven picture gallery, but browsing the actual web, with an actual web browser. Not something you can easily do with one finger, but arguably much, much more flexible and functional.

Have you noticed that the most common interactions in this video are finger swipes, taps and pinch gestures? You know, the same gestures employed by all the devices we have right now? What very common interaction is conspicuously downplayed in the video? Typing. Why? Because typing on touch devices really sucks. Yes, modern tablets and smartphones are really good at predictive suggestions and use adaptive floating hit-boxes, but touch typing on a smooth glass surface is still inefficient and inconvenient. Typing on touch screens is almost always a hunt-and-peck affair, simply because we can’t feel the keys under our fingers. And this is a huge problem with current touch technology – it provides no haptic feedback. Making fully functional touch interfaces that you can actually feel would be a huge breakthrough. In fact, we already have the technology to do it. It is just very immature, and has not had much mainstream media exposure. Perhaps because it is difficult to wow audiences with haptics in a video showcase. Tactile feedback simply does not look exciting at all when you watch it on a screen. Swiping graphs between monitors, on the other hand, kinda does.

Bret Victor wrote a really insightful deconstruction of this video, explaining why the “innovation” shown in it is essentially a dead end. He calls this mode of interaction “Pictures Under Glass”, and he is neither excited nor inspired by it. He finds it rather ironic that the interfaces of the future are going to be less expressive than what we have right now.

Let’s face it – despite being around for over 40 years now (the first mouse prototypes were designed in the ’60s), a standard mouse is still a much more comfortable pointing device than a modern multi-touch screen. Why? Tactile feedback is a big part of it. Pinpoint accuracy is another. Our fingers are simply not designed for detail work – that’s why we have developed precision tools: tweezers, pliers, blades, needles, etc. Smooth, glassy interfaces attempt to turn the clock back and convince us we can solve our precision problems with simple poking, prodding and sliding.

In essence, this video is not showing us the future, but the present with a prettified Hollywood-style veneer. If we want to actually improve our UIs we need to go beyond pictures under glass. We need to look into haptics, tactile feedback and virtualized precision tools. We need to research interfaces that revolve around eye tracking and brainwave pickups that interface with HUD displays. We need efficient ways of delivering tactile feedback, preferably without using auxiliary hardware – so neural stimulation, subdermal implants, etc. That’s the future.

But hey, what else did you expect from a Microsoft-funded initiative? These guys wouldn’t know the future if it jumped out of the bushes and slapped them in the face. You need proof? Just look how well Microsoft is doing in the search and smartphone markets. Those are the technologies that completely blindsided them and caught them with their pants down. They are not the kind of company that will lead us in the mobile revolution. Not by a long shot.

28 Responses to Future of UI Design

  1. MrJones Mozilla Firefox Windows says:

    You nailed it!
    There is absolutely nothing new in this video. The only real things different from today are the 2mm thick handheld devices and the ultrahigh screen brightness.

    I’ve got one good example that shows how the future is supposed to work:
    it’s the “shake your phone to clear written text” feature in iOS. This one feature, though it is pretty boring and almost useless, is a step in the right direction.
    What do you think?

  2. Mart SINGAPORE Mozilla Firefox Windows Terminalist says:

    Agreed. I wasn’t impressed at all by that video. A layman probably would be.

    Seems more like what Hollywood envisions the future might be, not what I’d expect from one of the largest software companies in the world. Some of the “future-tech” shown above seems too similar to what I’ve seen in Iron Man.

  3. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    @ MrJones:

    Yes and no. The shake/accelerometer-based interactions can be annoying. I’ll use the shake motion sometimes, but when Google Maps tells me to move my phone in a figure-8 motion to re-calibrate the compass I’m like “fuck that”. So some of these will work, and some are just too silly and embarrassing to do in public.

    There are a bunch of interactions that work very, very well, but we sort of got rid of them when we entered this glorious new age of multi-touch displays, which are nice but limited. I mean, notice how all touch displays are driven by 3 basic gestures:

    – swipe
    – pinch
    – tap

    Our hands are so expressive we can use them to talk to each other (via sign language), but when using modern devices we are relegated to two fingers and 3 basic motions, and we are forced to design interfaces around them. Look how few touch-based apps have power-user shortcuts and features.

    Now compare that to keyboard-driven apps such as vi or Emacs, which can pack an incredible amount of powerful features and bind them to key combos that power users can execute in seconds. It is simply impossible to have this level of complexity in a touch app – unless of course you emulate a keyboard.

    To me, typing is an interaction that will not, and should not, go away. It works incredibly well, but we need to design touch keyboards with tactile feedback. Desktops and laptops won’t go away until we have those.

    Similarly, a mouse has tactile feedback – which is why it allows us to be so precise. Its weight and friction are parts of the equation, which is why high-end mice come with sets of weights you can use to get just the right amount of resistance. Swiping a finger on smooth glass is just an imitation here. We need UI that we can feel under our fingers.

    The future should not be about fancier looking swipe transitions, but about figuring out ways to allow us to touch-dial our smart phones and launch apps without looking at them.

    Apple (as usual) is ahead of the game with Siri. They are hedging their bets with voice commands. You can’t touch-dial your iPhone, but now you can tell it to dial a number with a high degree of accuracy. And since it is smart enough to learn your vocal quirks, it may be a decent enough replacement. Still, I really think we need haptics to become a smart-phone standard.

  4. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    @ Mart:

    Yes, you are right. lol The semi-transparent phones and the 3D pop-out graphics were taken directly from Iron Man. :)

  5. Chrissy UNITED STATES Google Chrome Mac OS says:

    It sure did look fancy, but I agree that it looked like a Hollywood-type view of the future.

    I also could not make sense of how switching applications was done. On each device it looks like a jumbled mess with no sense of separation.

  6. Morghan Safari Linux says:

    HMDs are something I do see taking off, but the overlays in this video were on windows (the cab) and not on her translation glasses, where they should have been.

    Of course that would lead to problems; can you imagine the first auto accident caused by a pop-up on an HMD?

  7. Dr. Azrael Tod says:

    I would rather think about future devices than about future interfaces. Interfaces follow from the devices we use (the window manager is no longer that relevant if the screen size shrinks and you only use one window over the whole screen, and when you use a touch interface the buttons become larger so you can hit them without a mouse).

    Examples: My 700g ARM netbook with >8h running time feels completely different than my Atom netbook did, which weighed about 1.4kg and ran 30min to 2h without charging.
    What is the most important part that makes it feel better? The weight! I can hold it in one hand for minutes without feeling the need to put it down. That gives me a freedom you just don’t think about when you only see them placed beside each other.
    8h running time is another such border. If your devices start being usable a whole day without needing to be recharged, you stop keeping them plugged in all the time. With a device lasting 2h you _could_ do pretty much the same things, but you won’t. Simply because you always know “if I start to watch a full movie, I’m gonna need energy afterwards.”

    Or mobile data access: here in Germany we only get sold pseudo-“flatrates” that deliver 2-5GB per month and after that are slow as hell. In reality I hit this limit really seldom, but it still keeps me wondering whether I really need to start this 200MB download of podcasts or updates now, or whether it would be better to wait till I have WLAN.
    I don’t want to think about such things! I want net everywhere, as much as I need, whenever I need it.

    My headphones: I bought A2DP Bluetooth headphones last year and they completely changed how I use my phone for playing audio. I don’t need to fumble around with my phone; they can remote-control it. I don’t need to carry my phone with me while walking around in my room. I don’t need to worry about cables that could break.

    None of these technologies is something I couldn’t have dreamed of 10 years ago. Yet they completely changed how I use my devices.

  8. astine UNITED STATES Mozilla Firefox Windows says:

    Our fingers are simply not designed for detail work

    Luke, I’m thinking that you must have really fat fingers, because that sounds crazy to me. Our fingers are absolutely designed for detailed work. Our hands have evolved higher dexterity than nearly every other animal in the animal kingdom, and our fingernails allow us to very precisely manipulate very small things. This is one of the things that separates man from the animals.

    Of course, our needs have since eclipsed what our physical bodies can do, and so we’ve invented all these tools for detailed work, but interacting with a human-designed user interface shouldn’t be a job for tweezers. I’d suggest the opposite really: if you need to use a hyper-precise tool, the interface is misdesigned.

    Besides, I’ve always hated mice for the very reason that they create this distance between your hand and where you’re looking; there’s a disconnect for me between the mouse and the pointer which a touchscreen resolves. If I need real precision, I can always use a stylus.

    (I suppose none of this holds if you’re playing an FPS but I don’t really care.)

  9. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    @ Chrissy:

    Yeah, it looked like a tech demo and nothing else. Real-world apps and the internet do not look that clinically clean.

    @ Dr. Azrael Tod:

    Hmmm… This is a very interesting point. But I think this goes both ways. Hardware drives software, and software drives hardware. This video, however, sort of ignored the hardware side and focused on software, but modeled it on current hardware paradigms.

    In essence I agree. We need devices with better tactile feedback, and new interaction ideas built into the devices and the UI advancements will follow.

    @ astine:

    Haha, me and my fat fingers. Perhaps I should have specified that our fingers are not great for precision work on an iPhone-sized screen. Have you ever had the problem where you were dragging an icon on your phone and had to move the device around to see behind your finger, because it was obscuring the place where you wanted to drop it? That’s what I meant. You never have these issues with a mouse.

    I don’t know – I grew up with mice. I have been using them since I was a kid. To me a mouse is an extension of my arm. The mouse and keyboard are the purest, most direct forms of interaction with my machines. I never liked the stylus because of the limited range of interactions – often you can’t right-click with a stylus (unless it has buttons), and dragging is usually more cumbersome. With a mouse I can snag and flick things between folders without even thinking. Or highlight and Ctrl+C, Ctrl+V.

    Hell, even the multi-touch pad on my MacBook seems limiting. I’m nowhere near as fast or as efficient with it. Whenever I work without a mouse I feel like I’m doing things with one hand behind my back. I’m not working at full speed – unless I’m in a CLI environment, in which case the mouse is irrelevant.

    I guess we all have different preferences. I couldn’t live without my mouse.

  10. astine UNITED STATES Mozilla Firefox Windows says:

    Have you ever had the problem where you were dragging an icon on your phone and had to move the device around to see behind your finger because it was obscuring the place where you wanted to drop it?

    Nope, but then again, I have really thin fingers. Also, I use Android; is the iPhone different in this regard?

    Everything you can do with a mouse and keyboard you can do with touch and keyboard, and I find I’m more precise with a stylus when making pixel-perfect maneuvers. But keep in mind, I’m talking about a touch screen. A touch pad presents the same problem to me as a mouse, namely that my hands are completely out of my line of sight when working and I can’t really see what I’m doing.

    Of course, to each his own here.

  11. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    @ astine:

    I guess you might have a point here. One thing I can’t do with the mouse is draw. I can sketch a semi-accurate circular shape with a pen/stylus in one or two quick movements. Doing the same with a mouse just does not work. The best I can do is crank up the accuracy, decrease the speed and go very, very slowly.

    So I guess I see what you mean about the disconnect.

    Also, apparently I have fat fingers. lol Thanks for making me self-conscious.

    Actually, I think the problem I described occurs when you use your right hand to drag things on the upper left side of the phone screen, causing your hand to be in your line of sight as you work.

  12. Dr. Azrael Tod says:

    @ astine:
    I don’t think the problem with touch is that you have less exact positioning (you do, because your finger covers more area than the corner of a mouse pointer, but that’s a technical and solvable problem), but that you have to move your arm up and really touch your screen.
    This results in multiple things:
    a) fat-trails on your screen (easily solvable, although with reflective displays we are currently moving in the opposite direction)
    b) you have to move your hand even further between the keyboard and the pointing device (come on, you don’t want to work without a keyboard)
    c) you hold your hands up to the screen all day… something you won’t like anymore after the first 1-2h

    Touch is OK if you already have the screen in your hand; in every other setting it’s just hilarious.

  13. Morghan UNITED STATES Safari Linux says:

    It’s a combination of writing on a smooth glass surface and small screen size. I have a hardware QWERTY on my phone, an on-screen keyboard on my tablet that is a bit harder to use, and then the touch keyboard on my phone, which is incredibly difficult for me to use, resulting in lots of side-slide action.

    I’m very much a fan of hardware QWERTYs on phones and will likely never buy one that doesn’t offer one as a backup for typing even if the system functions are all touch based.

  14. s1n UNITED STATES Google Chrome Linux says:

    If you paid attention, there were a few really clever and somewhat new ideas, though none earth-shatteringly awesome:

    1) Eyeglasses that overlaid a HUD and automatically translated the overheard audio. While HUDs aren’t new, the auto-translation in eyewear strikes me as a new idea.
    2) The hotel employee’s card that was opaque and 2-sided like a regular business card, yet behaved like a cell phone. I guess if all displays are paper thin, then the card would be like 2 cards glued together.
    3) The opaque-to-translucent refrigerator door that can be toggled by touching it. This seems like an idea that has many, many possibilities.

    With the exception of the paper-thin displays (everywhere, srsly?), the rest of the video showcases current technology. For example, the guy at the train stop with the paper-thin phone: he does business online, takes a photo and has it automatically donate to the cause on the poster, and then looks at the 3D donation charts. That’s all basically a paper-thin smartphone of today combined with the stereoscopic tech from a Nintendo 3DS.

    What I didn’t see was what problem modern (first-world) life has that Microsoft was proposing to solve with tech. Every step forward in technology is supposed to somehow improve life. Though most of Apple’s products aim at putting a good form to an existing function (which they are remarkably good at doing), they have still managed to improve computer usage for some people.

    All of that being said, the video needed one thing: a disclaimer:

    “No earth was shattered in the making of this video.”

  15. StDoodle UNITED STATES Google Chrome Windows says:

    My first, and current, thoughts have all been on the UI as well. To give some backstory, I have some pretty bad RSI in both wrists and elbows; I can hardly use a mouse anymore (I use a pen stylus, a Wacom Intuos 3 to be specific, for all of my CAD work; it has a bit of a learning curve, but I find it better than a mouse more often than not). I’ve already noticed many times that touch pads and touch screens seem to aggravate my RSI even more quickly than mice, so I kind of dread a “touch-screens everywhere” future.

    The problem isn’t that our fingers are bad at precise work; the problem is that dragging your finger across the screen, right on top of what you’re trying to look at, doesn’t work. If we had “no touch” screens where you could point your fingers and move them in mid-air, with a larger range of motion mapped onto the often-smaller screens, I think things would be significantly better.

    Oddly, the Kinect seems to be at the forefront when it comes to such possibilities, yet Microsoft completely forgot to take that technology into account in their video. That’s MS for you… every once in a while they’ll come up with a huge breakthrough that changes the game entirely, but then they’ll abandon it and focus on something else completely. *sigh*

  16. Luke Maciak UNITED STATES Mozilla Firefox Windows Terminalist says:

    @ Dr. Azrael Tod:

    I think the technical term for this is “gorilla arms” or “minority report arm fatigue” :)

    @ Morghan:

    I like physical keyboards, but I have actually never owned a phone with a physical QWERTY. I don’t mind touch keyboards as such – I guess I just got used to the fact that typing on the phone is less efficient and more of a hunt-and-peck type of thing. :P

    @ s1n:

    Ok, the eyeglasses translation thing is actually not a bad idea, though I would probably prefer them to be used for more than just that. I mean, think about it – they can already translate speech in real time, which means they must pack some powerful hardware and software, considering that this is basically the wet dream of most AI researchers today.

    Let’s face it – the deterministic translation algorithms we have now all suck. The best approach we have for automated translation is what Google is doing – which is basically aggregation and correlation of massive data sets. But that is only partially helpful in real-time translation scenarios. Speech recognition isn’t any better. Siri can do pretty impressive things, if you don’t have a peculiar accent or speech impediment. Still, it is quite primitive compared to what we thought we would have by now. Combining these two and having them work seamlessly and accurately would be a monumental achievement – more exciting than anything else shown in that video.

    But I guess my point is, why relegate the glasses to just translation? If they can run a perfect speech parser and translation software, they shouldn’t have too much trouble running eye-tracking software either. The glasses could become the universal HUD, making handheld devices obsolete.

    I don’t really see the application for the business card. If it is just a business card, then it is total overkill. It seems wasteful to use a complex electronic multi-touch display in lieu of a simple, biodegradable and inexpensive paper equivalent. If it is a multi-purpose device, then it is not innovative at all. We have been talking about paper-thin electronic displays for years now.

    As for the refrigerator thing, I could swear I have seen this done on the Jetsons or some similar “house of tomorrow” cartoon. :) Also, there is no reason why we couldn’t build that particular gadget today. I really don’t think it would be hard to mount an inexpensive LCD on a fridge, and have a cheap inward-facing camera somewhere in the frame. The question is, who would need it? I mean, why not just open the fridge?

    Now, a fridge that could scan its own contents, and then talk to my phone to tell me whether I need to buy milk or not, is a different matter altogether. That would actually be cool. But the video really does not show that.

    StDoodle wrote:

    The problem isn’t that our fingers are bad at precise work; the problem is that dragging your finger across the screen, right on top of what you’re trying to look at, doesn’t work.

    This – exactly this is what I meant when I was talking about fingers getting in the way of detailed work. Thank you for articulating it better than I did.

    As for the Kinect, I agree that it is a very cool technology, but one completely ill-suited for gaming. On the console market the Kinect is an expensive gimmick that doesn’t seem to be making as much money as Microsoft thought it would. They were expecting it to revolutionize gaming and conquer the casuals, but as is their style, they got onto the market late. By the time it came out, people were already starting to get tired of Wii gimmicks, and the novelty of motion controls was wearing thin.

    Still, people who hack the Kinect are doing a lot of impressive stuff – motion capture on the cheap, visual sensor arrays for robots, physical premises security demos, etc. It seems to have tons of applications outside of gaming (where it has only average appeal), but Microsoft does not seem to be interested in developing it in that direction at all.

    Oh, and I totally believe you that touch screens are worse for your hands than mice. A little while ago I was in bed, browsing the web, texting and playing games on my phone, and after a few hours my hands were starting to cramp up and I couldn’t find a comfortable position.

  17. icebrain PORTUGAL Mozilla Firefox Linux Terminalist says:

    s1n wrote:

    1) Eyeglasses that overlaid a HUD and automatically translated the overheard audio. While HUDs aren’t new, the auto-translation in eyewear strikes me as a new idea.

    Have you seen the Word Lens app for iOS? This is already real. Combining it with a hypothetical HUD is obvious if you’ve seen it.

    I agree with StDoodle: the stupidest thing about the video is how it forgets Microsoft’s own innovative technology. They have actual working prototypes on the list of projects at Microsoft Research that are more futuristic than what they’ve shown in the video.

  18. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    @ icebrain:

    Holy shit dude, the Word Lens app! Downloading it right now to see how it works. Probably not that great, but the ability to understand Spanish via my phone would be greatly appreciated. :P I will report on the results if I can find some instruction manual to test it on.

  19. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    Well, I tried Word Lens and it’s a bit of a ripoff:

    $10 for Spanish to English
    $10 for English to Spanish

    Kinda pricey for a novelty app. It did manage to reverse/scramble some words for me. Gotta wonder how good the translation really is. I just don’t feel like spending $10 on something I will probably only use once or twice in my life. :P

  20. John Wendel UNITED STATES Mozilla Firefox Linux says:

    I don’t want to interact with a device by gestures of any style; I just want to speak to it and have it do what I mean.

  21. Luke Maciak UNITED STATES Mozilla Firefox Windows Terminalist says:

    @ John Wendel:

    Unfortunately, this is a really, really difficult thing to do. I think Siri is a very good example of what the state of the art looks like right now in that department. It is good, but we still have a lot of trouble parsing common speech. You still have to phrase your sentences a certain way, and avoid ambiguity when talking to these machines. And if you have a speech impediment, a stutter or a thick foreign accent, you are out of luck.

    I am certain that we will get there one day, but speech recognition is one of the slowest growing fields in our industry – it evolves, but very slowly and via almost imperceptible incremental improvements, while most other technology branches have very rapid cycles and turn themselves upside down quite regularly.

  22. StDoodle UNITED STATES Google Chrome Windows says:

    I think speech recognition will eventually get to the point where it can take over for most common tasks, much like you can accomplish most common tasks in MS software using a mouse and the ribbon. But there will always be a place for an advanced interface for “power users.” Today that mostly means direct text-command input, but I could envision a future where all sorts of complicated gestures can take the place of many lines’ worth of typed commands.

    Combine the two well, with specially macro’d text and gesture input, and we’ll be friggin’ computer wizards! (Not just a pun – well yeah, but I mean we’ll look like wizards.)

  23. MrPete GERMANY Mozilla Firefox Windows says:

    Oh, it’s colorful! It’s 3D (in a very few places)! It must be the future.
    But: if this is the future, I’ll stay in the present, where I get most of these features now.

    The comment from Mart made me laugh:
    […]the “future-tech” shown above seems too similar to what I’ve seen in Iron Man.

    True, so true!
    Overlays over the physical world are implemented in most smart phones (Layar on Android, for example), so… this is pointless. Well, why do I wonder. It’s Microsoft :/

  24. Agnosis CANADA Mozilla Firefox Linux says:

    StDoodle wrote:

    but I could envision a future where all sorts of complicated gestures can take the place of many lines’ worth of typed commands.

    Well, thanks to Google, that future is already here!!

  25. Luke Maciak UNITED STATES Mozilla Firefox Windows Terminalist says:

    @ StDoodle:

    I highly recommend reading Rainbows End, which depicts exactly the type of interface you describe – where a power user can bind complex functionality to casual muscle twitches and micro-movements.

    @ Agnosis:

    I fixed your link. Also, I LOL’d.

  26. Arlo James Barnes says:

    @ Luke Maciak:
    Well, I think both Iron Man and every futurist technology video made since computer graphics got acceptable take their cultural cues from animation and computer graphics itself. The fact is, things like these get made because they are really fun to make. You have a POV-Ray plugin? Make a bright glowing screen! I have a friend who does such things (pictures of HUDs on schoolmates, et cetera) and he does not do it to provide a faithful representation of what technology could or should or might be, but of what looks cool. It comes straight out of Hollywood, which is fine, but when people said “That is cool! I want that!” and tech producers obliged, that is how stuff like this got rolling.
    By the way, I am sure you fend this question off all the time (maybe you should put it prominently on your about page, which I did read once a while ago when I first found TI), but how would I go about getting OS/browser icons on forums/blogs I control?

  27. Luke Maciak UNITED STATES Google Chrome Linux Terminalist says:

    @ Arlo James Barnes:

    Well, I guess it depends on the message board type you use. If it is an extensible one, there might already be a plugin like that for it. I use this WordPress plugin to get the browser icons/flags on comments. It’s actually based on this browser detection plugin and the EasyIP2Country plugin. All of these are slightly outdated (as WordPress plugins sometimes get) – the browser detection especially gets confused when people post from iPads and Kindle Fires and such. I keep meaning to fix it myself, but… Eh… Lazy.

    I mean, it is a rather simple thing – all you need to do is some pattern matching against the user-agent string and use the right icons (something like the sketch below). I think the EasyIP2Country thing is open source, so you will likely be able to rip off the IP matching patterns and adapt them to whatever you need for the message board.
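    To illustrate the idea, here is a minimal sketch of that user-agent matching, written in Python rather than the plugin’s own PHP. The patterns and icon filenames are illustrative guesses, not the actual plugin’s rules; real-world detection needs many more (and more careful) patterns:

    ```python
    import re

    # First matching pattern wins, so order matters: Chrome's UA string also
    # contains "Safari", so Chrome must be checked before Safari.
    BROWSER_PATTERNS = [
        (re.compile(r"Firefox/[\d.]+"), "firefox.png"),
        (re.compile(r"Chrome/[\d.]+"), "chrome.png"),
        (re.compile(r"Version/[\d.]+.*Safari"), "safari.png"),
    ]

    OS_PATTERNS = [
        (re.compile(r"Windows NT"), "windows.png"),
        (re.compile(r"Mac OS X"), "macos.png"),
        (re.compile(r"Linux"), "linux.png"),
    ]

    def pick_icon(user_agent, patterns, default="unknown.png"):
        """Return the icon for the first pattern matching the UA string."""
        for pattern, icon in patterns:
            if pattern.search(user_agent):
                return icon
        return default

    ua = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.7 "
          "(KHTML, like Gecko) Chrome/16.0.912.75 Safari/535.7")
    print(pick_icon(ua, BROWSER_PATTERNS))  # chrome.png
    print(pick_icon(ua, OS_PATTERNS))       # linux.png
    ```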

  28. Zac GERMANY Google Chrome Ubuntu Linux says:

    @ StDoodle:

    Considering that speech comprehension between humans is only about 99% accurate, speech as the sole interface to computing has a long way to go.

