Let me start this off by sharing a cute “futuristic” video about the possible future of mobile technology. Please keep in mind that this was created by folks at Microsoft, so you won’t actually see any innovative ideas or ground-shattering paradigm shifts in there. Microsoft basically created a vision of the future which is safe – one which it understands.
Let me ask you a question: did anything in that video impress you? Did anything in there make you go “wow, this is cool”? I personally was mostly shaking my head. The only neat thing in the whole video was the way separate devices paired up and exchanged data without any hassle. But it worked that way because it was a proof-of-concept demo. This sort of thing is a bit of a pipe dream – primarily because different devices from different vendors simply do not interface like that. I don’t see that changing anytime soon, so this grand unified vision of swiping data from your phone onto your desk is a bit unrealistic. Even if both devices were made by Microsoft and were running the same type of OS, data transfer would still have to be more complex than that because of security concerns, authentication and practical issues. For example, today it is much easier to transfer data via cloud-based sync than via a direct device-to-device connection – which may seem counterintuitive to a layman, but makes perfect sense to us programmers.
In the cloud scenario both devices are individually authenticated against the cloud and can access the data directly – there is no transfer necessary. In the direct transfer scenario, they would have to locate each other, authenticate, establish encryption, etc. So the whole business of swiping your graph onto someone’s screen is basically just pointless eye candy. Yes, it looks cool, but in real life the cloud-based approach would actually be much, much more efficient. One engineer edits the graph on his tablet, and the other one sees the changes being applied in real time on his desktop, because they are both using cloud-based collaboration software. There is no futuristic swiping – it just happens. And we already have this technology right now. It’s called Google Docs, Office Live, etc.
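The difference is easy to sketch in code. Here is a toy model of the cloud scenario – every name in it is made up for illustration, and real services obviously add authentication, conflict resolution and networking on top. The point is structural: both clients talk only to the shared store and never to each other, so there is no peer discovery, pairing or device-to-device encryption handshake anywhere in the flow.

```python
# Toy model of cloud-based collaboration. All class and variable names
# here are hypothetical; this is a sketch of the architecture, not any
# real service's API.

class CloudDocument:
    """A single shared document living 'in the cloud'."""

    def __init__(self):
        self.content = ""
        self._subscribers = []            # change callbacks of connected clients

    def connect(self, on_change):
        # Stand-in for a client authenticating against the cloud service once.
        self._subscribers.append(on_change)

    def edit(self, new_content):
        # One client writes; the change fans out to every connected client.
        self.content = new_content
        for notify in self._subscribers:
            notify(new_content)


doc = CloudDocument()
seen_on_desktop = []
doc.connect(seen_on_desktop.append)       # engineer B's desktop session
doc.connect(lambda change: None)          # engineer A's tablet session

doc.edit("Q3 revenue graph, v2")          # A edits on the tablet...
print(seen_on_desktop)                    # ...and B's desktop already has it
```

No swiping required: the edit lands on the other screen because both sessions subscribe to the same document, which is exactly how the Google Docs model works at a high level.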
My problem with this video is that it basically shows us our current technology, throws in some absolutely pointless visual effects and pretends that this is the future. No, it is not. What is the utility of having 3D holographic overlays hovering over your phone or tablet display? They look cool, but how do they improve your workflow? How do they make your life easier? They serve no purpose other than aesthetics. And while aesthetics are important, they must go hand in hand with functionality. This is something Apple has nailed in recent years – their UIs are pretty, but also very solid. Microsoft, in its fervor to catch up, has completely missed this point and so far has been building pointlessly flashy overlays on top of its traditionally clunky interfaces. This video is proof of that. If this is the future, then we have actually gone backwards in terms of productivity.
There is absolutely nothing inspiring in this video. It is dull, boring and needlessly flashy. It does nothing more than showcase technology we already have right now. We can already do everything that was shown in that video with our current generation of mobile devices – it is just not as effortless. Why? Because we are using multi-function, real-world operating systems and not clinically clean mock-ups designed for a TV spot. It is obvious that in a real-world scenario, getting things done on your phone will take a few more clicks. It is also obvious that finding a recipe for a cake is not going to involve flipping through a virtual, touch-driven picture gallery, but browsing the actual web, with an actual web browser. Not something you can easily do with one finger, but arguably much, much more flexible and functional.
Have you noticed that the most common interactions in this video are finger swipes, clicks and pinch gestures? You know, the same gestures employed by all the devices we have right now? What very common interaction is conspicuously downplayed in the video? Typing. Why? Because typing on touch devices really sucks. Yes, modern tablets and smartphones are really good at predictive suggestions and use adaptive floating hit-boxes, but touch typing on a smooth glass surface is still inefficient and inconvenient. Typing on touch screens is almost always a hunt-and-peck affair, simply because we can’t feel the keys under our fingers. And this is a huge problem with current touch technology – it provides no haptic feedback. Making fully functional touch interfaces that you can actually feel would be a huge breakthrough. In fact, we already have the technology to do it. It’s just very immature, and has not had much mainstream media exposure. Perhaps that is because it is difficult to wow the audiences with haptics using a video showcase. Tactile feedback simply does not look exciting at all when you watch it on a screen. Swiping graphs between monitors, on the other hand, kinda does.
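To be fair to the software side of this, the predictive-suggestion trick mentioned above is conceptually simple. Here is a minimal sketch of prefix-based word prediction – the dictionary and frequency counts are invented for illustration, and a real soft keyboard would use a much larger corpus plus context from previous words.

```python
# Minimal sketch of prefix-based word prediction, the kind of software
# crutch soft keyboards lean on to compensate for typing on glass.
# The word list and frequency counts below are made up for illustration.

FREQUENCIES = {
    "the": 500, "they": 120, "there": 90,
    "them": 80, "type": 40, "typing": 25,
}

def suggest(prefix, limit=3):
    """Return the most frequent known words starting with `prefix`."""
    matches = [word for word in FREQUENCIES if word.startswith(prefix)]
    return sorted(matches, key=FREQUENCIES.get, reverse=True)[:limit]

print(suggest("th"))    # → ['the', 'they', 'there']
print(suggest("typ"))   # → ['type', 'typing']
```

Clever as this is, it is still damage control: software guessing what you meant because the hardware cannot tell you which key your finger is actually on. Haptic feedback attacks the root problem instead.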
Bret Victor wrote a really insightful deconstruction of this video, explaining why the “innovation” shown in it is essentially a dead end. He calls this mode of interaction “Pictures Under Glass”, and he is neither excited nor inspired by it. He finds it rather ironic that the interfaces of the future are going to be less expressive than what we have right now.
Let’s face it – despite being around for over 40 years now (the first mouse prototypes were designed in the 60’s), a standard mouse is still a much more comfortable pointing device than a modern multi-touch screen. Why? Tactile feedback is a big part of it. Pinpoint accuracy is another. Our fingers are simply not designed for detail work – that’s why we have developed precision tools: tweezers, pliers, blades, needles, etc. Smooth, glassy interfaces attempt to turn the clock back and convince us we can solve our precision problems with simple poking, prodding and sliding.
In essence this video is not showing us the future, but the present, just with a prettified, Hollywood-style veneer. If we want to actually improve our UIs we need to go beyond pictures under glass. We need to look into haptics, tactile feedback and virtualized precision tools. We need to research interfaces that revolve around eye tracking, and brainwave pickups that interface with HUD displays. We need efficient ways of delivering tactile feedback, preferably without auxiliary hardware – so neural stimulation, subdermal implants, etc. That’s the future.
But hey, what else did you expect from Microsoft funded initiative. These guys wouldn’t know future if it jumped out of the bushes and slapped them in the face. You need proof? Just look how well Microsoft is doing in the search and smart phone markets. Those are the technologies that completely blindsided them and caught them with their pants down. They are not the kind of company that will lead us in the mobile revolution. Not by a long shot.