The mouse is probably both the best and the worst invention in the history of computing. Without early pointing devices such as mice and track-balls we likely would not have graphical user interfaces in the form they exist today. These devices heavily influenced early GUI designs and were in part influenced by them. But while it revolutionized the way we interact with computers, the mouse also facilitated the development of some really bad habits that professional writers and programmers have to quash before they can become truly productive.
The mouse was born at the Stanford Research Institute in 1963 and was the brainchild of one Douglas Engelbart. Before that, pointing devices in the form of track-balls already existed out in the wild (first introduced internally in Canadian Navy research labs). Engelbart’s device was the first to invert that design, creating a device that tracks its own movement in 2D space. The prototype looked like this:
You can probably see where this prototype got its nickname – it really did resemble a mouse toy with its back-facing cord and bulky wooden shell. Considering how indispensable mice are these days, you would think that Engelbart must have been swimming in royalties, but unfortunately that is not the case. The patent he received for his invention ran out in 1987 – a few years before it became the computing mainstay it is today.
There were two events that put the mouse in the position of becoming as ubiquitous as the keyboard. In the early 80’s IBM decided to open up its PC platform and create the world’s first, and to this day most popular, open architecture model. This opened up the market for dirt-cheap microcomputers destined for home use. The second event was the release of Microsoft’s Windows 1.0 in 1985. Bill Gates, ever the shrewd business critter, wisely jumped on the graphical bandwagon and delivered a much needed mouse-driven GUI for IBM’s platform, which was at that point exploding and growing at an exponential rate.
That said, the people who first utilized the mouse to its full potential were the folks at Xerox PARC labs. In 1979 Steve Jobs visited their facility as part of an exclusive tour. What he saw there literally (and by literally I of course mean figuratively) blew his mind. Xerox had a few hundred of their Alto machines on the premises, all networked, all running software built around object-oriented programming paradigms, and all running a mouse-driven GUI. He didn’t really care for the first two things (I mean come on, it’s not like home users would ever need networking, and OOP – yeah, like that would ever catch on) but the user interface did this to him:
This was also his reaction when Xerox told him they were not going to sell these Alto machines. They didn’t think they were marketable, so they would just keep them for internal use. This of course was not the first time that Xerox management, circa the 60’s through 80’s, spent money developing a technology that was two or three decades ahead of its time, and then just sat on it until it was too late. But I digress. The reason why Apple has, for the most part, always paid so much attention to UI design is that Steve Jobs had a shattering nerdgasm at the Xerox labs back in ’79 and has been driving his company in that direction ever since. But as much as Steve loved the mouse, I think we must thank his arch-nemesis Billy Gates for pushing it into ubiquity. Steve’s computers have never had as huge a market share as the ones “infected” with Bill’s much loved, and at the same time much maligned, OS. Granted, the two men were in completely different businesses. Steve was selling computers, whereas Bill was trying to exploit the world’s first open architecture to build a closed-source software monopoly. And he would have gotten away with it too if it wasn’t for that pesky kid Torvalds and his internet pals. But that’s a story for a different day.
When you read interviews with Tim Berners-Lee – the father of the World Wide Web – you will notice that his initial prototypes were not built with the mouse in mind. His early designs had the links enumerated to facilitate keyboard-driven navigation. But by the time he introduced his ideas to the world and the world was ready to implement them, pointing devices were already becoming ubiquitous and seemed like a perfect complement to his designs. Hypertext was forever married to the mouse, leaving the “numbered links” idea by the wayside. After all, what could be easier than “clicking” on a link?
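To make the “numbered links” idea concrete, here is a hypothetical sketch of how such navigation might work – this is an illustration in modern Python, not Berners-Lee’s actual code, and the function names are my own invention:

```python
# Hypothetical sketch of "numbered link" navigation: every link on a page
# gets a number, and the user follows a link by typing that number
# instead of pointing and clicking.

def enumerate_links(links):
    """Pair each link on the page with a number the user can type."""
    return {i + 1: url for i, url in enumerate(links)}

def follow(links, choice):
    """Return the URL for the typed number, or None if out of range."""
    return enumerate_links(links).get(choice)

page_links = ["info.cern.ch", "www.w3.org"]
print(follow(page_links, 2))  # → www.w3.org
```

The whole interaction stays on the keyboard: read the number next to the link, type it, and you are there – no pointer repositioning required.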
If it wasn’t for the mouse, browsing the web would be an entirely different experience. It would likely be more akin to this than to the modern browsing experience. A web browser developed in a world where the mouse did not exist would likely be something like Vimperator or Pentadactyl rather than, say, Chrome or Safari. While definitely cool in theory, I am not entirely sold on keyboard-driven browsing in a Vim-like mode. And I’m a seasoned Vim user. There is just something about the web and the mouse… They just go together.
Without a mouse we likely would not have the WYSIWYG paradigm either. You know who pioneered that idea? Most people attribute it to Charles Simonyi and Butler Lampson, who developed the Bravo editor for the (wait for it…) Xerox Alto, which had a predominantly mouse-driven interface. Yeah, Xerox mostly sat on that one too. These guys had enough innovations locked up in their vault to change the world six times over, but instead opted to become the tech industry’s poster child for wasted opportunities.
Pretty much everything you do these days requires mouse input. Want to manage some files? Mouse it. Want to browse the web? Mouse it. Want to edit a document? Mouse it… But technically you shouldn’t. With the exception of web browsing, most activities you do with the mouse can be accomplished faster with the keyboard. Or rather could be, if you were using the right tools.
It always makes me cringe when I see people frequently pause their typing to grab the mouse and fiddle with it to highlight some text, delete a line or do something equally silly. This sort of thing is destructive because it always breaks your flow. When you lift your right hand off the home row to mouse over something, you are making a context switch. While you may not really notice it, your brain certainly does. An action that ought to be almost instantaneous becomes a whole logistical operation: first your hand needs to locate the mouse, then you need to wave it around a bit to see where the pointer is, then you bring the pointer to the actionable area, and so on. The mouse is probably the biggest time waster in the computing industry, but most people do not realize this because it is such an integral part of our computing experience.
You may have heard a smug Vim or Emacs user brag about how their editor of choice has made them a better programmer. Well, part of the equation is that these editors force you to eschew the mouse. They were developed long before the mouse became popular, and while they support it, they do so halfheartedly. The mouse is not at the core of their text editing paradigm (as it is in Word or, say, TextEdit) – the keyboard is. And that is to a large degree the hidden source of their power. Vim is great not because it has kick-ass keyboard shortcuts – it is great because it was built from the ground up to be operated solely via keyboard. It is great because mastering it forces a user to realize just how much the constant pausing for mouse input slows them down and breaks up their flow and their thought processes.
Recently a number of so-called “tech journalists” started proclaiming the end of the mouse era and heralding “touch” as the new input paradigm shift. Pay these guys no heed, as they are complete idiots who would not be able to recognize a technology trend even if it peed in their cereal every morning for a decade. Touch is not a new paradigm. Touch is a refinement of the same 2D surface pointing paradigm we have been using all along. The first “touch” devices on the market were driven by a stylus. Do you know what a stylus is? It is a mouse-like device which is held in your hand like a pen. It was adopted as a pointing device because people were frustrated by how hard it was to actually “write” with existing pointing devices.
Ok, let’s back up a bit. To be fair, the idea of a stylus actually predates Engelbart’s invention. That credit goes to Tom Dimond, who in 1957 presented a paper titled “Devices for reading handwritten characters” at the Eastern Joint Computer Conference. So technically the stylus is older than the mouse. On the other hand, Dimond never really envisioned his device as a general input tool. He was simply annoyed enough with the lack of options for importing and parsing handwriting into computer memory to actually do something about it. Engelbart, on the other hand, was actively trying to invent a better general-purpose input device. You don’t really see the stylus being used for general I/O (outside specialized handwriting capture tech) until the age of the ubiquitous mouse.
If you look at the UI of the early stylus-driven systems (Windows CE, Palm OS) you will notice that it was pretty much designed for a mouse. There was nothing touch-specific about the way you interacted with those systems – there was still double-clicking (or rather double-tapping), highlighting, and so on. The multi-touch interfaces of today are simply a refinement on top of that. I submit that they are merely a usability-driven iteration of the mouse interface.
You can conceptually track how these interfaces evolved. First you remove the stylus, because people lose that shit all the fucking time. So to remove the “lost stylus” annoyance you make your screen react to fingers. Then you realize fingers are big and ugly compared to the fine-tipped stylus, so you can no longer expect users to approximate mouse gestures. So you adjust things – remove double-clicking, make the hit-boxes around buttons bigger, transition from “scrolling” to “swiping”, and so on. The end result is what you currently see on iOS or Android.
If you need more proof, ask yourself why, given this ground-breaking multi-touch technology, these interfaces don’t actually do anything requiring more than a single finger, with the sole exception of the “pinch to zoom” gesture. Because they are firmly grounded in the mouse-driven interface paradigm – that’s why. If the touch-driven UI of iOS is so ground-breaking, how come I can do a lot of the same gestures on the track-pad of my MacBook? Because the former is a refinement of the latter. A track-pad is a track-ball without a ball, which in turn is an inverted mouse. Touch is not a paradigm shift the way the mouse was. Touch screens are supplanting the mouse in places where it makes sense – but the UI design is mostly driven by the same 2D pointing concepts.
What is unfortunate is that the ubiquity of mouse-driven interfaces is actually holding us back. Very few people out there (even counting those working on the bleeding edge of innovation) are capable of imagining a world without this 2D pointing paradigm. This is why most of our current visions of the future involve weak-sauce “pictures under glass” style GUIs. I think it is time for a new paradigm shift. I think we need a new I/O scheme that will allow us to leave the mouse behind. Because frankly, the future of “glass pane” interfaces does not excite me at all. I want us to move forward towards a more synergistic approach. I want man and machine on a convergence course – I want interfaces such as those depicted in Rainbows End or Halting State. Once we get those, perhaps people will be able to type a document without pausing for half a minute every 12 seconds.