Some users are like little children or small animals: they require constant supervision, hand-holding and care, or else they may end up hurting themselves or damaging company property. Who are these users? Well, typically they are white-collar office workers who have been using computers as their primary work tool for at least a decade, and yet still have absolutely no clue how to operate them or how they work. In other words, the people who fill up the help desk queue with tickets every single day of your life.
They should not be confused with Power Users, or as we call them in the business, “professionals who take their job seriously”. IT people don’t get to interact with these folks frequently because, frankly, they do not need hand-holding and spoon-feeding, and therefore contact the help desk only in dire emergencies. When they do, their tickets get quickly escalated and resolved at lightning speed because they are helpful, informative and cooperative. Regular users are typically anything but…
Also, please note that when I talk about users I don’t mean folks who don’t know much about computers because they don’t use them all that much. For example, if your work does not involve editing electronic files and using an office suite and email, then I wouldn’t expect you to be an expert. On the other hand, if you have been using Microsoft Word and Outlook every weekday from 9am till 5pm for the last 15 years, then I’d at least expect you to know how to double-space a document or set up an email signature. If you can’t do these things, and you have to call the help desk on average once or twice a week so that they can remote in and do it for you, then perhaps you shouldn’t be in this line of work…
It never ceases to amaze me how many people in the business world not only do not know how to do basic day-to-day computing, but have absolutely no idea how computers and the internet work in general. To them, computers are mysterious black boxes with magic inside, and networks are strange, almost supernatural hyperspace portals between computers that allow magic, gnomes and unicorns to pass through and make things happen on the internet. They hold strange, almost superstitious beliefs about technology that have no grounding in reality whatsoever. Today I would like to talk about these common myths and misconceptions.
#1: Hacking is magic
Mostly due to the portrayal of hacking in popular culture, most people who work with technology every day have a very distorted view of computer security. I think the conceptual model they are working from is that computers have these magical barriers (firewalls) that protect them from unspecified dangers, but which can be circumvented by typing really fast without using the space bar. In essence, no security system is safe, and a skillful hacker can break into just about any computer in less than five minutes if he is good enough.
Typically around the end of the semester, I teach my students how to “hack” computers. I don’t really show them anything practical – I simply give them a quick overview of the workflow used by the character Trinity in The Matrix Reloaded, which to this day is the only Hollywood production that managed to show real hacking. If you have missed the 5-second clip, here is a screenshot:
Basically, you port scan your target, figure out what services are running on it, then you find one that is out of date and exploit it. In the movie, Trinity was able to break into the power plant system because the local sysadmin was an idiot and never updated the SSH daemon on the internet-facing machine. As a result she was able to run a 3-year-old CRC32 exploit on the machine and get root with relative ease.
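That workflow is mechanical enough that the “find the out-of-date service” step can be sketched in a few lines of code. Everything here is made up for illustration – the ports, service names and version numbers are hypothetical, not real vulnerability data:

```python
# Illustrative sketch only: the ports, service names and version
# numbers below are invented for the example, not real vulnerability data.

# Hypothetical output of a port scan: port -> (service, version found)
scan_results = {
    22: ("ssh", "2.9"),
    80: ("httpd", "2.4"),
}

# Hypothetical table: the oldest version of each service that has the fix
patched_since = {
    "ssh": "3.1",
    "httpd": "2.2",
}

def parse_version(v):
    """Turn '2.9' into (2, 9) so that versions compare numerically."""
    return tuple(int(x) for x in v.split("."))

def find_outdated(scan, patched):
    """Flag every scanned service running a version older than the fix."""
    outdated = []
    for port, (service, version) in scan.items():
        fixed = patched.get(service)
        if fixed and parse_version(version) < parse_version(fixed):
            outdated.append((port, service, version))
    return outdated

print(find_outdated(scan_results, patched_since))
# → [(22, 'ssh', '2.9')]  -- the stale SSH daemon is the way in
```

The point is that there is no wizardry in any of this: the attacker wins only when someone left a service unpatched.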
The important bit of this fictional scenario is that it was not magic. The computer system was compromised not because Trinity was a 1337 h4x0r, but because it was not patched properly. The break-in was not a result of the hacker’s brilliance, but of bad IT policies and a lackadaisical approach to software updates at the site. It was easily preventable and wouldn’t have happened if the administrator had done his job.
Just like many other types of white-collar crime (like fraud, for example), hacking works not because hackers possess some innate magical powers, but because laziness, incompetence or mistakes leave computer systems vulnerable. Granted, all software has bugs and potential vulnerabilities, so it is technically impossible to build impenetrable systems. There is always a chance an inquisitive tinkerer can find a way to exploit some previously unknown bug or quirk to gain access. But even then, the actual exploit is a result of human (programmer) error, and not a supernatural skill at typing really fast.
Not to mention that a lot of people imagine that when you are getting “hacked” it actually looks more or less like this:
It’s like a big battle of wits between the black hats and the white hats, and presumably whoever can type more words per minute will win. Or something like that. I don’t really know how people imagine it works. Honestly, if you are getting hacked, you probably won’t know until well after the fact. And if you do notice something fishy going on on your network, there is a very simple, robust solution that will thwart any hacking attempt with a 100% success rate:
Despite popular belief, no hacker can actually get into a computer that is turned off or not connected to the network, unless he actually breaks into your house.
#2: The government can “hack” into everything
The corollary to the above is the unshakeable belief that the government, and especially the CIA, has access to super hacking tools which are above and beyond anything we could have ever seen in the private sector. Once again, I blame Hollywood for this. This belief is on par with the notion that the US government has access to super-advanced alien technologies recovered from the supposed Roswell UFO crash, but for some strange reason refuses to use them.
The truth is that governmental agencies use the same shitty software we use in the private sector. Most of the good security tools are either open source or commercially available worldwide. The proprietary, custom spy stuff they have is likely just as shitty – especially considering it was most likely written by the lowest-bidding contractor, on a shoestring budget and with an aggressive deadline. Actually, maybe I’m too harsh on governmental projects (but seriously, have you seen some of the web apps hosted on .gov domains?) – there are very likely some very talented and highly paid teams working on the mission-critical stuff for intelligence agencies and the military.
Still, I don’t think it is possible for them to have some mega-hacking tools, for the simple reason that “hacking ain’t magic”. We already established that it doesn’t work that way. Furthermore, any security-related technology developed in secret by the government is likely also going to be independently developed in the private sector around the same time. The security world is interconnected: people everywhere work on similar problems, and come up with similar solutions. This is why most computer scientists are vehemently opposed to software patents.
Most computing problems have a limited number of optimal solutions. Very often researchers working independently converge on very similar solutions. Computer algorithms are not necessarily inventions, but rather discoveries. It is probably best to think of them as mathematical formulas or laws of physics.
Recently this myth has flared up with the big news stories revolving around the PRISM surveillance program. But that program never involved superhuman “hacking”. It involved corporations willingly giving intelligence agencies access to their servers.
#3: Everything is a virus
Most people who fall into the non-power-user category are completely incapable of identifying malware. Or rather, they tend to attribute typical non-malware issues, user errors or hardware problems to malicious viral activity, and ignore actual infection symptoms. I guess they assume that a virus is supposed to do something unusual: for example, print messages on the screen, play sinister audio clips, etc. This is typically reinforced by intro-level technology courses, which like to talk about the history of malware and showcase some of the early viruses that were written primarily for the proverbial LULZ. These days malware is primarily written with profit in mind: either used to deliver advertising as a payload, or to create botnets which can be utilized in various ways.
Things like this ought to be common knowledge, considering the proliferation of malware on the internet and the frequency with which users pick it up in the wild. But alas, it is not. Let me run you through some examples that I have encountered in my travels, just to give you an idea of how the mind of a typical user works:
- Screen is upside down – definitely a virus
- Strange popups and redirects – business as usual
- Accidentally associated Word files with Adobe Reader – totally a virus
- Antivirus & firewall disabled and most executables crash – business as usual
- Battery status LED blinking red – OMG virus!
In other words, anything that seems strange or out of the ordinary (to them) is immediately labeled a virus. More mundane glitches, crashes and nuisances are typically ignored and attributed to the incomprehensible and capricious nature of computing devices. Why are there popups everywhere? Who knows – the inner workings of a computer system are impossible to understand.
Typically, when a call comes in about a virus (singular), it is safe to assume the problem is anything but. The exception to the rule is scareware infections. These are easy to identify because they typically claim the user’s computer is infected by hundreds of viruses.
#4: Internet research requires “computer skills”
Google is a company that was made famous by its radical approach to user interface. At a time when most companies aimed to fit as much information on their landing page as possible, Google left theirs bare, save for a single functional element: a text box. To this day, this type of interface remains one of the most intuitive and user-friendly out there. You are looking for something? Type your question into the box and hit search. Don’t feel like typing? Speak your query then – Google has semi-decent speech processing as well. As soon as you do, a list of relevant web pages pops up, inviting you to click and explore, or to revise the search. I can’t imagine a simpler method of interaction between man and machine.
And yet, a lot of users can’t seem to wrap their heads around it. Or perhaps they don’t want to. I can’t tell you how many times I have had friends and relatives call me to ask my advice on a computer problem. The majority of these issues are resolved by simply typing the symptom description or error message into Google and following the top link on the page. In fact, this is likely what happens when you call someone for help with some random problem without bothering to do a tiny little bit of research first. Your friend will literally type what you say into Google and proceed to read off the solution from the first or second link on the page.
If you have done some basic research and followed the easy steps (such as rebooting, downloading updates, running a virus scan, etc.) and got stuck on a technical fix that, for example, requires editing the Windows registry (or typing commands into the terminal), then it is a completely different conversation, mind you. That’s actually a perfectly acceptable and reasonable point at which to seek technical help.
But requests for assistance with Google are not limited to technical problems. Quite often people ask me for help doing basic online searches that have nothing to do with computers, or even with electronics. For example, someone recently asked me to find out what the best kind of propane grill out there is.
If my name was Hank Hill this would be a legitimate and reasonable question. If I was an American dad who actually owns and uses a grill this would be a legitimate and reasonable question. If I ever voiced an opinion in the propane vs charcoal vs wood debate, this would be a legitimate question. If I ever bragged in a manly way about my grilling prowess, this would be a legitimate question. But I am none of these. I treat barbecuing mostly as a spectator sport – which in my case means I stand back and let other people talk excitedly about it while I try to figure out why they find it so interesting. Why was I chosen for this task? Would you care to take a guess?
Because I am a computer guy, and therefore I am good at searching for things on the internet. Apparently my vim skills and my experience in Python, PHP and Ruby somehow make me really good at typing things into a box and reading things off the screen. Unlike troubleshooting computer problems, which can sometimes be a little tricky, searching the internet for non-technical information (such as customer reviews) is something anyone should be able to do. The only skills required to use Google are:
- Ability to type words into a box
- Reading comprehension at grade school level
Once you have these two skills down, the rest is just trial and error. By that I mean typing in a general search query, evaluating the results with respect to relevance, and narrowing down the query by adding or removing keywords if needed. You do not need to be “good at computers” to do this. In fact, you don’t even need to be smart to do it. It’s basic pattern recognition.
I honestly don’t know why people do this, but I have a few theories. One is that users who are otherwise intelligent and well-adjusted human beings simply turn off their brains as soon as they find themselves in the vicinity of a computer, as a defense mechanism that protects them from accidentally learning something new about the tool they use for work every day. Either that, or they are simply lazy and use their “computer illiteracy” as an excuse to get someone to do the research for them.
I can’t even blame Hollywood for this one, because in the movies web search works about the same as in real life – only the default font sizes in browsers are super huge, and no one ever uses Google or any other existing search engine.
#5: WWW goes in front of everything
The www subdomain is kind of weird these days. Back in the Pleistocene, when the internet was still young, we didn’t know that the World Wide Web would become so big and important. It was just something that Tim Berners-Lee cooked up, and which, while really cool, was mostly geared toward the creation of interlinked documents. If you needed to transfer non-text files, for example, there were much better ways to do it than trying to shoehorn them into HTTP packets. We had FTP and Gopher, for example.
Quick aside: did you know that there are still about 160 Gopher servers on the internet, and that you can actually still browse Gopherspace using the Lynx web browser? In fact, browsing Gopher servers with Lynx is actually more fun than trying to browse the web with it. If you want to try it for yourself, Floodgap is a good place to start. Just do:
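Assuming you have Lynx installed, pointing it at Floodgap’s public Gopher host (gopher.floodgap.com) looks something like this:

```shell
# Browse Floodgap's Gopher server with the Lynx text-mode browser
lynx gopher://gopher.floodgap.com/
```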
Anyway, the World Wide Web was a much different place back in the days when Gopher was a protocol and not an annoying garden rodent your dad complains about all the time. We sort of imagined that most internet sites would want to set up service-specific subdomains: so, for example, if you wanted to Gopher around you’d go to gopher.example.com. If you wanted to upload some files you’d go to ftp.example.com. And if you wanted to check the website, you’d go to www.example.com. It made sense at the time.
Then, of course, Gopher became an animal and the web became a big thing. All people wanted to do was browse it, and web designers quickly realized that if you really wanted people to download a file, you just linked to it. It was much clearer and easier than posting instructions on how to access the FTP or Gopher server and hoping the user doesn’t fall asleep, or get bored and return to browsing porn halfway through reading them.
But we ended up with the www subdomain attached to a lot of web addresses. It is sort of like the tailbone of the internet at this point – a mostly useless legacy thing that some people argue helps us in some way with balance, posture or some shit like that. The www subdomain is still typically the default location for the web server, but it doesn’t have the same importance as in the past.
Today, most webmasters set up their DNS records so that you don’t actually have to type it in. For example, it doesn’t matter whether you type www.terminally-incoherent.com or terminally-incoherent.com into your address bar – you will still get to this site no matter what. Some web browsers will also helpfully add the www for you in cases where it makes sense, so you don’t really need to worry about it. We as web designers don’t even want people to think about the three dubs anymore.
But there is a cadre of middle aged users who were introduced to the web at the height of the .com boom who will religiously type in www in front of everything. Especially subdomains. So for example, webmail.example.com becomes www.webmail.example.com. Or even worse: imap.example.com is now www.imap.example.com.
These folks are adamant about their dubs, and they refuse to listen to anyone about it. Hell, they will fight to the death with senior IT dudes over their inalienable right to type www in front of everything. And when I say “fight to the death” I mean “pistols at dawn” style (side note: apparently when you run out of bullets it counts as a draw). As a result, we ended up putting in wildcard DNS records for every subdomain.
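In practice, “putting in wildcard DNS records” looks something like the fragment below. This is a sketch in BIND zone-file syntax with made-up names and addresses, not a real configuration:

```
; Hypothetical BIND-style zone fragment for example.com
example.com.            IN A      203.0.113.10          ; bare domain
www.example.com.        IN CNAME  example.com.          ; the canonical three dubs
webmail.example.com.    IN A      203.0.113.20          ; the actual webmail host
*.webmail.example.com.  IN CNAME  webmail.example.com.  ; catches www.webmail too
```

The wildcard only covers one subdomain’s children, so you end up needing one such record for every subdomain that users are likely to prefix with www.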
#6: Memory is storage
Conceptually, this is not a very intuitive thing to grasp, so I can kinda see why people get confused about it. Computers have two types of memory: main memory, where you temporarily store running programs and currently processed data, and secondary memory, also known as mass storage, where you keep all the illegally downloaded mp3’s and porn. The distinction between these two types of memory is probably one of the most useful and enlightening things you can learn as a casual computer user. Grokking it gives you really good insight into how the performance of your computer is actually measured, and what the specs listed on the computer box actually mean.
A user who understands the distinction between main memory and storage will no longer be baffled by the fact that a really large hard drive doesn’t speed up his computer, or that no matter how many rams you jam into a computer case, it will not increase its capacity to store your pornography and Word documents.
This is actually really, really, really difficult to teach. I spend a lot of time trying to drive this home using all kinds of metaphors – like the brain, for example. You have short-term memory that evaporates in seconds, and long-term memory which lasts for a while, but retrieving from it is sometimes slow. Memory is like your desk, and storage is like the filing cabinet next to it where you put the files once you are done with them. RAM is what you hold in your hands; the hard drive is your backpack. I reiterate, re-explain, show pictures, draw diagrams and constantly return to this topic in just about every lecture (because it ties into so many things). And without fail, there is always one student who walks up to me at the end of the semester asking how many rams they need to store a lot of movies.
#7: The number of files on the hard drive has an impact on performance
This should technically be part of #6, but it is so common that I decided to give it an entry of its own – partly because this misconception is perpetuated by people who don’t even know about RAMs. You can usually make people who have internalized #6 relatively happy by telling them to download a few rams from the interwebs. While the website doesn’t actually do anything, it provides users with pleasurable progress-bar feedback, which causes a placebo-like effect and often alleviates the “my lap top is slOw for no raisins” problem.
For some strange reason, however, a large percentage of the population thinks that a cluttered hard drive negatively impacts performance in a huge way. I’m not sure if this is a throwback to the good old days of non-journaled file systems, when fragmentation was a huge issue, or just a conceptual glitch. I guess I can see how someone could think of a computer as a room populated by little gnomes that do magical things – and if you don’t keep it tidy and delete unused files, it becomes a junk pile, causing the gnomes to get lost while they frantically run around trying to find your shit underneath all the garbage… Or something like that.
I mean, yes – a hard drive that is 99% full and very fragmented will slow you down a bit. But you only take the performance hit when you are reading from or writing to the drive. If you have enough RAM, your computer should chug along relatively smoothly, unless you open up a million things and force it to page to disk. And of course, once you start paging, shit will be slow no matter what, because disk is slow.
This means that no matter how many files you delete from your hard drive, performance is not going to improve much. There are many factors that degrade performance, but having too much porn and illegal music on your hard drive is not one of them. Typically the biggest culprits are resident processes that like to stay in memory and sap CPU cycles at all times. Despite what many inexperienced users think, antivirus suites with big brand names like McAfee and Norton are absolutely terrible in that respect: uninstalling them and replacing them with less memory-intensive alternatives will typically improve your performance noticeably, if not dramatically.
#8: Downloading is a catch-all term
Users really, really, really like to download stuff. Most of them are not entirely sure what that word means, but for some reason they really like to use it in every sentence. On an average help desk call, you can expect the word “downloaded” to be used about 8–10 times, in different contexts, meaning different things. Because fuck consistency, and fuck communication! DOWNLOAD ALL THE THINGS!
To help new help desk drones, we have created the following chart explaining the different ways in which this word can be used:
Downloading: The Definitive Chart
- Installing (software) – I downloaded Microsoft Word from that CD you gave me.
- Installing (hardware) – A technician downloaded some rams into my computer.
- Un-Installing – I downloaded it off my computer to make more space.
- Uploading – I downloaded a file via the web portal, see if you got it.
- Submitting – I downloaded that web form.
- Filling out – I downloaded that web form with my information but it won’t download.
- Attaching – I downloaded it as an attachment in an email I sent you.
- Saving – I chose Save As and downloaded it to My Documents.
- Downloading – I uploaded fox fire from the website but now it won’t download.
I guess the most important thing to remember is that users will never, ever use the word “downloaded” when they want to say they have obtained some file from a website. They will typically use the word “uploaded” for that. Every other computer-related action that in any way involves networks or magic will be branded as downloading. Don’t question it – just refer to the chart, and try your best to guess what they meant. Follow-up questions as to where the file was initially, where it ended up, and where it was supposed to go will let you determine the context.
#9: Computer Science courses teach you how to repair computers
Every relative I have thinks that what I learned in school is essentially computer repair. I mean, it totally makes sense, right? Computers are so complex and magical that you need to spend 6 years in school and get a master’s degree in Computer Science in order to even begin to repair them. I bet a lot of folks seriously think that the dudes who work at Geek Squad all have fancy college degrees in computer science or something.
The truth is that Computer Science is about computers as much as astronomy is about telescopes. They are tools, and the technological underpinning of a science which in itself is more a study of information, algorithms and discrete systems than anything else. It is a science – an academic discipline very close to mathematics; but while math is typically pure abstract theory, CS has some non-theoretical domains in which you get to apply the mathematical models and formulas. A good CS program is not focused on teaching you how to program, but rather forces you to learn programming in order to be able to do more interesting things.
But alas, an average user who can just barely manage to turn their computer on in the morning without having a panic attack is not going to know about this. Why? Because they actively avoid learning anything – we already established that, haven’t we? If you close your mind to new information, you are not being a productive member of society – you are the useless log of flesh we have to drag behind us as we move forward.
#10: Macs are better for graphics
I think the marketroids at Apple are still patting themselves on the back over this particular myth. It is almost uncanny how deeply this baseless assumption got assimilated into public consciousness. I guess, once upon a time, back in the Pleistocene, this might have been true. At some point in history, Apple probably did have a slight edge in this department, with specialized hardware, optimized software and built-in displays they could claim were custom-calibrated for high quality. But that was a long time ago, and it was questionable even then.
Today Macs use the same off-the-shelf hardware you can find in your PC, so there is no edge there – unless you get one of them Retina display Macs. The software graphic designers use is the same on both platforms, and it is made by Adobe, not Apple. In fact, Adobe and Apple are not on the best of terms these days. They certainly have no incentive to make the Mac ports of their famed design suite run any better. In fact, they probably stand to profit if they run worse than the PC versions.
Granted, there are tons of reasons you would want to get a Mac: like the Unix under the hood, or the fact that it is not Windows 8, for example. But the graphics thing: that’s simply not true. There is nothing in OS X or Mac hardware that makes it inherently better at graphic design. Well, except the Retina displays, maybe.
#11: Zoom and Enhance
Speaking of image processing, here is another thing that Hollywood totally ruined, and turned into a joke:
This is especially infuriating because it is super easy to test this at home: just open an image and zoom the fuck in. At some point, the image becomes a mosaic of squares. Each of them is a single-color dot representing a pixel at the resolution at which the picture was taken. A one-megapixel camera will take a picture that contains 1 million such dots, and not a single one more. Each dot carries only one piece of information (other than its position in the grid): its color and intensity. When you zoom into a photo, all you are doing is making the pixels bigger – each dot of the image becomes a number of dots on your screen. At some point they become visible to the naked eye as ugly squares.
You can’t zoom in beyond that, though. You can’t enhance a pixelated image, because there is simply no information there. You only have the pixels that are already in the image, and nothing beyond that. This is a technical limitation inherent to bitmap imaging – it is how all digital cameras work, and there is nothing you can do about it. Enhancement algorithms are not magic – they can potentially clean up a pixelated image and make it less blurry, but they can’t extract information that is not in the file.
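You can demonstrate this to yourself in a few lines of code. Here is a minimal sketch (plain Python lists standing in for an image) of what zooming actually is – nearest-neighbor upscaling, where each existing pixel is simply repeated:

```python
# A 2x2 "photo": four pixels, four values. Plain lists stand in
# for an image here, purely for illustration.
tiny = [[10, 20],
        [30, 40]]

def zoom(image, factor):
    """Nearest-neighbor upscale: each pixel becomes a factor-by-factor block."""
    out = []
    for row in image:
        stretched = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(stretched))
    return out

big = zoom(tiny, 3)  # now a 6x6 grid...

# ...but it still contains exactly the same four values. Zooming
# added more dots, not more information.
assert {px for row in big for px in row} == {10, 20, 30, 40}
```

No matter how large the factor, the output is built entirely out of the input pixels – which is exactly why “enhance” cannot conjure up a license plate.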
The only way to get a clear image of a reflection on someone’s iris is to take that photo with a high resolution camera. A grainy, night vision shot from a gas station security cam is never going to have the resolution you need to do the kind of processing you typically see in Hollywood movies.
#12: I don’t have anything worth backing up
Every time I talk to users about backup strategies, they give me the same line:
“I don’t think I really have anything worth backing up on there.” This is, of course, a blatant lie. Well, maybe not a lie, but a misconception. The fact is you do have stuff on there that you would loathe to lose – you just don’t realize it yet. Do you know how I know? Because I like to play a game that goes like this:
“Hey, remember how you told me you don’t do backups on your computer because you don’t have anything important on there? Well, yesterday when you dropped it off at my place, I just straight up re-imaged it without saving any of your stuff.”
Then I enjoy about 15 minutes of cursing, yelling and crying before I tell you I was kidding, and that I actually backed up all your stuff. If you didn’t have anything important on your hard drive, you wouldn’t be whining and crying just now. You do have shit on your computer you don’t want to lose. Everyone does. This includes pictures, documents, locally saved emails, bookmarks, saved passwords, and even configuration details. Even doing as little as backing up your Chrome/Firefox profile folder saves an incredible amount of grief and pain later on. Instead of trying to find all your favorite sites again and remember passwords to the more obscure ones, you can have your browser open up to your last session as if nothing had happened.
If you really have nothing to lose, then you should have no problem wiping your hard drive and re-installing the OS right now. If all your stuff is in the cloud, and you have nothing saved locally that is not synced up to some remote server, then you are all set – you probably don’t need to do backups. You can let your cloud provider and the NSA mirror your shit for you. But as long as you save anything locally, backing up should be a habitual, almost involuntary reflex.
So, what are your favorite myths and misconceptions about computing? Let me know in the comments.