software engineering – Terminally Incoherent

Software Imitating Real Life Solutions: A Design Trap
Wed, 13 Apr 2011 14:18:46 +0000

On Monday, I wrote an extended post on calculators in which I mentioned a common software design pitfall. I wanted to talk about it in some more detail because it is a fairly interesting topic. Software calculators are an excellent jumping-off point for this discussion because they help to illustrate the issue quite clearly.

If you work as a software developer, there will be many times when users ask you to automate or digitize some existing process. There is something that they are doing on paper or with some cumbersome electronic tools, and they want it done with computers. This is where you step in. You analyze the process, gather the requirements and design a solution. What the users need is an improved, automated version of that process. But what they think they need, and what you end up building, is usually a direct imitation of their process in software. Which is nowhere near as helpful as it could be.

Let’s use my calculator example. Let’s say software calculators do not exist. A user brings you his desk calculator: a really cheap, simple, solar-powered thing, and asks you to make something like it in software. What you will probably end up doing is this:

[Image: a physical desk calculator next to the Windows calculator. Caption: "Flawless implementation... but stupid..."]

The standard Windows calculator is pretty much an exact replica of a real-life calculator, along with all of its flaws and hardware limitations. The calculator on the left can only display a single result on its screen, because it is using a tiny, low-powered LCD and has very limited on-board memory. The calculator on the right has no such limitation. It could easily use a bigger output window to display a scrollable history of results, but it does not. Why? Because the developer who made it was more concerned with imitating the real-world object than with actual usability features. This might be an arguable point, but IMHO a history display would be much more useful than strict adherence to an established calculator look and feel.

The problem is that the designer of the calculator application probably never even considered this feature, because it was not part of the requirements. The requirements were based on the real-world calculator, which used an LCD for output. So the original requirements document probably specified that output is to be done in a text box that can only display a single number at a time. Why? Because that’s what the user requested. The user wanted a software calculator, and so that is what you are delivering.

As you can see, this is a very complex issue. It is easy for me to sit here and poke fun at a badly designed software calculator, but you can’t really take too many liberties with the user’s requests when gathering requirements. I might think that a history display would be useful, but frankly my customer may not care. So I might end up building and billing them for a feature they not only did not request, but also did not need or care for.

It all boils down to gathering requirements. In my opinion this is by far one of the most difficult, and most important, parts of software design. If you don’t get this part right, then the entire project will be off kilter. And even if you get all the requirements down, you may still end up building the Windows calculator. An exact replica of a very, very limited tool.

The problem with this design stage is that most of the time users don’t know what they need. Firstly, they don’t know how to articulate their needs to you because they do not speak the same language. What you may call a radio button is a “thingie” to them, and your combo-box is a “whachumacallit”. Half the time talking to users is like playing Pictionary.

Secondly, what they think they need is not always what they really need. For example, a user may ask you to implement a desk calculator application simply because they have never used a graphing calculator. I’m still using calculators as a metaphor – most kids in the US get exposed to graphing calculators in high school or earlier – but it is a good example of a user asking you to implement something really simple and basic because they don’t know a bigger and better version of that thing exists. They don’t know that a calculator could actually solve symbolic equations for them, do unit conversions, trigonometry, calculus and so on. And sometimes there is just no way of convincing them that they could use these extra features. Sometimes the only way to do it is to build the basic calculator, but use a design that can be easily extended. This way, if six months from now they realize they want all these extra features, you can add them without redesigning the entire app.

Thirdly, it often does not occur to them what can be done with automation. It is very common for a user to request a calculator app when what they really need is something entirely different. And you won’t realize that unless you ask them all kinds of silly questions. Like, why do they need the calculator? It’s a crazy thing to ask, right? But sometimes the answers may surprise you. For example, your user may tell you that they need it to crunch numbers for this lengthy report they have to submit every day. In fact, they do the same batch of calculations every time – all that changes is a few key values. So in effect, what they really need is an automated version of that report. You could easily build them a form that would let them just key in the few variables, and then print out the results. But it simply never occurred to them to ask for that. They figured a software calculator would be nice because they could copy and paste results instead of keying them in, avoiding typos. But they did not make the next obvious mental leap of imagining their whole process being automated.
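That "automated report" idea amounts to very little code. Here is a minimal sketch assuming a hypothetical daily sales report – the function name and the formulas are made up for illustration, not anything a real user asked for:

```python
# Sketch: instead of a general-purpose calculator, capture the fixed
# batch of calculations once and let the user key in only the few
# values that change every day. All names and formulas are invented.

def daily_report(units_sold, unit_price, returns):
    """Run the same calculations the user would punch in by hand."""
    gross = units_sold * unit_price
    refunds = returns * unit_price
    net = gross - refunds
    return {"gross": gross, "refunds": refunds, "net": net}

if __name__ == "__main__":
    report = daily_report(units_sold=120, unit_price=9.50, returns=3)
    for line, value in report.items():
        print(f"{line}: {value:.2f}")
```

Once the calculations live in one place like this, "print out the results" is a formatting detail, and the user never re-keys a number.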

Believe it or not, this happens all the time. I have learned the hard way to ask silly, crazy sounding questions when gathering requirements, because half the time the next thing you hear them say is “Wait… You could do that?” Users very, very frequently underestimate what computers can do for them. They also overestimate, but that’s a topic for a whole other post.

Let’s talk about real-life examples. Have you ever run into this issue? Are you guilty of building such an application? I know I have done this a few times, though I don’t think I ever created anything worthy of The Daily WTF treatment.

One thing I remember making is a bunch of VBA macros that would import specific tables from Excel into a Word document, fix page breaks, change headers/footers, and so on. Essentially, what they wanted was a way to imitate what an actual employee would have to do to create a particular report. The problem was that the employees were all technologically challenged and would constantly paste in tables as pictures, mess up the formatting, screw up page numbers, and so on. So some reviewer would always get stuck with the task of fixing all that shit. Of course, what this company really needed was to replace their mish-mash of Word and Excel documents with a full-blown desktop application that would validate user inputs, crunch the numbers, do some post-processing and then automatically generate the report document as a PDF. But that idea was shot down every single time we brought it up. Instead, we ended up with mountains of VBA code in both Word and Excel documents, leading to lots of interesting version-mismatch issues and annoyed users copying and pasting data into blank worksheets to get around the validation scripts and restrictions we were trying to enforce.

But that’s kinda boring. So I have a funnier story. The Chinese food place near my work used to accept orders via fax. They had check-boxes on their menu, so you could simply check all the stuff you wanted, fill out a sheet with your address and then fax it to them. It was kinda silly, but it worked. Eventually though, they decided to catch up with the times, and finally implemented an online ordering system. Naturally, I decided to give it a whirl.

The website looked crappy, but I didn’t mind. It was pretty easy to navigate, and placing an order wasn’t a major headache, so I did not hold the outdated look and feel against them. I was actually happy that they had finally upgraded their technology… up until I submitted my order and saw a progress bar, accompanied by a stern blinking warning not to close the window until I got an order confirmation. The text underneath the progress bar said things like “Establishing connection…”, “Dialing…”, “Connection busy… Retrying…”. It went on like this for about 5 minutes before the order finally went through.

At that point it dawned on me: they did not upgrade shit. They were still taking orders by fax. Someone had simply written them a front-end web application that took the order from a customer and then launched some sort of faxing application that sent it to the restaurant. I couldn’t believe it, so the next time I was picking up my food there I asked them about it. The lady at the counter smiled at me, pointed at the dirty old machine in the corner and said “Yeah, you place an order online and it comes out right here. Pretty cool, huh?”. All I could really do was nod in agreement. Though cool wouldn’t be the word I would use to describe it.

How about you? Do you have any examples like this? Let me know in the comments.

On Optimization
Tue, 21 Jul 2009 14:50:04 +0000

Here is an interesting story that I got from one of the old-timers in our industry. The guy who told it to me used to be a COBOL developer back in the day when COBOL was the “bleeding edge” technology. He no longer works in the field, and he has sort of lost track of the technology train.

He told me that he was recently working on some deal with the first company that hired him out of college. They gave him a brief tour and talked about the upcoming upgrade of their billing/accounting/everything-else system. Apparently they were finally moving from their old, cryptic COBOL application to a brand new one written in ASP.NET. A few more prodding questions confirmed his suspicion. The COBOL app was the exact same system he had helped to design twenty-something years ago. It was some of the worst, buggiest and most unreadable code he had ever written in his life (being green and fresh out of school) and yet it was still in operation.

That’s not all though. He asked them how come they had never replaced the system with something more modern until now. It turns out that they had tried. This was actually the third attempt at migrating to a new technology. The previous two had failed miserably. Their development teams did produce viable code which fared pretty well in small-scale tests. But when they actually tried to run full-scale operations, the ASP app would just grind to a halt.

I chuckled. I was not surprised. “Back in the day, people knew how to write code. We are so spoiled by Moore’s Law that we have forgotten how to do it these days”, I mused. He nodded in agreement.

You see, the COBOL system processed millions of records every day. Even though it was old, and running on ancient hardware, each batch would only take seconds to crank out. It was stable, reliable, and the COBOL old-timers had optimized the shit out of it over the last 20 years using every known trick in the book. The app was maintained by a shrinking cadre of wise bearded fellows who scoffed at newfangled concepts such as objects, polymorphism or encapsulation. They, however, knew exactly how to shave a few seconds off an operation by writing intricate spaghetti code.

The ASP, and then ASP.NET, code on the other hand was written by groups of greenhorns fresh out of college. They were idealistic and excited about their project. They wrote great object-oriented code, split into clearly defined modules. They leveraged open source libraries. More importantly, their code ran on top-of-the-line servers – the best money could buy.

And yet, each time they put it to a real-life stress test, the ancient COBOL kludge would run laps around them. It would process 10 thousand records before their code even finished initializing. What took the old app 10 minutes would take 2 days on the .NET platform, despite running on hardware that was at least 10 times as fast.

They could not take such a major hit in speed, as it would hurt their productivity. So twice they shelved the project, waiting until hardware finally caught up. Yep, they were waiting for hardware to catch up so that they could even hope to match the performance of a 20-year-old application. Every once in a while they would brush off the code, install it on new, juiced-up servers and have another crack at it. They didn’t bother rewriting it, because so much money had been sunk into it in the first place. So each consecutive team assigned to it would just do some minor refactoring. This time, however, they were sure of success. The initial tests revealed that the newest incarnation of the ASP app was only 30% slower than the COBOL app, which was considered an overwhelming success.

Not only that, they explained, but in 2-3 years the hardware will become twice as fast, which means that they might actually be able to match or even exceed the COBOL performance. Imagine that.

I’m not taking a crack at .NET or modern programming paradigms here. There is nothing wrong with either. Someone could argue that writing this code in C would be a better idea from an optimization point of view. Then again, modern JIT compilers can often optimize the executed code at run time better than a C guru could ever do by hand.

In fact, there is no reason why COBOL code running on old hardware should really outperform .NET running on a modern rig. None besides crappy coding on the .NET platform. Back in the day, when memory and disk space were scarce and each CPU cycle was important, people knew how to optimize code. They knew how to write programs that would scale well under limited resources. They had to learn these tricks because there was simply no way to shove hundreds of megabytes of data in and out of memory like there is today. When they wrote code, they had to think about how it would work with large data sets.

Over the years we have sort of become lazy and complacent. I’m as guilty of it as everyone else. When I write code I hardly ever consider large data sets. I just make sure the important columns in the database are indexed, and that my query is not braindead. I hardly ever look at the actual logic within my program. I write deeply nested loops without thinking about scalability. It has become sort of a pathology.

I became painfully aware of this while working on my thesis. When I was forced to do operations on big data matrices over and over again, I had to go back to basics – get rid of fancy objects, iterators and all that jazz, and just use simple loops and arrays. And even then I was struggling, because no one ever taught us how to really approach practical optimization. I mean, we talked about it in theory, and we were taught about algorithms. But no one really bothered to teach us practical things, such as good ways to identify bottlenecks in your code, or practical optimization tricks.
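For what it’s worth, that lesson can be demonstrated with nothing but the standard library. The sketch below times a nested loop against an algebraically equivalent rewrite – the kind of restructuring that no amount of faster hardware substitutes for. The functions are invented for illustration:

```python
# Both functions compute the same quantity, but the nested-loop version
# does O(n^2) multiplications per call while the factored version does
# O(n) additions and a single multiply.
import timeit

def pairwise_sum_naive(xs, ys):
    total = 0
    for x in xs:
        for y in ys:
            total += x * y
    return total

def pairwise_sum_factored(xs, ys):
    # sum_i sum_j (x_i * y_j) == (sum_i x_i) * (sum_j y_j)
    return sum(xs) * sum(ys)

if __name__ == "__main__":
    xs, ys = list(range(300)), list(range(300))
    assert pairwise_sum_naive(xs, ys) == pairwise_sum_factored(xs, ys)
    naive = timeit.timeit(lambda: pairwise_sum_naive(xs, ys), number=10)
    fast = timeit.timeit(lambda: pairwise_sum_factored(xs, ys), number=10)
    print(f"naive: {naive:.4f}s  factored: {fast:.4f}s")
```

The point is not this particular identity; it is that measuring first (here with `timeit`, or `cProfile` on a real program) tells you which loop is actually worth restructuring.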

I guess everyone assumed you would pick up stuff like that on your own. Or that it would get drummed into you at your first job. Or people simply forget about it. After all, parallelism is the sexy thing to talk about these days. So instead of finding bottlenecks and eliminating them, let’s parallelize the code and make it run on a cluster. Which is a valid approach, but not in every situation. Every once in a while you run into a situation like the one I described above. There is an ancient COBOL app running on ancient hardware – and it cranks out results faster than your code written in a modern language, running on a modern computer.

Does it mean that we have lost our edge? Does it mean we have forgotten how to write efficient code that will run fast even with very limited resources? Not really. There are still people out there who can do this sort of thing well. And of course over-optimization can be harmful too.

This is just something to think about. Situations like this one happen in real life, and they are quite ironic. I wonder how the .NET development team justified their poor performance to management.

Small Programming Projects
Thu, 30 Apr 2009 14:57:46 +0000

Let me know if this has ever happened to you. Your boss walks up to you and tells you he has a tiny little, itsy-bitsy project for you. He wants you to build a small online application. Nay, a small online form – just a single form with some database back end. Nothing more. A single page that would let employees submit their TPS reports (or whatever) online. It’s only going to run on the intranet, it will never face the web, and there are only 4 people who will really be using it, so there is no point worrying about authentication, user management and all that stuff – just do the bare-bones minimum necessary to tell the users apart and keep them from overwriting each other’s data.

You implement it, and everyone loves it. The boss pats himself on the back for his clever use of technology and tells you to add a tiny little feature to make the thing accept quarterly evaluations as well, and to extend the user base to, like, 20 people. That’s about it. And you shouldn’t really worry about expanding it. It’s not going to get any bigger than this. It’s just a tiny change. No need to add anything else. Don’t spend any time improving the design. It’s not necessary, and you would be wasting time.

Next week you get another request. Then another. And another. Each time it is a tiny little change – and you are explicitly told the application is not going to grow and does not need a redesign. Six months down the road your application is the main online hub of the company. It’s facing the web, it’s tracking just about every little bit of data your boss could think of, it stores a few gigs of data, and every employee, client, applicant and visitor must interact with it at one point or another. It is a monster, and the code is a labyrinthine maze of hacks, patches, workarounds and hastily added modifications aimed at adding stability and security to something that was not designed for it in the first place. And better yet – no one, including you, can believe how huge it got and how quickly it happened. No one saw it coming. No one could have predicted it would actually be taken this far.

I’ve seen this happen multiple times, and heard about similar scenarios from others. Very small, simple projects have a tendency to blow up and grow exponentially into huge enterprise systems. You never know which project is going to bloat this way. In fact, you probably won’t know that one of your projects is on this destructive path until it’s too late. It happens in small incremental steps, spread over a long period of time. But it happens.

And since we know it happens, we can be prepared for it. The easiest way to keep exponential growth of this type from becoming an issue is to always code as if you were designing something 5 times as big. Always modularize your code, build your applications in a classic 3-tier system and always use MVC or a similar paradigm. If your app bloats into some sort of a monster, you will have the infrastructure in place to support it and build it up. At least for the most part.

Then again, this approach flies in the face of the KISS principle. Sometimes coding this way is indeed overkill. Sometimes a single form will just be a single form, and building a 3-tier architecture and creating/deploying some sort of framework to support it may be a huge waste of time and resources. Sometimes a quick, dirty and direct approach can be vastly superior to the roundabout enterprise way.

So I guess the trick here is to do something in between these two extremes:

  1. Always assume your code is going to face the real internet, even if the initial spec says it won’t. Build with security in mind.
  2. Always assume you will need multiple access levels for your users, and flexible access controls.
  3. Never assume your data won’t bloat out of proportion. Never use SQLite or SQL Server Express when you could be using something more scalable.
  4. Design your database for extensibility – normalize your design and be aware that you might be adding more tables and more complex relationships into the mix in the future. This shouldn’t be much effort, since your tiny project will probably need only a handful of tables.
  5. Try to do some data modeling and object-relational mapping, as this will help when scaling the code later. Since you are starting small, this should be easy to do, and it will keep your code clean and organized.
  6. Design or steal a robust login / user authentication module. One day your application may become the main login to the intranet or a myriad of tacked-on services. Don’t half-ass it. Also think about how you can handle authentication from robots, since you may need to set up complex web services one day.
  7. Use an off-the-shelf module or solution if possible. Why? Chances are the author already spent a considerable amount of effort ensuring the extensibility and scalability of their code. So when that inevitable feature request comes in, you have that much less work to do. And even if it is not very extensible, you may still be ahead. It is much easier to justify the need to do some rewriting or redesigning when you can blame it on inferior 3rd party code.
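As a sketch of what this looks like in practice: even a one-form app can route all persistence through one thin class, so that when the small database inevitably stops being enough, only this class has to change. The class and table names here are invented for illustration:

```python
# Sketch: a thin data-access layer. The rest of the app calls save()
# and for_user() and never writes SQL directly, so swapping SQLite for
# a beefier database later touches one class, not the whole app.
import sqlite3

class ReportStore:
    """All persistence goes through here."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS reports (user TEXT, body TEXT)")

    def save(self, user, body):
        self.conn.execute(
            "INSERT INTO reports VALUES (?, ?)", (user, body))

    def for_user(self, user):
        rows = self.conn.execute(
            "SELECT body FROM reports WHERE user = ?", (user,))
        return [body for (body,) in rows]

if __name__ == "__main__":
    store = ReportStore(sqlite3.connect(":memory:"))
    store.save("alice", "TPS report, week 1")
    print(store.for_user("alice"))
```

It starts on SQLite precisely because the project is tiny; the abstraction is what keeps tip 3 honest when it stops being tiny.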

Feel free to add your own tips to this list. Did this sort of thing ever happen to you? I speak from experience here – I still maintain a system that started like that. I was a young, naive and green undergrad, and I got a teensy little web project. And now here I am, many, many moons later – still maintaining the damn thing. It grew into a behemoth – an overwhelming mountain of code, some of which is so crappy that I am the only brave soul that dares to touch it.

I also inherited a system like that once. I rewrote most of the UI (that was actually my assignment) and left most of the back end intact – trying to avoid the hairy code on that side of the application. I then happily passed it on to the next brave soul.

How about you?

Software Calculators and UI Design
Mon, 02 Feb 2009 16:19:02 +0000

I have noticed a very peculiar UI design pattern lately. Have you ever used a calculator program? I’m sure you have, at least once in your life. Most operating systems ship with some sort of a calculator tool. Strangely enough, they all look the same:

[Screenshot: a typical software calculator]

Can someone explain one thing to me though? Why the keypad? I mean, why is it there on every single fucking calculator – whether you are using calc.exe on Windows or xcalc on a real operating system – there is always the keypad. Why?

Your computer already has a perfectly well designed calculator input interface in the form of the numeric pad on your keyboard. If you haven’t noticed, it was actually designed to look like a calculator, with the tight clustering of numbers and dedicated buttons for the basic math operators placed in strategic, easy-to-reach places:

[Screenshot: the numeric keypad on a standard keyboard]

Does anyone actually use mouse input when using these software tools? Virtual keyboards are OK if you are paranoid and afraid of keyloggers, but they are not great for data input unless you are using a touch screen or a tablet with a stylus. I personally find the numpad faster, easier and more intuitive to use than an on-screen input.

This design tries to mimic a real physical, low-end calculator, forgetting that these things generally suck. If I know I will need to do some number crunching on paper, I use my trusty TI-86 – not one of those simplistic pieces of crap with a one-line LCD display.

It seems that most of the simple calculator programs out there approach the problem wrong. They forget that the cheapo calculators are built the way they are simply to cut costs. The LCD is so tiny to keep the production costs down. But there is no reason why a software calculator should not display recent calculation history the way high-end graphing calculators do. Instead of using a big output screen for recent results, they build a huge and unnecessary number pad, and restrict the results to a tiny little box that can only hold one number at a time. WTF?
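The history feature these replicas are missing amounts to almost no code. A minimal sketch, using Python’s `eval` with an emptied namespace as a stand-in for a real expression parser (a shortcut for illustration, not something to ship):

```python
# Sketch: a calculator core that keeps a scrollable history of results
# instead of a one-number display.

class HistoryCalculator:
    def __init__(self):
        self.history = []  # list of (expression, result) pairs

    def evaluate(self, expression):
        # eval with no builtins stands in for a proper parser here
        result = eval(expression, {"__builtins__": {}}, {})
        self.history.append((expression, result))
        return result

if __name__ == "__main__":
    calc = HistoryCalculator()
    calc.evaluate("2 + 2")
    calc.evaluate("10 / 4")
    for expr, res in calc.history:  # the "scrollable history"
        print(f"{expr} = {res}")
```

All the UI has to do is render `history` in a big, scrollable output area – the part the one-line-LCD imitations throw away.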

Believe it or not, this pathology runs deep. I was surprised to find that someone actually built a front-end for the Unix bc tool using precisely this philosophy – a big virtual keypad and a one-number display box. There are, however, a few tools designed the right way. The most notable one, of course, is the Windows XP PowerCalc:

[Screenshot: Windows XP PowerCalc]

I believe I have mentioned before that this is, to me, one of the better calculator UIs. It has a nice big box where it displays results, a graphing area, a list of constants and variables, and helpful hints just above the input box. Did you notice the lack of a virtual numpad? Yeah, when you get rid of it, you suddenly get more space for all this other cool stuff.

SpeedCrunch is not bad either. The default setup does include the stupid keypad, but it is a Linux app. That means you can easily configure it and make it into something very minimalistic:

[Screenshot: SpeedCrunch with the keypad disabled]

I guess there is a lesson in UI design and usability here. Designing a software calculator to work like calc.exe or xcalc is like building a word processor that acts the same way as a real typewriter. Imagine if MS Word only allowed you to use a monospaced font, and would not let you delete or backspace erroneously entered characters, forcing you to retype the whole page every time you made a mistake. Fortunately, at some point someone figured out that you can actually design something much better than a virtual typewriter. We still haven’t really made that leap with calculators, though. When you tell someone to build a calculator tool, they immediately start thinking about building that virtual keypad. Why? Because that’s what a calculator is, right? A numeric keypad with an LCD.

What they should really be thinking about is what a calculator actually does, and how its functionality can be improved and extended in virtual space.

What is the point of explicit typing again?
Mon, 07 Jul 2008 15:30:56 +0000

I notice that C# 3.0 has implicit type declarations, which is actually a very interesting step toward making this language look less like a cheap Java clone. It is a very neat feature that lets you declare your variables like this:

var s = "Hello";
var d = 1.0;
var numbers = new int[] {1, 2, 3};
var orders = new Dictionary&lt;string, int&gt;();

By a show of hands, let’s see who doesn’t approve of this feature? Who here thinks that we should explicitly declare the type on the left side of an assignment statement? Yeah, you guys are all wrong. Please tell me, why is it necessary to write:

String foo = "bar";

Could foo possibly be anything other than a String in this line? String can’t be subclassed, and no other data type can be instantiated this way. So the word String doesn’t tell me anything that I wouldn’t know by looking at the line anyway. It also doesn’t tell the compiler anything new, since it can infer the type quite well. Forcing people to type String every single time they declare one is just cosmetic… no, pedantic anal retentiveness. Hell, even Scala folks use implicit type declarations – and these are the people who think that static typing in Java is not static and not type-safe enough.

Explicit declarations are quickly becoming a thing of the past. The trend is moving towards less verbose, more compact and more elegant syntax – in both the dynamic and static typing camps. C# seems to be adapting to this new reality.

When I first saw C#, I described it to a colleague as “a Java clone written by C++ people, or a C++ clone written by Java people”. It seemed to have been created to combine the best of both worlds and draw both Java and C++ developers in. But that was a long time ago. Java is no longer the cool and hip language. It is “your dad’s language” – an outdated, bloated dinosaur. It’s not going to go away, but it is no longer cool. I think Microsoft made a smart move here, distancing itself from it. I mean, implicit data types, lambda expressions and all that jazz… They might actually have a chance to remain relevant as Java slowly transitions into its new life as a senior citizen of programming languages. Actually, scratch that – relevant is the wrong word here. Change it to popular, well liked or buzzworthy.

Both C# and Java will remain relevant for years to come, just like C is still relevant now. No one thinks C is revolutionary, hip, cool and awesome anymore. But no one seems to be eager to throw it away and forget about it either. Universities still teach it, and companies still hire C programmers. The same fate awaits Java and C#. They will be around for a long, long time.


Installation Wizards are not always User Friendly
Tue, 12 Feb 2008 16:17:42 +0000

Installation wizards have their place. For example, when you are installing and configuring an operating system, a wizard is your best friend. The design of the wizard is paramount, as it is often the very first thing your user sees. First impressions are crucial, and if your install procedure sucks, it puts your whole application in a bad light. That said, sometimes wizards – no matter how pretty and helpful – are just unnecessary overhead.

For example, let’s compare the way a Windows user and an Ubuntu user perform a typical installation. The Windows user will start by downloading the package from the internet or popping the CD into the drive. From there he will typically have to go through the following steps:

  1. Look at some generic “This installer will guide you through the process…” screen
  2. Agree to an EULA
  3. Pick program components to be installed
  4. Confirm he wants to install the application in C:\Program Files\The Application
  5. Decide if he wants shortcuts to be added to Start Menu and Quick Launch
  6. Review the summary of chosen settings
  7. Stare at the progress bar
  8. Confirm that he wants to run the application and/or read the README file
  9. Start using the application

Note that I omitted the common steps of entering the product key and online activation to keep things fair. Let’s assume the Windows user was installing a multi-platform open source application – something like Firefox or another similarly popular and ubiquitous app. To install the same application a typical Ubuntu user would just open Synaptic and find it on the package list (the equivalent of finding and downloading the package from the web) and then:

  1. Click on the Install button
  2. Stare at the progress bar/scrolling text
  3. Run and start using the application

What is the difference here? I mean other than the fact that one system uses a repository and the other downloadable packages – that bit is actually inessential here. The major difference is that in Ubuntu the application is silently installed in the background without asking the user any stupid questions. And believe it or not, this is a much more user-friendly way of handling installation than a pretty looking wizard.

The average user wants the default components to be installed in the default directory with the default set of options and shortcuts. Think back on recent Windows applications which you have installed – how often do you change the default installer options to something else? I typically just leave the default settings, unless the app ships with annoying add-ons like toolbars or other adware and lets you opt out of them by un-checking a box or two in the installer. Other than that, I typically just click next.

Tons of applications these days provide a silent install option that can be invoked by passing a special command line parameter to the installer binary (typically something like /S, /Q or –silent). This performs the whole installation in the background, choosing all the default settings in a way similar to that used by apt and Synaptic on Ubuntu. But the default installer makes us jump through multiple hoops instead. Why is that? Why can’t the silent install be the default option and the detailed wizard be invoked by some command line switch?
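For reference, a hands-off install usually boils down to one command line. A minimal sketch – the helper function is mine, but the switches are the common conventions for NSIS, Inno Setup and MSI packages (always check the specific installer’s docs):

```python
import subprocess

# Common silent-install switches by installer framework:
# NSIS uses /S, Inno Setup uses /VERYSILENT, and MSI packages
# go through msiexec with /qn ("quiet, no UI").
def silent_install_cmd(path, flavor="nsis"):
    if flavor == "msi":
        return ["msiexec", "/i", path, "/qn"]
    switch = {"nsis": "/S", "inno": "/VERYSILENT"}[flavor]
    return [path, switch]

# subprocess.run(silent_install_cmd(r"C:\temp\setup.exe"))  # hands-off install
```

That is all apt and Synaptic effectively do on your behalf – pick the defaults and run the install without asking.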

Usually, the people who would want to take advantage of the customization options in the installer are power users who could easily figure out how to trigger this hidden mode. The rest of us would simply hit a button, wait a few seconds and enjoy the application.

It seems that folks in the Linux and Apple camps always knew this. Ubuntu, for example, only uses wizards for configuring complex pieces of software – like the OS itself. In the Windows world however, the installation wizard is king for apps big and small. It is sometimes quite ridiculous – for example, it is not uncommon to see a 2-3 MB application composed of a single executable requiring a 6 or 7 step installation process. I know, because I have created such installers myself. Most of us are so used to them we hardly even notice them, but if you sit a complete novice in front of the computer and tell them to install some software, every extra step they have to take is another occasion for doubt and panic.

And no – I’m not making this up. I have actually been asked by coworkers to stand there and watch them install this or that application. They would click the installer, hit the first question and look back at me. I would nod, and they would proceed to the next one. Most of them would then apologize for taking my time and explain they were simply afraid they would mess something up if they answered one of the questions wrong. They just didn’t realize all they had to do was click next repeatedly.

Adobe has already figured this out. When you go to install their PDF Reader app these days, all you need to do is click the Install button on their page. Then an ActiveX or XUL window pops up and displays a progress bar. There are no questions asked, no configuration options to be chosen. The reader is just installed, and then the progress bar disappears, letting the user know the application is ready for use. It’s clear to me that these guys get it. They did the usability research and noticed that most people just click next all the time. And if the overwhelming majority picks the default settings, then why even ask? Just install the app with the most common configuration and provide a mechanism for power users to circumvent it.

[tags]installation, installing, apt, synaptic, windows, usability[/tags]

]]>
http://www.terminally-incoherent.com/blog/2008/02/12/installation-wizards-are-not-allways-user-friendly/feed/ 13
DRM Software Industry Must be a Cash Cow http://www.terminally-incoherent.com/blog/2008/01/18/drm-software-industry-must-be-a-cash-cow/ http://www.terminally-incoherent.com/blog/2008/01/18/drm-software-industry-must-be-a-cash-cow/#comments Fri, 18 Jan 2008 16:40:25 +0000 http://www.terminally-incoherent.com/blog/2008/01/18/drm-software-industry-must-be-a-cash-cow/ Continue reading ]]> I realized something today – we are in the wrong industry, guys! We should all be writing DRM software! I mean, at least in theory. I would never do it because I find the idea of DRM morally reprehensible and intrinsically flawed. In fact, I think most self-respecting programmers think the same way and stay away from that sector of the market. But it must be a fucking cash cow!

Ok, you don’t see it yet. Let me explain. Imagine doing highly abstract cryptography for people who are so technologically inept that they can’t even spell the word cryptography. Imagine working on products that no one actually expects to work. Let’s face it, not even the big fat movie studio executive who just paid a few mill to some shifty software company expects their DRM to actually prevent the final product from hitting usenet and torrent boards. And best yet – you don’t even have to do much quality assurance, because your client doesn’t really give a fuck how this software will affect the machines of their clients. Even if you fuck up, and write something that actually can damage end-users’ optical drives (hi there Starforce!) you still get paid. It’s your client, not you, who will need to deal with the customer support, the bad PR, refunds, etc. Hell, maybe they will even hire you back to write another DRM scheme for them.

What was the last big DRM thing? BioShock? Yes, it’s old news but I don’t recall anything more recent – I haven’t been paying much attention. That one however generated so much buzz it actually registered on my radar (few things do these days). Personally I haven’t used it, but I hear that the game has not only a built-in rootkit, but also a multi-step online activation process, and that it calls home all the time. In fact I hear that most people who bought it just downloaded a crack to get rid of that garbage.

If you were too slow, I will repeat it for you slowly:

In fact I hear that most people who bought it just downloaded a crack to get rid of that garbage.

Yes, DRM is such a pain that legit customers are cracking their own legally purchased copies (invariably breaking the DMCA) just to get rid of it. Can you see the irony here? The copy protection which was supposed to maintain the integrity of the package and prevent this sort of thing from happening is being easily removed by a widely available patch that appeared a week after the release of the game.

I guess we can’t forget about AACS and the lovely t-shirt I bought that has their super-sekrit encryption key printed on it. :mrgreen: DRM is really a joke – and not a particularly funny one at that.

Remember Bob, Alice and Eve from your cryptography lessons? Bob and Alice always try to communicate, while Eve is listening. Most cryptographic problems involve securely passing information between Bob and Alice while protecting it from Eve. DRM poses a peculiar problem because it does not follow this model. When you work with DRM you want to send messages between Bob and Alice while protecting them from… Alice. After all, Alice can’t be trusted, as she might share them with Eve. You can probably see why serious security researchers don’t actually bother working on problems like this – it’s stupid, and unsolvable. If Alice can read and comprehend the message, she can pass it to Eve. Period. The entertainment industry calls this “The Analog Hole” while the rest of us refer to it as “The Reality”. The problem with this supposed hole is that it can’t be closed with software. That’s just how it works – you have to use hardware. Can you see where this is going?

Nah, you don’t see it. I didn’t see it at first either, so let me tell you. Who do you blame when your DRM gets cracked? Anyone? Anyone? The hardware vendor, of course. You thought I was gonna say “the previous developer” but no – that’s who you blame at a real software shop. At a DRM shop you blame the hardware vendor for dropping the ball, and not making their shit impeccable and impervious to everything, including a voltmeter and soldering iron attack. At some point the data must be analog, unless they figure out a way to stream content directly into a wetware DRM chip implanted in your head. So really, this is all a matter of where you patch into the electronic system to recover the data.
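The software side of the futility fits in a few lines. A toy sketch – the XOR “cipher” here stands in for any real scheme, because the point doesn’t depend on the cipher: whatever the player can decrypt, the user can dump.

```python
from itertools import cycle

KEY = b"sekrit"

def drm_transform(blob: bytes) -> bytes:
    # Toy XOR cipher: symmetric, so the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(blob, cycle(KEY)))

def play(protected: bytes) -> bytes:
    frame = drm_transform(protected)  # the player MUST decrypt to render...
    return frame                      # ...and anything rendered can be captured

protected = drm_transform(b"the movie")
print(play(protected))  # b'the movie' -- Alice has the plaintext, so Eve can too
```

Swap the XOR for AES and nothing changes: the decryption key and the plaintext both have to exist on Alice’s machine for playback to work at all.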

Hardware folks know it, but they must play ball or they will be locked out of the content. What good is a next-gen DVD player if it can’t play any of the next-gen DVDs? So you end up with a system that has two broken components: software that doesn’t work, and hardware that is intentionally slow, complex and expensive, which doesn’t work either.

Since plugging the analog hole is an engineering task on par with building a perpetuum mobile, hardware people will always struggle with implementation. If you are behind schedule, give the hardware folks a half-assed, incomplete spec to work from and then change it 3 or 4 times. Oh, and remember to revert to a previous spec at least once in the process to get them totally confused. Then you can blame the delays on them. If the client asks why the spec is so shitty, or why you change it so often, tell them details leaked onto the internet and you have to do this to keep implementation details secret. Sigh… I wish we all could play this game, but out in the real world developers are actually expected to deliver software that works, is on schedule and doesn’t mess up your system. Only DRM makers can churn out some piece of garbage that doesn’t really do anything beyond making your machine unstable, and still get paid.

But let’s get back to Bob and Alice again. There is a second part to this equation that few people talk about. Bob actually doesn’t send the message himself. He dictates the message to Eve, who then encrypts it and hand-delivers it to Alice. Confused? Think about it – I’m talking about the human element. How do you get a zero-day scene release?

Ok, there is more than one way – I’ll grant you that. But more often than not you get a zero-day by having a supplier close to the source. Usually there are thousands of people involved in a movie’s production, post-production, publishing and distribution. They all have internet access and most of them have probably been known to download stuff without paying for it. Anyone who touches the source can leak it, and tracing such a leak is extremely difficult because copying digital data usually leaves no evidence. The only way you can work is backwards – if you nab the uploader you may or may not be able to work your way back to the supplier.

This is what I mean by Eve encrypting and delivering the message to Alice. Most movies get leaked onto the interwebs long before they get the DRM treatment. So you are really building software to protect something that is already available out there.

Let’s summarize:

  1. you build cryptography software for a client that doesn’t understand cryptography
  2. you are working on a problem that is known to be unsolvable
  3. your client does not expect your software to actually work
  4. stability of end-user’s machine is not an issue
  5. compatibility with hardware/software on the end-user’s machine is not an issue
  6. ethics are not an issue – your client doesn’t care if you use a rootkit or a trojan
  7. support is mostly not an issue – at most you might just need to provide an un-installer for the rootkit
  8. if all else fails you can blame the hardware vendor for delays

All you are really expected to do is cripple the user experience to the point where they will just go and download an illegal copy. So you make a shitty piece of software cobbled together any which way, make it do some hard-core math to facilitate your half-assed encryption, then charge the gullible but unreasonably wealthy client an arm and a leg and move on to the next victim. Pure profit.

Naturally, I bet the DRM industry does have some honest, hard working people who take pride in their work. They will probably come here and yell at me for talking shit. I’m not knocking you guys – I admit, cryptography is a fascinating subject. I’m sure that the software you build uses very cool ideas, and is actually very effective. I’m really happy that you get to work on those hard and challenging problems – I really am. In fact, I will think about all the hard work you did next time I’m watching (or playing) a pirated copy of the movie (or game) that your software was supposed to protect. :mrgreen:

[tags]drm, digital rights management, drm software, copyright, copyfight[/tags]

]]>
http://www.terminally-incoherent.com/blog/2008/01/18/drm-software-industry-must-be-a-cash-cow/feed/ 12
Designing a Tetris Clone: Part 1 http://www.terminally-incoherent.com/blog/2007/10/20/designing-a-tetris-clone-part-1/ http://www.terminally-incoherent.com/blog/2007/10/20/designing-a-tetris-clone-part-1/#comments Sat, 20 Oct 2007 15:13:14 +0000 http://www.terminally-incoherent.com/blog/2007/10/20/designing-a-tetris-clone-part-1/ Continue reading ]]> Someone told me that Tetris was the “hello world” of game programming. I wholeheartedly disagree. Tetris may not be an awfully complex game, but it sure as hell is not trivial. The whole concept of a “hello world” is to show the simplest trivial example of functional code that actually does something. But if you think about it, Tetris is not all that simple. So I decided to actually implement the game to show that if you really sit down and try to put it together, it is quite a bit more than just a hello world.

Now, I could just sit down and hack a working prototype relatively quickly, but let’s do this the right way. Let’s design it, rather than hack it.

A good place to start is to put the spec of the game down on paper. So, let’s just list everything we know about the game:

  1. the playing field is a grid of the size 10×20, 10×24 or 10×16
  2. all tetris pieces are composed out of exactly 4 blocks
  3. the pieces come in 7 shapes: J, L, T, Z, S, I and O
  4. each piece has an orientation and color
  5. pieces start at the top in the middle of the grid, and slowly fall downwards
  6. the speed at which the pieces fall increases with each level
  7. player can move the pieces left and right or rotate them (change their orientation)
  8. when a piece touches the bottom of the grid or top of another piece it becomes locked down
  9. there is a slight delay before the lock-down that allows the player to slide the piece into place
  10. if a full row on the grid is filled it will clear and add to the players score
  11. the original Nintendo formula for computing score is m(n + 1) where n is level, and m takes the value of 40, 100, 300, 1200 depending on the number of lines that were cleared simultaneously (40 is one line, 1200 is 4 lines)
  12. when a line clears, all the blocks above move down
  13. requirements for advancing to the next level depend on the game, but most start with around 40 lines, and the number gets progressively smaller, becoming 1 level per line around level 20-30
  14. some games implement gravity that makes blocks that do not rest on other blocks slide down, causing chain reactions
  15. most games display the next piece next to the grid
  16. most games allow the player to instantly drop the piece, either via a hard drop or a soft drop
  17. some games implement a wall kick that allows for rotating pieces backed up against the wall

The 17 points above roughly describe the game and can be used as an initial specification. Remember, we are building a game that already exists in many different versions, so we want to create something that is instantly recognizable as Tetris, but we do not necessarily need to include all the fancy features that were added to it over the years.
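Point 11 is already concrete enough to pin down in code. A quick sketch of the Nintendo scoring rule (the names are mine):

```python
# m values from point 11: multiplier for clearing 1-4 lines simultaneously
LINE_MULTIPLIER = {1: 40, 2: 100, 3: 300, 4: 1200}

def line_score(lines_cleared: int, level: int) -> int:
    """Nintendo scoring formula: m * (n + 1), where n is the level."""
    return LINE_MULTIPLIER[lines_cleared] * (level + 1)

print(line_score(1, 0))  # 40 -- a single line on level 0
print(line_score(4, 8))  # 10800 -- a tetris on level 8
```

Nailing the spec down like this early is cheap insurance: the formula becomes a testable unit long before any graphics exist.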

As they taught you in your software engineering class, pay attention to the nouns – they are your potential classes. Right now I see several really good candidates: Block, Grid and Piece. In fact, we could possibly define classes for each specific piece, like LShapedPiece, JShapedPiece, etc. They would all be subclasses of Piece and would define the shape, color and orientation. Each of these classes would also implement rules on how the piece could be rotated. This way the Grid class would only need to interact with the Piece class.

I actually can see a nice factory pattern forming here. Grid will call a Piece factory, which will randomly generate one of the subclasses and then return it as a Piece. This may or may not be practical, but it seems like a good idea at the moment.
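A minimal sketch of that factory, using the class names suggested above (only two of the seven subclasses shown, and the colors are my guesses):

```python
import random

class Piece:
    shape = None          # subclasses override with their letter
    color = None
    def __init__(self):
        self.orientation = 0
    def rotate(self):
        # subclasses could restrict this (O never rotates, S/Z have 2 states)
        self.orientation = (self.orientation + 1) % 4

class LShapedPiece(Piece):
    shape, color = "L", "orange"

class JShapedPiece(Piece):
    shape, color = "J", "blue"

def piece_factory() -> Piece:
    """Grid only ever sees a Piece; the factory picks the concrete subclass."""
    return random.choice(Piece.__subclasses__())()
```

The nice property is that Grid stays ignorant of the concrete piece types – adding the remaining five shapes doesn’t touch the Grid code at all.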

Also, the spec is just a rough sketch of the game. If we want to implement it, we must answer several questions and make some important design choices. For example, how big do we want the grid? Do we want to allow the player to rotate pieces clockwise and counter-clockwise, or just in one direction (the latter choice means you only need a single rotate button rather than two)? Do we want to have hard drops or soft drops? Do we implement wall kicks or leave them out? We also need to pick several variables, such as the lock delay and the speed at which the pieces drop. We must decide if we want pieces to crawl (move one pixel at a time with a specific velocity) or skip (move a full block length down every n seconds).

We have a pretty good scoring formula, but just a very rough guideline on level advancement. That will need to change. Oh, and there is also that pesky matter of collision detection – somehow we will need to decide how we detect that a piece has hit a wall, or a locking element like the floor or another piece.
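If the grid ends up as a simple 2D array of occupied cells, the collision check can be surprisingly small. A sketch under that assumption (all names are mine, not part of the spec):

```python
def collides(grid, piece_cells, row_offset, col_offset):
    """True if the piece, placed at the given offset, would overlap a wall,
    the floor, or an already-locked block. grid is a list of rows of 0/1;
    piece_cells are (row, col) pairs relative to the piece's origin."""
    rows, cols = len(grid), len(grid[0])
    for r, c in piece_cells:
        gr, gc = r + row_offset, c + col_offset
        if gc < 0 or gc >= cols or gr >= rows:
            return True               # left/right wall or floor
        if gr >= 0 and grid[gr][gc]:
            return True               # locked block
    return False

# The game loop would call this before every move: if shifting the piece one
# row down collides, lock it in place instead of moving it.
```

The same predicate covers moves, rotations (rotate, then test the new cells) and drop locking, which keeps the Grid logic in one place.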

We also need to pick a platform and a language, but that is not all that important at this point. Tetris can be written in any language, and while some might be better suited for this task than others, it ultimately won’t matter as long as we can use some graphic display libs and don’t have to re-implement a whole 2D display framework.

I’m going to address all these questions in Part 2, and start drawing up some UML. I also want to look at actual game piece design – like sprites, and internal representations of a piece and a grid. I already have a certain design in my head, but any suggestions are welcome. I also have a very specific language and framework that I’d like to use, but I’m not really designing with that language in mind, so I’m not going to tell you yet. A good design should work in any OO language – and once we pick one, we can make the necessary adjustments if needed.

Still, I’d like to see what language you guys would use for this. Let me know in the comments. If you suggest a good one, I might even change my mind. :)

[tags]games, game design, gaming, tetris, tetris clone, design, software engineering, factory pattern[/tags]

]]>
http://www.terminally-incoherent.com/blog/2007/10/20/designing-a-tetris-clone-part-1/feed/ 3
The GTFO UI Design Philosophy http://www.terminally-incoherent.com/blog/2007/10/04/the-gtfo-ui-design-philosophy/ http://www.terminally-incoherent.com/blog/2007/10/04/the-gtfo-ui-design-philosophy/#comments Thu, 04 Oct 2007 15:27:15 +0000 http://www.terminally-incoherent.com/blog/2007/10/01/the-gtfo-ui-design-philosophy/ Continue reading ]]> I present to you what I call the GTFO UI Design Philosophy. There are many books written on designing good user interfaces, appropriate methodologies, approaches and techniques. People talk about different UI paradigms, styles, etc. To me, UI design roughly boils down to this: “get the fuck out of my way and let me do whatever I’m doing”. That’s it! Everything else just adds a polished feel to your design. Knowing when to GTFO and let the user work in peace, however, is what separates a good UI from an annoying one.

When designing a user interface, please keep these 10 things in mind:

  1. Making Me Wait is Annoying – every second you make the user wait for your app to launch is a second they are using to imagine strangling you with your own intestines. Seriously. When I start an application, I want to use it immediately. Not in 5 minutes, not in 30 seconds, but now. Make sure the UI pops up immediately, and that the controls are functional as soon as possible. Then if you need to load some plugins or libraries, put a throbber somewhere in the corner. As you are loading, let me do easy stuff like open the file dialog. Gray out the shit that is not ready yet, or something.
  2. Stealing Focus is Evil – don’t you just love it when you are typing something and all of a sudden your IM client or some other app pops into the foreground? While your app might potentially need to steal focus from its own window to display a dialog, it should never, ever attempt to steal focus from other applications. It’s annoying, and it interrupts my work flow.
  3. Modal Dialogs Suck – sometimes modal dialogs might be necessary, but they should be used very sparingly. There is absolutely no reason to make the common save, open and properties dialogs modal. Why? Because modal dialogs not only steal your focus, but also hold it hostage. And stealing focus is evil.
  4. Splash Screens Suck – I can’t tell you how many times I said to myself: “boy, I love how this application shows me this big fucking ugly splash for a full minute every single fucking time I start it up”. Seriously, just think about it. Do you like staring at splash screens? No? Then why THE FUCK do you put them in your software? If your splash screen is there because your application takes 5 minutes to load and you need to use a progress bar of some sort, then maybe consider #1 on this list. Start with a basic UI and update it dynamically as the shit is loading.
  5. Your UI Icons Suck – you can hire a world renowned artist to design icons and buttons for your UI so incredibly beautiful that a mere glimpse of them will make people ejaculate rainbows – someone out there will still think they are fucking ugly. Provide functionality to hide that shit – either via some minimalistic text-only UI or skinning. What the button does is more important than how it looks, and your power users will probably use the keyboard shortcuts anyway.
  6. No One Wants Fat, Ugly Toolbars – if your toolbar, status bar, and sidebar take up so much space that there is hardly any left for the actual work area, you fail at UI design. I’m looking at you, MS Ribbon, and assorted legion of ugly ass GTK apps. See the rule about icons – no one fucking wants your shitty UI elements in their face. If you absolutely must have big bulky toolbars, provide a mechanism to hide them, skin them or configure them. Otherwise it’s just a distraction and an annoyance.
  7. You Do Not Need an Icon in the System Tray – nor do you need to start with Windows. Unless you are an app that needs to be on all the time (like a firewall, or on-access AV) you have no business there. If I want you to always be on and start with Windows, I will configure it. I’m talking to you Quicktime, WinZip and OpenFuckinOffice and every single crappy piece of shit that comes preinstalled on Dell machines. GTFO! Oh, and dear AV and Firewall makers – if your application does not have a disable and exit button that will kill all your processes, you are no better than the fucking malware you are trying to protect me from.
  8. No, I Don’t Want to See More Tips on Startup – here is a tip: go fuck a duck. How is that for a tip? No one uses these things, they are annoying as shit and serve no real purpose. Note: if your tips are delivered by a talking paper clip, you need to kill yourself.
  9. Don’t Freeze – when your UI locks up and becomes unresponsive, most users assume that your app has crashed. In most cases it didn’t – it was simply too busy doing something else to register UI clicks. This is bad, as most people will just kill your process, leading to data corruption and various other fun side effects. Not only will they think you released a piece of shit that crashes all the time, but they will also blame you for losing the data. I’m telling you, that fucking “Not Responding” notification in Windows is a death sentence. Linux users on the other hand will just kill -9 your ass without even thinking about it. Use threads, or forked processes, and always display some sort of progress bar or throbber animation when busy. As long as shit is moving on the screen the lusers probably won’t even be tempted to click on the UI.
  10. Did I Mention that Splash Screens Suck? – seriously people, lay down on the Splash bullshit. Especially if the bitmap splash image is actually bigger than your core binary. Get a grip!

So there you have it. Follow these 10 simple tips, and your users will love you. Unless of course your app sucks anyway – but I can’t really help you there.

Btw, the profanity is there to emphasize the point. We all know that all these things suck for the users. I dare you to take any of these points and argue why doing the opposite of what I’m telling you would be better for the user. Sure, some of these things might be related to development costs (making a UI that doesn’t lock up on intensive IO might be harder than creating one that does) but don’t tell me that splash screens, startup tips and ugly ass toolbars that can’t be resized or hidden really save that much time and money.
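And honestly, the non-locking UI from tip #9 takes only a few extra lines. A minimal sketch, assuming a UI event loop that can poll a queue on a timer (the function names are mine):

```python
import threading
import queue

def slow_io(results: queue.Queue) -> None:
    # Pretend this is a huge file read or a network call.
    results.put("done")

def on_click() -> queue.Queue:
    """Handle a button click without freezing the UI: push the heavy work
    onto a worker thread and return to the event loop immediately."""
    results = queue.Queue()
    threading.Thread(target=slow_io, args=(results,), daemon=True).start()
    return results  # the UI polls this (showing a throbber) instead of blocking

q = on_click()           # returns instantly; the event loop keeps pumping
print(q.get(timeout=5))  # "done" once the worker finishes
```

Every mainstream toolkit has some flavor of this pattern (a timer callback that drains a queue), so "it's too expensive" is a weak excuse.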

Everyone knows these things. Everyone hates the apps that do them. And yet, every other piece of software on the market (proprietary and free alike) violates at least one, if not all, of the tenets of GTFO UI Design outlined above. Why is that?

[tags]ui, gui, ui design, gtfo, splash screen, toolbar, user interface, graphical user interface, splash, throbber, progress bar, taskbar[/tags]

]]>
http://www.terminally-incoherent.com/blog/2007/10/04/the-gtfo-ui-design-philosophy/feed/ 13
MySQL: Find Duplicate Entries in a Table http://www.terminally-incoherent.com/blog/2007/09/19/mysql-find-duplicate-entries-in-a-table/ http://www.terminally-incoherent.com/blog/2007/09/19/mysql-find-duplicate-entries-in-a-table/#comments Wed, 19 Sep 2007 17:17:28 +0000 http://www.terminally-incoherent.com/blog/2007/09/19/mysql-find-duplicate-entries-in-a-table/ Continue reading ]]> Here is a little background on this issue. The database used to run on an ancient copy of MySQL until very recently, when we upgraded it to 5.0. Not without some database migration headaches, but it worked. A common problem we had with this database was duplicate entries in printed reports. Why did these happen? Mostly because of user error combined with a lack of foreign key constraints.

The two MyISAM tables that were causing the issue were report and proforma. Both were used for tracking documents through the review process. Report was the actual field audit report that would be submitted, reviewed and sent to the clients, while proforma was the documentation of expenses that our employees were supposed to submit for each assignment. Both would store dates, comments, notes and other info regarding said documents. The proforma had a 1-1 relationship with report (each proforma was tied to a single report entry). But since the old MySQL version did not support foreign key constraints, this was not enforced (or rather, only enforced by the PHP front end).

However, users would continuously find ways to submit multiple proformas for the same report – for example by clicking the “submit” button 7 times. Another fun trick for re-submitting a proforma was to send it in as TBA (to be announced). Upon seeing the TBA, the clerical staff in the office would manually associate it with the appropriate report, usually without checking if another copy was already in the database. And when we locked that down, they would just continuously send bug reports about it, forcing us to sort this out at the DB level.

What happened when the non-existent FK constraint was violated? One of the main reporting sections of the site used a complex join across these two tables. If there were two or more proformas per report, that report would show up on the list multiple times. What do people do when they see duplicates on a list? They start deleting them. Problem is – these were not real duplicates, but the same entry repeated several times. Deleting one copy would hose all of them, causing yet more support requests for restoring the entry from the nightly DB dump.

Fun times. So after we switched to a DB engine that was actually developed in this century I immediately switched the tables to InnoDB and put a freaking foreign key constraint on that relationship. This way if someone finds a new loophole in the PHP code they will end up seeing a nice MySQL error instead of creating new duplicates.

But guess what – I just got more bug reports about duplicate entries. Apparently the FK constraint is not retroactive, and MySQL will gleefully allow duplicate keys to exist in your FK column when you apply the constraint. I figured it would start thrashing about and keep giving me errors if that was the case, but no – all it cared about was an index on the key column. So I was left with the task of tracking down all the duplicate entries in the foreign key column of my proforma table. How do I do this? This problem actually had me scratching my head for a bit, until I had a sudden epiphany:

SELECT 
	report_id, COUNT(*)
FROM
	proforma
GROUP BY
	report_id
HAVING
	COUNT(*) > 1;
I found around 17 separate instances of the foreign key (here report_id) being repeated anywhere from 2 to 8 times. All of them were for older entries that no longer show up on the first page of results, so naturally no one noticed. Still, it shows that the problem was more prevalent than we initially suspected.

What I’m really trying to say here is this: use database constraints to enforce table relationships. This is the only proper and effective way to do it. Trying to enforce constraints in application code is just asking for trouble, as you create multiple points of failure. Different parts of the code will update any given table at different times, and all of them must perform this check. Failing to implement proper checks in every piece of code that touches the table may lead to duplicate entries in FK columns and similar undesired side effects. A simple constraint on a column will stop this from happening much more effectively, with much less effort.
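Here is the idea in miniature, using SQLite as a stand-in for MySQL/InnoDB (the schema is my guess at the real one; a UNIQUE index on report_id enforces the 1-1 relationship, and the FK blocks orphans):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite opts in; InnoDB checks FKs by default
con.execute("CREATE TABLE report (id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE proforma (
    id INTEGER PRIMARY KEY,
    report_id INTEGER UNIQUE REFERENCES report(id))""")

con.execute("INSERT INTO report VALUES (1)")
con.execute("INSERT INTO proforma VALUES (1, 1)")  # fine: report 1 exists

for bad in ("INSERT INTO proforma VALUES (2, 1)",    # duplicate: breaks 1-1
            "INSERT INTO proforma VALUES (3, 99)"):  # orphan: no report 99
    try:
        con.execute(bad)
    except sqlite3.IntegrityError as e:
        print("rejected:", e)
```

No matter how many submit buttons the users mash, or which code path does the insert, the database refuses the bad row in one place.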

[tags]mysql, sql, database, database design, foreign key, key, constraint, php[/tags]

]]>
http://www.terminally-incoherent.com/blog/2007/09/19/mysql-find-duplicate-entries-in-a-table/feed/ 4