I was in the City the other day, listening to a somewhat interesting talk on computer security. For my readers who are not from the area, let me explain. When we NJ dwellers say “The City” (in capitals), we mean a certain nearby city of York. This of course means skyscrapers, smog, hellish traffic, crowds, noise and dirt. I love big cities, they remind me of home. They are the only places where you can see the stark juxtaposition of a sharply dressed businessman in an Armani suit stepping over a homeless bum sleeping on the sidewalk as he hails a cab. These places pulsate with life and purpose, and have this strange intensity. You can almost feel the weight of the accumulated human experience all around you. If the stone walls of the skyscrapers could talk, they would sing us a moving story about love, commitment, betrayal, hate, strife, happiness, sorrow – about hearts and dreams being broken or fulfilled every day on the busy streets. That said, I totally don’t mind living in the quiet and lazy suburbia.
The whole shindig was targeted more at the managerial types so it was sort of dumbed down out of necessity. You see, when you do a presentation for IT people or programmers you talk about technology. You throw it out there, say what it does, why it’s good, and then you dive in and show how it works, how it can be broken, and how to hack it into submission. That’s what excites us. When you do a presentation for the decision makers, you briefly describe the technology, then you talk about “business scenarios”, costs, benefits, risks and tell “industry stories” and then try to sell them “solutions”. An abridged transcript would be as follows: “blah blah blah, interesting stuff, money money money money, risk, money money, opportunity, money money, buy buy buy!”
Still, the gist of the talk was interesting and I was able to sneak in one or two technical questions at the end, so it was not a total loss. In fact, I found it worth sharing here.
There are pretty much two ways to secure your machines. On a small scale, you simply run a client antivirus and a software firewall on each desktop. On a large scale, you put trusted machines behind a big bad firewall, or perhaps build a tiered architecture with firewalls between each tier. Both methods have flaws. The large-scale method, ironically, doesn’t scale well, because large companies tend to have dynamic network architectures due to growth and mergers, and work is more and more often done from beyond the firewall due to the mobility of the workforce. So your firewall infrastructure ends up looking like Swiss cheese: full of holes, exceptions, and strange rules no one remembers creating.
The small scale approach is similarly vulnerable. Your security applications are running in the context of the operating system, so if the OS gets compromised by a new zero-day exploit that installs a rootkit, you are dead. If you can’t trust your OS, how can you ever be sure every little piece of malicious code was removed? How can you even attempt to remove that stuff if the malware is actively killing all the anti-virus threads it can find? In many cases the best thing to do when you get compromised is to reformat and start from scratch.
The new idea the talk tried to introduce was to run your security software in a virtual machine. This virtual machine would be a minimalistic, stripped-down OS, which would act as your internet gateway, firewall, IPS and anti-malware scanner. The idea is to divorce your security software from the host OS to make it less susceptible to attacks on that system. Instead of running a big static OS installation with many services, applications and points of attack, you are now exposing only a small, hardened, special-forces OS that provides no services to the outside network. It poses a much smaller target, and it is easier to aggressively patch and upgrade virtual machines than full-blown operating systems that perform mission critical tasks. Furthermore, a compromised virtual security layer can easily be switched off and “rolled back” to a “clean” state at any time. This is naturally not foolproof, but it does seem to offer a slightly higher degree of protection than the traditional approach.
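Just to make the roll-back part less hand-wavy, here is a minimal sketch of what it could look like with VirtualBox’s VBoxManage command line. VirtualBox is simply the hypervisor I happen to use; the VM name “security-gw” and the snapshot name “clean” are placeholders of my own, not anything shown at the talk.
[code]
#!/usr/bin/env python
# Rough sketch: revert a (possibly compromised) security VM to a known-good
# snapshot using VirtualBox's VBoxManage CLI. The VM name "security-gw" and
# the snapshot name "clean" are made-up placeholders.
import subprocess

VM = "security-gw"
SNAPSHOT = "clean"

def vbox(*args, check=True):
    """Run a VBoxManage subcommand, raising an error if it fails."""
    subprocess.run(["VBoxManage", *args], check=check)

# Pull the plug on the security VM (check=False because poweroff
# complains if the VM is already down).
vbox("controlvm", VM, "poweroff", check=False)

# Revert its disks and settings to the clean snapshot.
vbox("snapshot", VM, "restore", SNAPSHOT)

# Bring it back up headless so it can resume its gatekeeping duties.
vbox("startvm", VM, "--type", "headless")
[/code]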
The point they were really trying to sell to us was the impact this has on large-scale network architectures. They juxtaposed it against the more traditional data center philosophy of putting physical firewalls between different parts of your infrastructure (i.e. forward-facing web servers are kept on a separate network from application and database servers). Using virtualization is like giving each machine its own dedicated hardware firewall and IPS shielding it from everything else. The products they are selling are supposed to make it easier to organize machines into dynamic server pools, which can be reorganized on the fly using global policies, and the like.
There is a downside to this – running a security VM on each box is expensive in terms of performance. However, in this day and age of ubiquitous quad-core processors it may not be such a huge concern. If you dedicate a single core and say 512 MB of RAM to run the VM, you still have a 3-core powerhouse with 3GB of RAM on your hands. At least for now. I’m sure that the next version of Windows will probably need all 4 cores, and all your RAM, to actually draw windows on the screen, but that’s a whole different story.
This is extra-cool with modern systems that support hardware virtualization. Running a hardened OpenBSD install in a VM as a gateway with pf, clamav, and sendmail shouldn’t take much disk space or memory but would definitely stop most worms and mail viruses cold.
Heck, it may actually free up resources because you may not need to run an antivirus client locally (there’s still the problem of boot sector viruses on media brought from home or internet-downloaded viruses that make it past the blacklist, but the second problem could be solved by scrubbing HTML pages in transit and how often do you see boot sector viruses anymore?).
512MB and a core dedicated to the security vm?
O_o
You can run an iptables firewall VM that way (e.g. Coyote Linux) in only a few megabytes. I mean, sure, it’s an interesting idea, but does it really need to gobble that level of resources? Just “because it’s there” is how we get bloatware in the first place.
For example, I am now starting to test our software on Vista (now that SP1 is out). I’d tried it this time last year and it was far too buggy. A fresh vanilla install of MSDN Vista Ultimate 32-bit + Windows Updates (all security and recommended, one optional) + nVidia drivers + Service Pack 1 + another round of Windows Updates + defragmenting (to keep the backup image small)… and the damn thing was THIRTEEN gigabytes. The only non-OS, non-driver software I had installed was Firefox. THIRTEEN gigabytes. I only gave C: 20GB for ease of backup. More fool me, I guess.
After installing less than 1GB of our software, it’s now reporting over 15GB consumed, and that’s before I’ve even started up our software. Eh? Exactly how much space does Vista want? When will the horror end?
There better be some decent porn hidden in that bloated carcass of an OS, or there’ll be hell to pay.
also, why doesn’t the spellchecker on this box allow linux as a word?
dammit.
Don’t forget that office users in companies *don’t* and *won’t* have an x-core machine with y GB of RAM for at least 5 years to come. Companies tend to calculate in $$, and they get the cheapest machines that do the job for the average employee. *And* companies tend to decide on machines where they can be sure that there is support over the next bazillion years. There are monthly budgets and yearly budgets and tax legislation and allowance for depreciation, so it actually makes sense to repair a PC, not replace it. Of course, YMMV.
There are repairs, and you can debate whether it’s still the original PC after three repairs, but that’s how financial accountants think. Even my wife does so in her office — her office PCs are 7-year-old Duron 750s, and they still do the job, so why replace them? I had to replace 2 power supplies, 1 stick of RAM, 1 VGA card, even 1 mainboard, but they’re still the same PCs. We switched from ME to XP, but everything still works. I even have 2 mainboards for 10 euros from eBay just to be able to repair the PCs once more.
As Schneier says, “security is a trade-off” (just yesterday on his blog). I have a separate DSL modem connected to a Linux server, and everything runs through dedicated proxies, so I’m fairly sure about security on the clients. And as for browser attack vectors: there’s no other choice than to keep the OS up to date; no IDS or IPS will help you 100% there. It’s more about educating the users.
Currently VM technology is wonderful for data centers: it increases CPU utilization (estimates put average CPU usage in big DCs at 10-30%), saves energy (IBM had a press release claiming they could replace 3700 Unix servers with 70 partitioned zSeries machines and offer the same number and power of services), decreases cooling requirements and provides on-demand provisioning, and it’s great for developers who want to segregate the host from the dev environment.
But it’s not for average office users right now.
This reminds me of the Monty Python joke in “The Meaning of Life” about the machine that goes “ping”. There is truth in what the bookkeeper says, but no one understands it at the time (“Ah, I see you have the machine that goes ping. This is my favorite. You see, we lease it back from the company we sold it to, and that way it comes under the monthly current budget and not the capital account.”). IMHO the joke in the movie is about having a machine that no one understands or can use correctly.
Correction: the number is 3900 Unix servers being replaced by 30 zSeries machines with virtualized or partitioned Linux/390. Sorry, my memory is getting worse.
… and reducing 155 data centers to 7 worldwide also saves quite a bit of $$ in energy costs.
How can a virtual OS stop viruses and block attacks on a different OS running on the same system? I’m assuming that both OSes are running at the same time, but when a breach occurs, does it affect the entire system such that either operating system can stop it, or is it targeted at the main one?
[quote post=”2385″]512MB and a core dedicated to the security vm?[/quote]
That’s 512 MB dedicated solely to the VM – but yeah, let’s say 1-2GB dedicated to the VM to be more realistic. I have a 2.4 GHz dual-core CPU in this very laptop and 2GB of RAM, and I can quite comfortably run Windows 2k in VirtualBox on top of my Kubuntu install.
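For the curious, capping the VM at those numbers is just a couple of VBoxManage settings. A rough sketch (again, “security-gw” is a made-up VM name, and modifyvm only works while the VM is powered off):
[code]
#!/usr/bin/env python
# Rough sketch: cap a VirtualBox VM at one CPU core and 512 MB of RAM so the
# rest of the machine stays with the host OS. "security-gw" is a placeholder.
import subprocess

VM = "security-gw"

settings = [
    ("--cpus", "1"),      # a single core for the security VM
    ("--memory", "512"),  # RAM in megabytes
]

for flag, value in settings:
    # modifyvm changes the VM's stored configuration; it must be powered off.
    subprocess.run(["VBoxManage", "modifyvm", VM, flag, value], check=True)
[/code]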
Vista is so bloated it scares me. And there is really no justification for it. To me it runs like XP’s slower cousin with minor UI tweaks. When I moved from 2k to XP I could at least “feel” this was a new OS. :P
[quote post=”2385″]Don’t forget that office users in companies *don’t* and *won’t* have an x-core machine with y GB of RAM for at least 5 years to come. [/quote]
True. This technology, I think, is targeted more at big data centers. They were really pushing the whole dynamic network architecture idea, which doesn’t really matter that much in a regular office environment. And yeah, it is more expensive, but they claim you get much better security coverage out of it.
[quote post=”2385″]How can a virtual os stop viruses and block attacks on a different os running on a system?[/quote]
For one, it acts as a proxy between you and the internet, so you can have packet scrubbing and intrusion prevention going on in there. An attack from the outside will most likely target the security VM (the guest), since that is what it sees on the network.
Other than that, I’m not sure. As I mentioned, they were a little light on details, and I got referred to their sales people for further questions. :P