Using BTSync Behind a Corporate Firewall (March 17, 2015)

BitTorrent Sync is pretty neat. I have been using it ever since Scott recommended it in the quintessential backup thread of 2013. It even made it onto my big list of essential tools. It provides a nice alternative to cloud solutions such as Dropbox by letting you sync devices directly, without storing your data on a centralized server owned by a third party.

One of the major issues I had with it was using it behind corporate firewalls. When you are on a network that only allows outbound communication on ports 80 and 443, BTSync is completely useless. Unlike Dropbox or Google Drive, which have no trouble synchronizing folders in such an environment, the BTSync client simply does not work at all.

And yes, before you say anything, there are reasons to block outbound traffic on port 22. Firstly, if on average the number of users who need to ssh out of a location approaches zero, then leaving that port open simply increases the attack surface for no reason. Secondly, even if users do need to ssh out, chances are they will be communicating with known servers. Why have a wide open port that can be used and abused, when you can control connections on an IP and MAC address basis, require an audit trail, and demand change-of-permission request documentation when devs ask for more access?

The only outbound ports that are typically wide open are HTTP and HTTPS. Your local BOFHs can't readily lock those down as tight as they would want to unless they set up a proxy server. Fortunately, proxies break a lot of modern, dynamic, internet-based things, so chances are you might not have one. And if you do not, then you can funnel your BTSync traffic through an SSH tunnel on an HTTP/HTTPS port.

To get this working you will need a few things:

  • A functional shell with ssh on your work machine
  • An internet-accessible remote machine running an SSH daemon
  • A recent BTSync client (obviously)

If outbound communication on port 22 is open at your location, any server to which you have shell access will do. If you can only get out on ports 80 and 443, you will need to configure said server to run the SSH daemon on one of those ports. This unfortunately requires root access.

You set this up by editing /etc/ssh/sshd_config. Search for the word “Port” and simply add another entry below, like this:

# What ports, IPs and protocols we listen for
Port 22
Port 443
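
Before restarting, it is worth making sure the edited config still parses, since locking yourself out of a remote box over a typo is no fun. OpenSSH has a test mode for this:

# validate sshd_config; prints nothing and exits 0 if the file is fine
# (use the full /usr/sbin/sshd path if sshd is not on your PATH)
sudo sshd -t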

Then restart ssh server:

sudo service ssh restart

Make sure you can ssh into it from behind the firewall. If your port 22 is closed, you can specify the alternate port on the command line like this:

ssh -p 443 you@your.host.net
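
If you would rather not remember the -p flag, a host alias in ~/.ssh/config does the same job. A sketch, with placeholder names:

Host relay
    HostName your.host.net
    User you
    Port 443

With that in place, a plain ssh relay connects over port 443.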

If that works, you will now be able to create an SSH tunnel that will act as a SOCKS proxy. On the machine where you want to run the BTSync client, do the following:

ssh -D 9988 you@your.host.net -N -p 443

This will create a SOCKS proxy tunnel running on the local machine on port 9988. You don’t have to use that port number. Feel free to use any other port, as long as it is not taken by anything else. I recommend making a script with this command and saving it somewhere in your path, because you will have to run it whenever you want to enable syncing.
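
Something along these lines, saved as, say, btsync-tunnel (the file name is arbitrary, and the host is a placeholder):

#!/bin/bash
# open a SOCKS proxy on localhost:9988 through the remote relay
# -D: dynamic (SOCKS) port forwarding, -N: do not run a remote command
ssh -D 9988 -N -p 443 you@your.host.net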

Finally, once you have the tunnel running, open the BTSync client, go to Preferences and open the Advanced tab. Check the "Use proxy server" box, then type in the localhost IP (127.0.0.1) and the port number you picked (in my case 9988). Use the default SOCKS4 proxy type:

BtSync Proxy Setup

Save the settings, then pause and restart syncing to make them take effect. Once you do this, you should see your folders syncing up as they should. Of course the sync will stop when the tunnel is closed, but it is better than nothing.

Installing Arch Linux on the PogoPlug (November 3, 2014)

Back in 2012 I wrote about how I set up a $30 Linux server by installing Debian Squeeze on a PogoPlug. I had been using the device for close to two years, but it died. I grabbed an identical replacement a few days ago, but for some reason I was having trouble getting Debian working again, despite using an identical device, a similar thumb drive and following the same procedure. The truth is that Debian is not the best OS to run on this hardware. Pretty much everyone's go-to system for these devices is Arch Linux, which has excellent ARM support.

I've been itching to try Arch for a while now, but I never really had an opportunity, so I figured I might as well use it here. It worked amazingly well, so I figured it would be worthwhile to document the procedure for future reference, especially since it is slightly different from the Debian procedure. I used this guide, but with some significant alterations (see below).

Logging into the PogoPlug

First, you need to figure out the IP of your plug. The best way to do this is to log into your router and match the device by name or MAC address. Once you know the IP address, you can ssh into it using root as the username and ceadmin as the password.

Preparing the Thumb Drive

The default OS on the PogoPlug is locked down pretty tight. Pretty much the only place with write access on the device is /tmp, so you won't be able to install to the internal drive (or rather, it is fairly impractical to do so). Instead you want to set up Arch on a thumb drive.

First, you will need to figure out which device the drive is recognized as. I'm fairly sure the top-most USB port on the back of the device always registers as /dev/sda, but you can easily check by plugging the drive in and then running:

dmesg | tail

The last few lines should reveal which device the system thinks was plugged in. I'll assume it was /dev/sda. The first thing you want to do is repartition it using fdisk:

/sbin/fdisk /dev/sda

Create two new partitions:

  • Press o to blow away all existing partitions on the drive.
  • Press n to create a partition, p to set it as “primary” and 1 to designate it as the first
  • Hit Enter to accept the default starting point
  • Specify the size using the format +[size]M, where [size] is a value in MB. For example, I used +1536M, designating the majority of the space on my 2GB drive to the primary partition and leaving 512MB for swap. If you have a 4GB drive, use +3584M, and so on.
  • To set up the second partition hit n, p, 2
  • Hit Enter to accept the default starting point
  • Hit Enter once again to use all the remaining space
  • Hit t, then 2 to change the partition type of partition 2, and enter 82 (Linux Swap)
  • Hit a, then 1 to make the first partition bootable
  • Hit w to write the changes to the disk

When you’re done the p command should return something like:

/dev/sda1   *           1         911     3501853  83 Linux
/dev/sda2             912        1018      411308  82 Linux swap

Since Arch uses the ext3 file system, we will want to format the primary partition /dev/sda1 as such. Unfortunately the default OS on the PogoPlug does not ship with ext3 support, so we will need to download the mke2fs tool from the Arch website and then use it to format the partition:

cd /tmp
wget http://archlinuxarm.org/os/pogoplug/mke2fs
chmod +x mke2fs
# the -j flag creates an ext3 (journaled) filesystem
./mke2fs -j /dev/sda1
mkdir alarm
mount /dev/sda1 alarm

Installing Arch

Now we are ready to download the Kirkwood Arch binaries. The latest builds are close to 200MB in size, which is too big to fit on the PogoPlug system partition. I recommend downloading the archive to the newly formatted drive instead:

cd alarm
wget http://archlinuxarm.org/os/ArchLinuxARM-kirkwood-latest.tar.gz

The official PogoPlug write-up on the Arch website tells you to use bsdtar to extract this archive. This may or may not work for you. I had major issues unpacking that way, due to a locale mismatch and the UTF-8 encoding used for file paths within the compressed bundle. Extracting the file the old-fashioned way worked just fine, which is what I recommend you do:

tar -xzvf ArchLinuxARM-kirkwood-latest.tar.gz
sync
cd ..
umount alarm

Finally, download the U-Boot installer which will flash the ROM memory of the PogoPlug and force it to boot off the USB drive. Note that this step can brick the device (though I’ve done it a dozen times by now and never had any issues):

wget http://archlinuxarm.org/os/armv5te/boot/pogo_e02/pogo_e02.sh
chmod +x pogo_e02.sh
./pogo_e02.sh

Once this is done, reboot manually:

/sbin/reboot

If everything worked correctly the device should now boot into Arch. When the device reboots, log in with username root and password root.

Configuring Arch

The first thing you will probably want to do is update the system. You use the default Arch package manager, pacman, for that:

pacman -Syu
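
The stock install also does not use the swap partition we created earlier. Assuming it is still /dev/sda2, a quick sketch to format and enable it:

# initialize and activate the swap partition
mkswap /dev/sda2
swapon /dev/sda2
# make it permanent across reboots
echo "/dev/sda2 none swap defaults 0 0" >> /etc/fstab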

Next, you probably want to change the root password and add a regular user for yourself (remember to add yourself to the wheel group):

passwd
useradd -m -g users -G wheel -s /bin/bash luke
passwd luke

The Kirkwood install is very bare bones and does not ship with sudo, so you will probably want to install it:

pacman -S sudo

Configure it with visudo and append the following to the end of the file:

%wheel      ALL=(ALL) ALL

This will give your regular user, and any future members of the wheel group, access to the sudo command. At this point it may be a good idea to log out and log back in to make sure the user account you just created works, and that you can use su and sudo to elevate your privileges. If everything works, you may want to lock the root password (disabling direct root logins) like this:

passwd -l root

You will probably also want to change the device's hostname. On Arch this is done via the hostnamectl command:

hostnamectl set-hostname mypogoplug

If you're on a Windows network and you want to be able to use the hostname instead of the IP address when you ssh in, you will need to install Samba and give the device a NetBIOS name:

pacman -S samba
cp /etc/samba/smb.conf.default /etc/samba/smb.conf

Modify the smb.conf file to include:

workgroup = MYWORKGROUP
netbios name = mypogoplug

Now start samba and set it to start on boot:

systemctl start samba
systemctl enable samba

You should now be able to ssh into your plug from Windows machines using mypogoplug rather than the IP address. If you have Apple machines on the network and you want them to be able to reach the plug as mypogoplug.local, you will need to install two additional packages, avahi and nss-mdns:

pacman -S avahi nss-mdns

Now open the /etc/nsswitch.conf file and change the following line:

hosts: files dns myhostname

into:

hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname

Finally, start the avahi-daemon and set it to run on boot:

systemctl start avahi-daemon
systemctl enable avahi-daemon

At this point your device should be discoverable on the network and more or less ready to go for whatever purpose you have in mind.

Super Lazy Provisioning with Vagrant (November 20, 2013)

It has been almost a year since I posted anything even remotely related to my PHP series. I guess I should have suspected that SITS would be the death knell for the project. It seemed like a good, small-sized project, but I didn't anticipate the fact that it was kinda boring and unexciting. Not to mention the fact that I was over-complicating it in the name of doing things the right way™. Maybe one day I will actually manage to finish it… Maybe. No promises though.

It just so happens that this series was the first time I talked about Vagrant – the super sleek virtual machine manager for developers. Since then I've been working on and off with various Vagrant boxes and experimenting with efficient, hassle-free ways of setting them up. So I figured I might as well make a post about it and share a thing or two I learned.

Old Vagrant Logo

Btw, can you believe this used to be the official Vagrant logo?

The simplest way to get a vagrant box up and running is to follow the official manual and simply do something like:

vagrant init my_dev_server http://files.vagrantup.com/precise32.box
vagrant up

This gives you a naked, bare bones Ubuntu installation that you can ssh into and customize to your liking. By default it ships without things like Apache or PHP and that’s actually a good thing, because you can make it into whatever you want. Why would you need PHP on a RoR box for example?

The downside of this is that every time you want to start a new project with a squeaky clean environment you have to spend time setting it up. One way of avoiding this overhead is to set the environment up once, and then package it as a box file using the vagrant package command. This will produce a portable virtual machine that you can share with other people or upload somewhere. Next time you need to set up the same type of environment you just do:

vagrant init new_dev_server http://path/to/my_dev_server.box

This works extremely well and makes sharing environments with co-workers or team members easy. In fact, it lets you hand someone an exact copy of your development environment at any time without much hassle.
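
The packaging step itself is a one-liner; the output file name is whatever you want:

# export the currently running VM as a reusable box file
vagrant package --output my_dev_server.box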

But box files tend to be rather big. The more complex the setup, the bigger the file tends to be. You have to host them somewhere, or figure out an efficient way of sharing them (like Dropbox for example) because sure as hell you won’t be emailing them to anyone.

Not to mention that forking off copies of your working environment and making them "official" dev boxes for new team members is probably not the best idea. Chances are that your personal Vagrant box will diverge from the stock copy and get customized. You will likely import your .bashrc, your aliases, your text editor config files, etc. You probably don't want to distribute all that, so chances are you keep some "official" dev box pristine, installing only the bare bones dev dependencies.

What is the difference between your "deployment-ready" box and the stock one provided by the Vagrant team? In most cases the exact delta between the two is one 15-minute session of furiously typed apt-get commands. That's honestly about all it takes to set up a workable LAMP box.

So, here is a crazy idea: why not start with a bare bones stock box, and then just run a script containing all those furiously typed commands? Congratulations my friend, you almost invented provisioning. Vagrant actually has full "balls to the walls" support for all kinds of hard core provisioning tools such as Puppet, Chef and Ansible. All of these tools are great and extremely useful, and if you are a sysadmin you might already be familiar with some or all of them. And if you are not, then learning them will definitely be beneficial to you.

That said, many of us are developers who are merely trying to set up a standardized development environment for a team of three to five people. Learning how to create a Puppet Manifest, Chef Cookbook or an Ansible Playbook might be overkill. Fortunately the latest versions of Vagrant have something for us too: shell provisioning.

The idea behind shell provisioning is exactly what we came up with a few paragraphs above: use a stock box, then run a script to set it up. It is simple, easy to configure and hard to screw up. How do you do it?

Well, let’s write the simplest Vagrantfile we can think of:

Vagrant.configure("2") do |config|
  config.vm.box = "lamp"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.synced_folder "./www", "/var/www"

  config.vm.provision "shell", path: "./setup.sh"
end

The first five lines are standard. Line 3 tells Vagrant where to find the base box, line 4 sets up port forwarding and line 5 shares the /var/www directory on the guest OS with the host. Line 7 is the important part: it simply points to a shell script, the path being relative to your Vagrantfile.

What do you put in the script? Anything you want really. For example if you just want to set up Apache and PHP then it can be dead simple:

#!/bin/bash
sudo apt-get update
sudo apt-get -y -q install apache2 php5

You put both the Vagrantfile and setup.sh in the same directory and run vagrant up. The script will run immediately after the machine boots for the first time. It is that simple.
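
The resulting project layout is minimal (www being the synced folder from the Vagrantfile):

my_project/
  Vagrantfile
  setup.sh
  www/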

If you want to install a full LAMP stack you need to do something a tiny bit more complicated. Why? Because in their infinite wisdom, the Ubuntu package creators decided that the mysql-server package needs to use a dialog to ask for the root password. As you can imagine, this does not really work during an unattended initial provisioning boot. So you need to be sneaky and set up an unattended mysql install like this:

#!/bin/bash

# Set up unattended install of mysql-server
export DEBIAN_FRONTEND=noninteractive
sudo debconf-set-selections <<< 'mysql-server-5.1 mysql-server/root_password password toor'
sudo debconf-set-selections <<< 'mysql-server-5.1 mysql-server/root_password_again password toor'
sudo apt-get -y update 
sudo apt-get -y -q install apache2 php5 php5-mysql mysql-server mysql-client

What else can you do? Anything you want, really. Anything you can think of and put down in the form of a script should work, as long as it does not need any real-time user input. For example, if you wanted to enable the Apache mod_rewrite module and install the Composer package manager, you could add this to the bottom of our file:

# Enable mod_rewrite
sudo a2enmod rewrite 
sudo service apache2 restart

# install additional non-essential stuff
sudo apt-get -y -q install curl vim

# Install Composer
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

One important note is that Vagrant will check for the existence of the provisioning file every time it starts. So if you decide you don't want to keep your provisioning script around on a working instance, you have to remember to remove it from the Vagrantfile. Or you can make the script clean up after itself automatically, which is exactly what I did in my personal LAMP setup.
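
Another option, if you would rather keep the script in place, is to make it a no-op after the first run. A guard sketch along these lines (the flag file path is arbitrary):

#!/bin/bash
# bail out if this box has already been provisioned
if [ -f /var/vagrant_provisioned ]; then
    exit 0
fi

# ... the usual apt-get setup goes here ...

# mark the box as provisioned
sudo touch /var/vagrant_provisioned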

You should note that the linked repository is almost empty. There isn't much there other than the Vagrantfile and setup.sh, which is basically all you need. The idea is to distribute these two tiny text files instead of a gigantic packaged box, because the end result is the same: a VM that is set up and ready to go.

Anyone who wants to create a clean environment with this setup simply has to clone it, and run it:

git clone https://github.com/maciakl/lamp.git
cd lamp
vagrant up

Feel free to use it for your own nefarious purposes. As usual, comments and bug reports are more than welcome.

Unity is not Great (May 8, 2013)

About two weeks ago my work laptop died. The motherboard just bricked itself and there was no rescuing it. As my old machine was decrepit anyway, and I was going to replace it with something with an entirely different hard drive profile, I opted to do a clean install of Ubuntu 12.04. Previously I had been running Kubuntu 10.04, and it was high time to move on.

Why not 13.04, you ask? Well, I'm in my 30's now, which means I'm living on an accelerated time-space curve. Time simply flows faster than it did when I was in my teens or twenties. For example, I recently referred to an event that happened back in 2010 as "the other day" because that's honestly how it felt to me. From that you can probably extrapolate that I am not a huge fan of upgrading my OS every other day (which is roughly how 6 months feels to me). I can hardly give half of a fuck what quirky adjective-animal combination is the most up-to-date one. I just want something semi-stable that is regularly patched and has a decent sized package library. I've run non-LTS versions of the OS before, and I was always unpleasantly surprised when Canonical end-of-lifed them and "vanished" the repositories, leaving me stranded every 6 months or so. To put it plainly, anything that is not an LTS is not worth my time.

12.04 ships with Unity – Canonical's new window manager and desktop environment. There was a lot of discussion about it when it was first released, and the Ubuntu community seems torn between those who like it and those who hate it. Previously I didn't really have an opinion in the discussion, because I hadn't used it for anything substantial. I of course checked it out back when it was new, and it looked kinda sleek and a tad showy for my tastes, but I didn't outright hate it.

With my switch to 12.04 I decided to give it a fair shake and see how it performs in regular day-to-day usage. Unfortunately, it turns out that all the haters were right: Unity is not that great. It is style over substance. It tries so hard to be OSX-like that it makes things that used to be simple needlessly difficult.

Here are my biggest complaints about Unity in order of severity:

  1. Poor performance
  2. No functional pager
  3. Terrible application awareness

Let me tackle each of these in turn.

Performance

The machine I replaced my old laptop with wasn't top of the line. It was one of the rank and file laptops we had in stock, and I might be able to swap it for a beefier dev machine at some point soon. Still, the hardware was nothing to scoff at – a respectable Intel i5 CPU and 4GB of RAM. Nothing to write home about, but we usually run Win 7 with full-blown Aero effects on this hardware, and it handles it without breaking a sweat. Unity was giving it a run for its money: the fan was whining at full speed most of the time, and launching applications would actually freeze up the desktop for a few seconds. Switching desktops was literally painful.

You could argue that it’s not Unity but Ubuntu itself being an unoptimized resource hog. I was concerned by this too, but then I decided to do an experiment and ran the following command:

sudo aptitude install gnome-panel

I then logged out, logged back in, and all my performance issues went away. Applications were now launching normally, and switching desktops went from a 1-second lag followed by a jerky animation to an instantaneous flick.

It's probably also worth mentioning that after trying Gnome Panel, I went back and booted up in Unity 2D mode to see how it stacks up. It was a big improvement on the default setup and the machine was actually usable. So if you do want to give Unity a whirl, I highly recommend the 2D mode unless you have a top of the line rig. Of course, half of Unity's charm is how pretty it looks, so you will be getting a diluted experience. Along with my other issues, I decided it just wasn't worth it.

Lack of Pager

Both KDE and Gnome have always had excellent pagers. You know – those little widgets that sit in your task bar and let you switch between virtual desktops. I always loved the fact that I could just glance at the pager in the corner of my screen and see the rough layout of my windows on each desktop. Not only that, but I could click on any arbitrary desktop to switch to it, or grab any of said windows and drag it to a different desktop, letting me organize my shit without any hassle. In Unity that functionality is replaced with an OSX-inspired "workspace switcher" which does a pan-and-zoom-out kind of animation every time you activate it. Its icon in the dock is static and doesn't give you that at-a-glance preview, which seems like a downgrade.

Gnome Panel

Note the pager in bottom right which shows you what windows are open on which desktop/display and allows you to drag and drop them.

I liked seeing window outlines in my KDE/Gnome pager. I liked being able to drag windows between desktops via the pager without actually switching to those desktops. The pager was always one of the best features of a Linux desktop. I always miss it whenever I'm working on Windows or OSX. In the past I have tried third party solutions that would add virtual desktops to Windows, and pager-like functionality to OSX, but none of them ever worked as well as the native KDE/Gnome task-bar widgets.

One could argue that removing the pager is not unorthodox because it mimics OSX behavior that users might be more familiar with. The fact that it was aped from Apple, however, doesn't make it good. Personally I am not a huge fan of OSX Spaces. I'm glad the functionality exists, but I often find myself wishing it worked more like traditional Gnome/KDE virtual desktops. It is admirable that Canonical is trying to learn from the best, but I think in this case they got it wrong. They took something that was working well and replaced it with something similar but less functional.

Application Awareness

Unity is also missing a task bar. I don’t know about you but I like task bars. I’m very fond of the traditional, one entry per window task management. One of the first things I do when I install Windows is to disable collapsing in the task bar. I like to be able to see how many windows I have open, and be able to switch between them easily with a mouse. Collapsing applications into a single icon on the task bar hides vital information that I need to work efficiently.

OSX doesn't have a traditional task bar, but it provides some alternatives. When you minimize windows in OSX, you essentially iconify them into custom dock entries. So as long as you Command-M your windows instead of letting them go out of focus, you have yourself a functional task bar with one entry per minimized window.

Unity's implementation combines the worst features of both worlds and ends up with a Launcher which is barely workable. Instead of showing you what windows are open on the current desktop, Unity adds dots to the left of the app icons. One dot per open window on the current desktop… but only up to three dots in total, for some reason. Instead of collapsing minimized windows into their own entries, it simply hides them. So the only way to find out you have minimized windows on the current desktop is to count the dots, then count the open windows, and subtract.

Unity Desktop

Pop Quiz: How many instances of Terminal are running here? The answer is five.

You can of course trigger an Apple Exposé-like window splay by clicking on said dots, but that shows you a preview of all the windows without differentiating which were open, which were in focus and which were minimized. It is a very haphazard, unfocused take on window management. With a traditional task bar I can glance at the bottom of my screen and instantly know how many windowed applications I'm running and on which desktops I have open windows. With Unity I always felt like I was flying blind, never having full awareness of my work environment.

Conclusion

In my honest opinion, Unity is mostly broken by design. I don't blame Canonical for trying to design a desktop environment that is easy to use and intuitive. I don't blame them for abstracting and hiding away a lot of configuration details in favor of a streamlined and unified look and feel. I don't mind that they came up with an opinionated desktop environment that makes bold design choices. This is actually a good thing. The Linux desktop needs this sort of focus on usability and user friendly environments. I'm glad that Unity exists, because it allows us to have discussions about usability and user-centric design on the Linux desktop – something that used to be an almost foreign concept a few years ago.

That said, when you come up with an opinionated framework that makes bold decisions, you risk losing established power users. I am unfortunately one of them. I've been an Ubuntu user for many years now, but Unity is just not for me. I like traditional task bars and pagers and I want them to be part of my desktop experience. If I didn't like them I would probably be using a tiling window manager like real men are supposed to (or so I'm told).

That said, I can see how Unity could provide a better user experience for novice users who have not yet developed habits such as juggling many virtual desktops and displays. If you are the type of person who usually runs one or two applications at a time, Unity 2D might actually be a viable option. The large icons on the launcher and the search-based application finder make it very easy for a Linux novice to find the program they need for a particular task.

It’s a pity that Unity offers next to nothing to us power users.

Explosive Log Failure (April 8, 2013)

Terminally Incoherent went down this Friday. And when I say down, I mean all the way down – terminally shut, as it were. Fortunately, no one noticed. I got no angry emails. Hell, I didn't even get a friendly "hey, your site just went down"… On second thought, this is actually kinda sad. One of you should have noticed, damn it!

At first I thought it was a traffic spike, so I just kept rebooting the server. Initially this would temporarily solve the problem: the blog would come back, only to crash and burn a few minutes later. Eventually rebooting stopped working. The intriguing thing was that Apache seemed to keep on chugging. When you visited the front page it would display the familiar WordPress error: "Could not establish database connection". This usually does not happen when I get DDOS'ed by a Reddit or Hacker News link.

Whenever I get a huge traffic spike, Apache is usually the first thing to go. It's actually a condition that is rather easy to spot: as soon as you ssh to the server you get a scrolling list of error notifications about processes being killed because the system is out of memory. This time, however, no such thing was happening. I ran top and was surprised how few processes were actually running. I'm used to seeing at least a dozen lines for Apache sub-processes, but there were only one or two present.

I did some more poking around and ascertained that the culprit was definitely MySQL. I tried manually restarting the server but it wouldn't stay up. I tried running it in non-daemon mode with the --debug parameter, but it would silently exit right away without displaying any error messages. The MySQL error log was empty, giving me nothing to go on.

In an act of desperation, I decided to reinstall MySQL. I figured that maybe a recent update had screwed something up (especially after I rebooted the machine so many times). Typically aptitude does not delete the contents of /var/lib/mysql (ie. where your physical database files live), but I was not going to take any chances. So I attempted to make a backup copy of all the files in my home directory…

Unfortunately (or maybe fortunately, considering the circumstances) this failed. Initially I figured it was a permission thing (a wonky sudoers file, maybe), so I tried again as root, with the same result. The machine simply refused to copy files. I think it might have mentioned something about being out of storage space, but I initially ignored that. I mean, how could that machine be out of space, seeing how it only hosts this blog, which is not very large at all? Last time I checked there was only about 2GB of used space on the root partition. This included the OS, the software and the site itself, including all the auxiliary files.

After a few minutes of muddling I ran df -h to make sure I was not out of space. I was wrong. The Use% value was at 100%. Somehow I had managed to fill my entire 20GB disk to the brim. My first thought? "HACKERS!"

I'm actually quite embarrassed by this, and I blame Hollywood for infecting me with this memetic garbage. I guess it is easier to externalize personal failures and throw blame at imaginary boogeymen than to take responsibility. So I slapped myself and decided to act like an adult. If your first reaction to a computer issue is "HACKERS" then you are either a child, my grandmother, a schizophrenic suffering from delusions of grandeur, or a pointy-haired middle-management individual who failed upwards into a position of power. Chances were that whatever happened was my own fuck-up and not the work of some elusive and mysterious hax0r.

How do we track down where the disk space is going? It’s actually not that difficult, especially considering I wrote an extensive article on this very subject not so long ago. To make a long story short, the magic command is:

du -sBG /* | sort -nr | head

This gave me a list of the ten biggest top-level folders in /, ordered by the size of their contents. Guess what held the #1 spot on that list? It was the /srv folder where my site lives. Or, as I found out after drilling down a bit, it was the /srv/www/terminally-incoherent.com/logs directory. There were only two files in that folder: error.log, which was 47MB, and access.log, which was over 17GB.

How did this happen? Why were these logs not rotated? Well, because I fucked up. When I moved the site to a new host back in June of 2011, I decided to put the logs in this directory rather than in the default /var/log/apache2. I was setting the site up using the Apache Virtual Hosts feature, which lets me run more than one site from this server, and if I ever decided to set up another domain I wanted each site to have its own logs rather than sharing one.

What I forgot to do back then was set up a logrotate rule for that directory. By default, Apache's logs are rotated on a weekly basis, keeping approximately a year's worth of archives for reference. But when you set up custom Virtual Hosts and specify new log locations for them, that rule does not apply.

To set up log rotation for my site I simply copied over and tweaked the default Apache rules, creating a new file in /etc/logrotate.d with the following contents:

/srv/www/terminally-incoherent.com/logs/*.log {
        weekly
        missingok
        rotate 12
        compress
        delaycompress
        notifempty
        create 640 root adm
        sharedscripts
        postrotate
                if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
                        /etc/init.d/apache2 reload > /dev/null
                fi
        endscript
}
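
If you set this up yourself, it is worth exercising the new rule right away instead of waiting a week for it to fire. The file name below is whatever you saved the rule as:

# dry run: show what logrotate would do, without touching anything
sudo logrotate -d /etc/logrotate.d/terminally-incoherent

# force an immediate rotation to confirm it works end to end
sudo logrotate -f /etc/logrotate.d/terminally-incoherent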

I figured that 12 weeks ought to be enough backlog for now, considering that I haven’t really looked at the access.log file at all since 2011.

To be honest, I really did not expect this to ever be a problem. These log files are just plain text, and they only grow when someone views the site. Granted, I probably get multiple log entries per hit because of all the little bits and pieces, images and scripts that have to load with each page. Still, 17GB in a little under two years is quite a respectable amount of traffic.

Thanks for reading my ramblings all these years guys!

Informative MOTD (March 20, 2013)

Here is something I usually like to do when I set up new servers: create an informative Message of the Day that will remind me what the server is all about, where things live on it, and whether I set up any special aliases and/or shortcuts on that box. For example, when I log into my PogoPlug I'm greeted with something like this:

MOTD on the PogoPlug

Another server I maintain has a slightly more complex MOTD, which looks something like this:

Another MOTD

Note that I'm censoring these pictures mostly to avoid random h4x0ring attempts. Obscurity ain't security, but posting IP addresses along with configuration details on the internet is just asking for trouble.

When you only ssh to one or two servers which run the same OS and have a similar setup, it is easy to keep all the relevant information about them in your head. When you have a dozen machines under your wing, some of which are work servers and some of which run mostly for fun, managing them gets a bit more confusing. Especially if they run different OS's and have been configured in different ways for different purposes. What really got me into this habit was adding FreeBSD into the mix of my servers.

Prior to my adventures with FreeBSD I would mostly stick to Linux, and more specifically to the Debian/Ubuntu family. BSD, being a Unix, behaves a lot differently, and there are many subtle differences in the way you accomplish various maintenance tasks. I noticed that every time I logged into that server I had to remind myself where specific config files were, how to gracefully restart the web server, and so on. One day I broke down and simply added all of that stuff to /etc/motd, and that was it.

From that point on, I would log into that server and go "Wait, how do I… Oh, never mind – it's right here". It also helps when you are not around and someone else has to apply updates or tweak config files on that server. Quite a few people have told me they love how the machines I maintain have all the useful, relevant info up front, whereas other boxes on our network mostly greet them with a bare bones $ prompt.

To set it up on most systems, all you need to do is edit the /etc/motd file. Ubuntu and Debian are both a little special in this respect, in that they automatically generate that file. In both cases the motd file is actually a link to /var/run/motd, which gets overwritten and regenerated from a script quite frequently.

If you want a custom message of the day on these systems, you need to edit /etc/motd.tail instead. If you happen to be running Ubuntu, that is all you need to do; the changes will be picked up automatically the next time you log in. On Debian you either have to reboot the machine to see the changes, or run something like:

uname -snrvm > /var/run/motd
[ -f /etc/motd.tail ] && cat /etc/motd.tail >> /var/run/motd
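
On newer Ubuntu releases you can also drop an executable snippet into /etc/update-motd.d/ and let the update-motd framework assemble the final message. A minimal sketch (the file name, paths and contents are just placeholders):

#!/bin/sh
# /etc/update-motd.d/99-server-info  (must be chmod +x)
printf "Web root:       /srv/www\n"
printf "Apache configs: /etc/apache2/sites-available\n"
printf "Graceful:       sudo service apache2 reload\n"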

Granted, this advice applies mostly to servers that will only be accessed by administrators. You probably don't want a very detailed and explicit MOTD on multi-user systems that give shell access to people you don't fully trust. In those cases, the MOTD is best employed to display a "Don't fuck around on this system or you'll get banned and/or fired" type of message. On such machines I move my detailed, setup-driven MOTD to my home directory and create a .bash_profile file that basically does:

source .bashrc
[ -f ~/motd ] && cat ~/motd

This still gives me a nice welcome prompt with all the useful information, but does not expose that info to end users.

How do you use the MOTD on your systems? I'm fairly sure there are some really interesting things that could be done with it beyond what I'm using it for right now. Do you make it display useful system information? Diagnostics? Do you ignore it? Let me know in the comments.

What's in your Bash Prompt? (January 14, 2013)

I spend a great deal of time on the command line. That's where I usually manipulate files, do little maintenance tasks, run tests, etc. Because of this, I like to make that environment nice to look at. I typically set up my console terminals and editors with the excellent and rather popular Solarized theme. I like it because it provides a unified look and feel for my command line regardless of the client or the platform. It works in Konsole, iTerm2 and even in PuTTY on Windows via a neat registry hack. Not to mention that the Solarized Dark theme is one of the very few light-text-on-dark-background combinations that doesn't make my eyes hurt.

Last week we talked about various ways to streamline and customize Bash. That post didn't cover one important area: the command prompt itself. That's because it is a complex topic in and of itself.

There are prompt purists out there who swear by the standard, minimalistic prompt like:

PS1="\$ "

This is enough to distinguish whether or not you are logged in as root (though I don't recall the last time I had to log in anywhere as root, thanks to the magical properties of sudo). But that's way too simple for me. For example, I tend to forget where I am in the file system quite easily. Sure, I could always run pwd to see my location, but why do that when I can simply display it in my prompt using the \w tag, like this:

PS1="\w \$ "

While you’re adding crap to your prompt, you might as well go all the way and use the standard prompt string you commonly see on Ubuntu boxes:

PS1="\u@\h:\w \$"

This typically evaluates to the boring username@host:~ $ style prompt. Purists and unix longbeards scoff at this. They ask: "does your username and host change so often you need it in the prompt?" The answer to this is of course… Yes. Kinda. On a typical day I have a bunch of terminals open, each of them logged into a different machine. Having my username and host in the prompt lets me ascertain at a glance whether I am logged into the production server, the test server, my home server, my other home server, my vagrant instance, or just the local prompt. So I actually really need these things there. I also need them not to be this ugly.

Bash lets you use ANSI color codes to colorize your output, using the rather ugly \033 escape sequence followed by a color code. Unfortunately, using these color codes directly turns your prompt definition into a completely unreadable mess. Check this out:

PS1="\[\033[32m\]\u\[\033[0m\]@\[\033[32m\]\h\[\033[0m\]:\[\033[33m\]\ w\[\033[0m\] \$ "

I absolutely hate this. I edit my prompt only once in a blue moon, so I don't memorize these codes. Every time I have to make a change to the prompt I have to make an effort to actually parse this ungodly string. So I like to define my color codes like this:

##-ANSI-COLOR-CODES-##
Color_Off="\[\033[0m\]"
###-Regular-###
Red="\[\033[0;31m\]"
Green="\[\033[0;32m\]"
Purple="\[\033[0;35m\]"
####-Bold-####
BRed="\[\033[1;31m\]"
BPurple="\[\033[1;35m\]"

Then I can just write my prompt with readable variable names. Tell me this is not much, much better than the thorn-bush-tangle of a line from the paragraph above:

PS1+="$BRed\u$Color_Off@$BRed\h$Color_Off:$BPurple\w$Color_Off \$"

This gives you a prompt that looks like the one you might have seen in my PHP or Bash tutorials in the past – a red username/host part and a purple path. It is simple, aesthetic and useful. Lately though, I've been kinda jealous of people with tricked-out prompts that show the status of the last command or the state of their git repository. So I decided to get fancy and write my own.

The status of the last command was pretty easy. You just check the $? variable and set the colors appropriately. I wanted to do it nicely, so I wrote it as a function.

# Status of last command (for prompt)
function __stat() { 
    if [ $? -eq 0 ]; then 
        echo -en "$Green[✔]$Color_Off" 
    else 
        echo -en "$Red[✘]$Color_Off" 
    fi 
}

The git part was a little more difficult, because while most systems have a __git_ps1 function designed specifically for use in prompts, it only returns the name of the current branch, not its status. What I really wanted was a prompt that could tell me whether or not my index is "dirty".

Most of the time when I cd into a git repository, I run git status to see if there are any uncommitted or untracked files in there. Wouldn't it be nice to have that information in the prompt? But how?

Well, I tried a couple of different things and settled on a solution I mostly stole from here. It turns out that there is no way to avoid running git status, but you can force your prompt to run it automatically and then parse the results, like this:

# Display the branch name of git repository
# Green -> clean
# purple -> untracked files
# red -> files to commit
function __git_prompt() {

    local git_status="`git status -unormal 2>&1`"

    if ! [[ "$git_status" =~ Not\ a\ git\ repo ]]; then
        if [[ "$git_status" =~ nothing\ to\ commit ]]; then
            local Color_On=$Green
        elif [[ "$git_status" =~ nothing\ added\ to\ commit\ but\ untracked\ files\ present ]]; then
            local Color_On=$Purple
        else
            local Color_On=$Red
        fi

        if [[ "$git_status" =~ On\ branch\ ([^[:space:]]+) ]]; then
            branch=${BASH_REMATCH[1]}
        else
            # Detached HEAD. (branch=HEAD is a faster alternative.)
            branch="(`git describe --all --contains --abbrev=4 HEAD 2> /dev/null || echo HEAD`)"
        fi

        echo -ne "$Color_On[$branch]$Color_Off "
    fi
}

This is not a perfect solution. It parses what is commonly known as porcelain – the user-readable output that is bound to change. So upgrading git to the next release is likely to break it, if they decide to change the wording of these messages a bit. Still, this is by far the fastest solution, utilizing only a single external command (other than Bash built-ins).
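
As an aside, git does have a stable, machine-readable mode; confusingly, the flag is named --porcelain. A hedged sketch of how the same classification could lean on it instead:

# a standalone sketch: classify the working tree from --porcelain output
# (repository detection omitted for brevity)
git_status="$(git status --porcelain 2>/dev/null)"
if [ -z "$git_status" ]; then
    echo "clean"                    # no output at all: nothing changed
elif ! echo "$git_status" | grep -qv '^??'; then
    echo "untracked files only"     # every line starts with ??
else
    echo "files to commit"          # modified or staged content present
fi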

Now you put it all together like this:

PS1=""
# command status (shows check-mark or red x if last command failed)
PS1+='$(__stat) '$Color_Off

# debian chroot stuff (take it or leave it)
PS1+="${debian_chroot:+($debian_chroot)}"

# basic information (user@host:path)
PS1+="$BRed\u$Color_Off@$BRed\h$Color_Off:$BPurple\w$Color_Off "

# add git display to prompt
PS1+='$(__git_prompt)'$Color_Off

# prompt $ or # for root
PS1+="\$ "
export PS1

In a perfect world you ought to have a prompt that looks like this:

My Bash Prompt

Unfortunately, it does not work. Or rather it outputs something that looks like this:

\[\][✔]\[\] luke@firmin:~ \[\][master]\[\] $

The stuff in the middle is fine, but the non-printable character escape codes \[ and \] get printed literally by the function calls. Why? Well, internally Bash recognizes these as escape codes when it parses the PS1 prompt variable, but the echo command does not recognize them as such. So you can't use these characters when you echo.

You could always set up your color codes without these characters like this:

Color_Off="\033[0m"
Red="\033[0;31m"
Green="\033[0;32m"
Purple="033[0;35"

Now my functions will work correctly, but this causes another issue. If you don't use the \[ and \] escape codes, Bash treats the ANSI codes as printable characters when it calculates the column count. If your column count is off, then your commands won't wrap to the next line. Instead they will wrap around and overwrite your prompt. This is endlessly frustrating and annoying. Unfortunately it is a catch-22: for the PS1 prompt to work correctly you must escape color codes, but if you use functions you must echo the results and can't escape properly.

I spent countless hours fighting with this issue, until I realized that using functions was just not a practical idea. In most scripting environments this would be a sound choice – modular code, encapsulation, etc. But Bash just doesn’t work that way.

So I rewrote it using the PROMPT_COMMAND variable. Normally your PS1 gets evaluated when a Bash shell is instantiated. The trick I was using was injecting function literals into that variable, to make Bash run them each time it renders the prompt. This is hackish, but that's how most people do it.

There is actually a proper way to set up Bash to build your prompt at run time. What you do is assign a callback function to the PROMPT_COMMAND variable. This function then gets called every time Bash is about to print your PS1. What's more important is that this function can manipulate and update the PS1 variable before it gets printed. So as long as you do all your magic in that function, you can append variables directly to PS1 without the need to echo them.

Here is my solution:

# set up command prompt
function __prompt_command()
{
    # capture the exit status of the last command
    EXIT="$?"
    PS1=""

    if [ $EXIT -eq 0 ]; then PS1+="\[$Green\][\!]\[$Color_Off\] "; else PS1+="\[$Red\][\!]\[$Color_Off\] "; fi

    # if logged in via ssh, show the ip of the client
    if [ -n "$SSH_CLIENT" ]; then PS1+="\[$Yellow\](${SSH_CLIENT%% *})\[$Color_Off\]"; fi
    
    # debian chroot stuff (take it or leave it)
    PS1+="${debian_chroot:+($debian_chroot)}"

    # basic information (user@host:path)
    PS1+="\[$BRed\]\u\[$Color_Off\]@\[$BRed\]\h\[$Color_Off\]:\[$BPurple\]  \w\[$Color_Off\] "
    
    # check if inside git repo
    local git_status="`git status -unormal 2>&1`"    
    if ! [[ "$git_status" =~ Not\ a\ git\ repo ]]; then
        # parse the porcelain output of git status
        if [[ "$git_status" =~ nothing\ to\ commit ]]; then
            local Color_On=$Green
        elif [[ "$git_status" =~ nothing\ added\ to\ commit\ but\ untracked\ files\ present ]]; then
            local Color_On=$Purple
        else
            local Color_On=$Red
        fi

        if [[ "$git_status" =~ On\ branch\ ([^[:space:]]+) ]]; then
            branch=${BASH_REMATCH[1]}
        else
            # Detached HEAD. (branch=HEAD is a faster alternative.)
            branch="(`git describe --all --contains --abbrev=4 HEAD 2> /dev/null || echo HEAD`)"
        fi

        # add the result to prompt
        PS1+="\[$Color_On\][$branch]\[$Color_Off\] "
    fi

    # prompt $ or # for root
    PS1+="\$ "
}
PROMPT_COMMAND=__prompt_command

The end result looks like this:

Improved Prompt

As you can see, I abandoned the check-mark and x notation in favor of command history numbers. Why? Mostly because those characters were causing line wrapping issues in some terminals (notably PuTTY). But also because the history numbers are actually somewhat more useful than a simple ok/fail indicator, which is effectively communicated by color alone.

In addition, I added a little yellow tag that shows your remote IP if you happen to be connected via SSH, like this:

Logging into Remote Systems

Why do I need that in my prompt? Well, sometimes it is useful to know your IP – for example, so you don't firewall yourself off from a remote system by accident. It also helps when you do port forwarding or have to play SSH-inception to get somewhere. For example, at work I can't really SSH out of the network because of draconian firewall rules. The only outbound ports that are open to end users are 80 and 443. So if for some reason I need to log in to one of my Linode boxes or to a university server, I first have to SSH home. I run a tiny server listening for SSH connections on port 80 at home for the sole purpose of being my "relay" when I'm at work. Seeing the IP in the prompt gives me an idea of where the hell I am logged in from when I suddenly find an open terminal with a live ssh session from 4 hours ago.

How about you? What is on your prompt?

Bash Tips and Tricks (January 7, 2013)

We haven't done one of these threads in a while, have we? Let's share our favorite shell tips and tricks. I'll talk about Bash, because that's what I use in my day-to-day work. If you happen to be a Zsh user, please share your favorite tricks as well.

History

Here is an idea – save all the histories! We are running modern computers with lots of memory and storage space: we might as well preserve our bash history forever (or at the very least, for a very long time). This gives us the opportunity to reverse search and thus avoid typing the same commands over and over. It also helps with forensics, in case you need to find out what went wrong with your system.

Here is what I usually put in my .bashrc:

# save all the histories
export HISTFILESIZE=1000000
export HISTSIZE=1000000

Since we're allowing our command history to accumulate, it is a good idea to keep it as tidy as possible. This involves removing duplicate commands, and combining multi-line commands so that they can be re-run from the history with ease:

# don't put duplicate lines or empty spaces in the history
export HISTCONTROL=ignoreboth

# combine multiline commands in history
shopt -s cmdhist

Sometimes you have multiple terminal windows open at the same time. By default, the window that closes last will overwrite the bash history file, losing the history of all the other terminal windows and ssh sessions in the process. This can be avoided with this little setting:

# merge session histories
shopt -s histappend 
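
histappend only merges things when a shell exits. If you also want commands from live sessions to land in the file immediately, a common companion tweak (assuming nothing else needs PROMPT_COMMAND) is:

# flush each command to the history file as soon as it is executed
export PROMPT_COMMAND="history -a"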

Most users are aware that you can use the ↑ and ↓ arrow keys to browse and repeat recent commands. That's merely the tip of the iceberg.

Using the History Effectively

As you know, you can see your command history at any time by issuing the history command. The entries in the history are numbered sequentially, and the exclamation mark can be used to run these commands by number, like this:

Using Bash History

There are better ways of using the exclamation mark, though. For example, my favorite history expansion is !!, which repeats the last command. How is this different from ↑+Enter? Well, consider the following example:

# re-run last command with sudo
sudo !!

I use this shortcut every time I need to install something on Ubuntu, or edit something in /etc. No, I'm not joking – I always forget sudo, and ever since I learned this little trick, I think I have saved hundreds of keystrokes per month.

My second favorite expansion is !?. It works just like the above, but it re-runs the last command that matches the string immediately following it. For example:

# re-run last apt-get command with sudo
sudo !?apt

It is useful for those occasions when you forget sudo, then mistype something or go do something else, and then want to re-run the command a few lines later.

My third favorite history expansion is !$, which expands to the last argument of the last command. It's probably best to use an example here:

# trying to copy file to directory you don't own
cp foo.dat /some/long/path/that/is/annoying/to/type/

# let's take ownership of that dir
sudo chown luke !$

That last trick can actually be accomplished with a single key stroke: Alt+.. If you happen to be working on a Mac, you can use Esc, . (Esc, then dot) instead.

There is also a !^ expansion, which, as you can probably guess, returns the first argument of the last command.

If you just typed in a really long, complicated command and managed to mess it up, you can use fc (fix command) to load said command into your default editor. You can then fix it, taking full advantage of syntax highlighting and editor features. Once you save and quit, the command will be executed automatically.
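
fc can also just list recent history without opening an editor, which is handy for grabbing the numbers used with ! expansion:

# list the last five commands, with their history numbers
fc -l -5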

If you want to jump straight into editor driven command composition, you can simply hit Ctrl+X, Ctrl+E. This will open a blank editor window, and then execute the buffer contents upon exiting.

Finally, if you only remember one keystroke from this post, it should be Ctrl+R, which is the reverse incremental search through history. Simply hit it and start typing: matching commands from your history will pop up immediately. You can execute one by hitting Enter, or keep hitting Ctrl+R to go further back through the history.

Key Combinations

Pretty much everyone knows that Ctrl+C kills the current process and that Ctrl+D will quit the current shell and log you out. I like to put this in my .bashrc to prevent that last thing from happening by accident:

# Ctrl+D must be pressed twice
export IGNOREEOF=1

Most people do know that Ctrl+Z will suspend and background a running task, and that it can be brought back to focus using the fg (foreground) command (or that a list of suspended tasks can be seen by running jobs). But there are a lot more key-strokes that most people are unaware of.

Bash actually uses a lot of Emacs key bindings. You could set it to Vim mode, but that might get a little weird at times. I prefer to keep it in the default Emacs mode to keep things simple. Here are a few important keyboard shortcuts to remember:

  • Ctrl+A – jump to the beginning of the line
  • Ctrl+E – jump to the end of the line
  • Ctrl+U – delete from the cursor to the beginning of the line
  • Ctrl+L – clear the screen
  • Ctrl+W – delete the word before the cursor
  • Ctrl+K – delete to the end of the line
  • Alt+T – swap current word with previous (also Esc, T)
  • Alt+F – jump one word forward (also Esc, F)
  • Alt+B – jump one word backward (also Esc, B)
  • Alt+U – uppercase to the end of the current word (also Esc, U)
  • Alt+L – lowercase to the end of the current word (also Esc, L)
  • Alt+. – insert the last argument (also Esc, .)
  • Ctrl+R – reverse incremental history search
  • Ctrl+X, Ctrl+E – open default text editor to edit a command

There are more, but these are the ones I use most often.

Colorize Your World

By default, bash is pretty boring color-wise. This is of course by design, since in the past there was always a chance you could be logging in from a teletype or a dumb terminal that could not handle color codes. These days this is not much of a concern, so I like to spruce up my shell a bit, like this:

# enable colors
eval "`dircolors -b`"
    
# force ls to always use color and type indicators
alias ls='ls -hF --color=auto'

# make the dir command work kinda like in windows (long format)
alias dir='ls --color=auto --format=long'

# make grep highlight results using color
export GREP_OPTIONS='--color=auto'

I also like to colorize my man pages (and all less output in general):

# colorful man pages
export LESS_TERMCAP_mb=$'\E[01;31m'
export LESS_TERMCAP_md=$'\E[01;31m'
export LESS_TERMCAP_me=$'\E[0m'
export LESS_TERMCAP_se=$'\E[0m' # end the info box
export LESS_TERMCAP_so=$'\E[01;42;30m' # begin the info box
export LESS_TERMCAP_ue=$'\E[0m'
export LESS_TERMCAP_us=$'\E[01;32m'

The end result looks pretty good:

Colored Man Pages

When possible, I like to use alternate commands that are more visually pleasing – for example, colordiff instead of plain old diff, and htop instead of top. Unfortunately these are not always installed on new systems, so here is how I handle that:

# use colordiff instead of diff if available
command -v colordiff >/dev/null 2>&1 && alias diff="colordiff -u"

# use htop instead of top if installed
command -v htop >/dev/null 2>&1 && alias top=htop

The command -v bit checks if the app in question is installed, and creates the alias only if it is found. I’m redirecting the output so that I don’t get error messages scrolling on the screen when I log in.

And some other stuff…

Here are a few remaining bits that didn’t fit into the previous categories:

# corrects typos (eg: cd /ect becomes cd /etc)
shopt -s cdspell

# resize output to fit the window
shopt -s checkwinsize

I often see people using the cd ~ command, which is quite redundant. Typing in cd on its own, without arguments, takes you to your home directory anyway. While visually similar, the cd - command is much, much more useful. It takes you to the last directory that you were in. It is great for jumping between two different directories without the need to use pushd and popd:

cd /some/long/path
cd /some/other/path
cd - # takes you to /some/long/path
cd - # takes you to /some/other/path

Finally, there is the <( ) operator, also known as process substitution. It is sort of the reverse of > in that it lets bash treat the output of a command as a temporary file (but you don’t have to specify the name). Perhaps an example is in order:

# treat output as file
diff /etc/hosts <(ssh somewhereelse cat /etc/hosts)

That's about all that I have for now. What are your favorite tips and tricks for your shell of choice (be it bash, zsh, tcsh or something else entirely)? Let me know in the comments.

]]>
http://www.terminally-incoherent.com/blog/2013/01/07/bash-tips-and-tricks/feed/ 9
Ubuntu Disk Cleanup http://www.terminally-incoherent.com/blog/2012/11/10/ubuntu-disk-cleanup/ http://www.terminally-incoherent.com/blog/2012/11/10/ubuntu-disk-cleanup/#comments Sat, 10 Nov 2012 15:09:23 +0000 http://www.terminally-incoherent.com/blog/?p=12998 Continue reading ]]> This fine morning KDE greeted me with a particularly nasty warning:

Do you think I’m low on space?

Seems like it is time for some spring cleanup… And by spring, I of course mean winter. But where to start?

Well, the best place to start is usually nuking your temp files. There are many ways to do this, but my go-to tool is BleachBit. It is a multi-platform tool that does a remarkably decent job sweeping up the garbage that crufts up linux filesystems, without actually doing any major damage. Also, it is quite fast.

Unfortunately, in my case it only managed to free up about 300MB of space. That’s certainly bigger than the 89MB of free space I had previously, but still not great.

Here is a question: what does it mean when deleting temp files makes almost no difference with respect to unused space on your hard drive? It means that all the space was eaten up by your activity – files you downloaded, software you installed, and so on. So the first thing to do is to clean up your home directory.

If you are like me, you probably have one or more “dump” folders where you just shove random files you don’t feel like filing anywhere else. Mine are:

  • ~/Downloads/ which is the default download folder in Chrome
  • ~/tmp/ which is where I dump random files, logs, and so on. For example, if I need to quickly photoshop a file and upload it to the internet for lulz, it goes into this directory.
  • ~/wkspc/ is a higher-level temp dir where I put random tests and code snippets I don’t plan on saving

As a rule, it is usually safe for me to purge these directories outright when I’m running low on space. Whenever I find myself working on something inside ~/wkspc/ for more than a brief, fleeting moment, I usually move it to ~/projects/ and put it under source control. Everything else is fair game.

Sadly, nuking those folders gave me very meager results – probably because most of the garbage I download and generate on a day-to-day basis is rather small. So where was all my space tied up? I decided to find out using the du command:

du -sBM * | sort -nr

This will give you a nice list of folders ordered by size that looks a bit like this:

Find large folders using du

I actually took this screenshot after doing some cleanup, but you can sort of see how it works. The largest repositories of junk in my case were my Dropbox folder, which I can’t really do much about, and my Desktop folder, where I had a large directory of work-related crap I did not want to get rid of. The rest of the directories looked fairly clean. And yet running df would show that / was 96% full.
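
As an aside, if your GNU coreutils are recent enough for sort to have the -h flag, a minimal alternative sketch gets you the same ordering with human-readable sizes:

# same idea, but with human-readable sizes
du -sh * | sort -rh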

Then I got another idea – my previous search excluded dot-files, since the * glob skips them by default. So why not check them specifically:

du -sBM .* | sort -nr

Can you say jackpot?

Wat r u doin, VirtualBox? Stahp!

My VirtualBox directory had grown to over 50GB. Not good! I immediately sprang into action, deleted a bunch of old snapshots, dumped unused VDI files to an external hard drive, and went to town compacting the current ones. How do you compact them, you ask? Using the friendly VBoxManage command:

VBoxManage modifyhd /path/to/your.vdi compact

Actually, if your VM runs a Windows OS, you should follow the advice listed here (step 3 is sketched out below the list):

  1. Run the VM
  2. Defrag your disk once or twice
  3. Zero-out empty space with Sysinternals sdelete
  4. Then compact it using the command above
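
A sketch of step 3, run inside the Windows guest (sdelete is a free Sysinternals download; the C: drive letter is just an assumption about your VM):

:: from an elevated cmd.exe prompt inside the guest
sdelete -z c: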

For me, this yielded about 10-20% savings per VDI file, which was not insignificant. But since I was already in cleanup mode I decided to keep going.

Having deleted most things in ~ that I was willing to part with, I turned to software. I can be pretty bad about installing software. Often I will download and install packages that I use once and never touch again. I’m especially fond of downloading web browsers, IDEs and scripting language run-times just to see how they feel. Half of these things don’t need to take up space on my hard drive.

So I decided to generate a list of installed packages and order it by size:

dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -rn | less

Biggest offenders?

Use Tmux to make it easier

It seems that the biggest packages on my system were Haskell, Google Chrome, Open Office and… a shit-load of old kernel image files. See, this is the kind of thing that happens on Linux when you just let the OS upgrade itself whenever it wants. Every time there is a kernel security patch or an update, apt leaves the old kernel version intact. This is good, because you can always revert to your old kernel version if the new one breaks everything and ruins your day. But after a while you get a long, long list of defunct kernel images and header files. You can actually see the entire list like this:

dpkg -l 'linux-*'

How do you clean this up? Well, you aptitude purge all the kernel files except the current one. You can check what you are running right now via the uname -r command. Then sit down, and prepare to type in a lot of package names…

Or use this script to generate a list of all installed kernel files, except the current one:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d'

I can’t actually claim ownership of this one – this sed monstrosity was actually made by the folks at the Ubuntu Genius blog. In fact, they went one step further, showing how to automatically purge these things in a single command:

dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge

In my case, the uninstallation took close to an hour to complete, and reclaimed over 8GB of space, without damaging anything important.

For good measure, I went back and uninstalled useless things like extra web browsers, various IDEs, language runtimes and any GUI tools that hadn’t been touched since I installed them. All in all, I think this was a successful cleanup:

After the cleanup

My / is now only at 81% with over 17G of free space. What do you think? Do you have any disk cleanup tips that you would like to share here? Let me know in the comments.

]]>
http://www.terminally-incoherent.com/blog/2012/11/10/ubuntu-disk-cleanup/feed/ 17
Your Top Three Unix Tools http://www.terminally-incoherent.com/blog/2012/10/08/your-top-three-unix-tools/ http://www.terminally-incoherent.com/blog/2012/10/08/your-top-three-unix-tools/#comments Mon, 08 Oct 2012 14:22:41 +0000 http://www.terminally-incoherent.com/blog/?p=12821 Continue reading ]]> Let’s say you set up a brand spanking new Unix/Linux box somewhere, or gain access to a brand new shell account on which you expect to be doing some work. What are the top three things you install first?

I’m not talking about standard shell tools that are part of the coreutils package on most systems – we all love things like ls, grep or wget, but those are usually already there. I want to know your top three indispensable tools that may or may not be present on a stripped-down fresh install.

Here are mine:

Vim 7.3

You would be hard-pressed to find a Unix or unix-derivative system that does not ship out of the box with a copy of vi. Most linux distros actually ship vim and simply alias it to vi in compatible mode. But Vim 7.3 is still pretty hard to come by. Unless you are rolling out a brand spanking new release, chances are you will have version 7.2 on your system.
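
If you are not sure what you have, checking takes a second:

# print just the version line
vim --version | head -1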

Why do I prefer 7.3? Well, it has a handful of features that I’ve grown to like and rely on:

  1. Persistent Undo is a huge help. Unlike most editors, Vim 7.3 will preserve your undo history even after you save and close the file. This is like having a poor-man’s version control for your files, without actually using version control.

    Unlike most text editors, which track edits in a linear way, Vim keeps the undo history as a tree data structure which you can traverse back and forth without actually losing any work. There exist plug-ins such as Gundo which help you visualize and browse your past edit branches with ease.

    As you can imagine, once you get used to undo history persisting after a file is saved and closed, it is hard to wean yourself off of it. It is just too useful and convenient a feature not to take advantage of.

  2. Relative Line Numbers – 7.3 has this nifty feature which numbers the lines relative to the cursor position. So the current line you are editing is always line 0, and the numbers grow up and down away from it. Why on earth would you want that? Well, it’s a Vim thing, really. Vim commands don’t take absolute line numbers or ranges as arguments – they take offsets. So instead of saying something like “do this on lines 5 through 7” you instead say “apply this to the next three lines”. Which means that seeing at a glance that the end of the code block you want to manipulate is N lines away from your cursor, without having to count, is extremely beneficial. It makes you more productive. (Both features take a couple of lines in your .vimrc – see the sketch below this list.)
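
For reference, a minimal .vimrc sketch that turns both features on (the undodir path is just my choice – any directory you create yourself will do):

" persist undo history across sessions (Vim 7.3+)
set undofile
set undodir=~/.vim/undo    " Vim won't create this directory for you

" number lines relative to the cursor
set relativenumber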

Could I survive working in 7.2? Yeah, probably – but if I can help it, I install 7.3 just to have a uniform work environment across all the machines I work on.

As to why I included Vim itself on this list: I believe I already answered that question quite thoroughly.

Tmux

I found out about tmux only a few months ago, but I’m already addicted to it. It is a tool so indispensable to me that I will go to great lengths to compile it from source if it happens to be unavailable on a machine I need to be using. Especially if it is a remote machine.

Tmux is a terminal multiplexer – a drop-in replacement for the venerable screen. If you have never used screen, I am about to rock your world, my friend. This is the tool that will revolutionize how you work on the command line.

Here is how I explain this tool to complete n00bs: you know how vim and emacs have buffers that you can toggle between or even put side by side in split-screen mode? Terminal multiplexers give you exactly that – but for your shell.

Let me give you a hypothetical example: let’s say you ssh to a server and you are editing some config file. You need to change some value, but you don’t remember what the allowed values and ranges for that setting are. What do you do? Well, you could Ctrl+Z out of your editor, check the man page, and fg back… Or you could grab the mouse and look the value up online, or locally.

Or you could split the screen, and open the man page side by side with the text editor like this:

Using Tmux on a remote server

Here you see me editing .tmux.conf, viewing the man page, and keeping an eye on how hard the PogoPlug is working by running top. All on one screen, via a single PuTTY SSH session from Windows. And if I accidentally close my PuTTY window, all I need to do is ssh back into the box and issue a single command:

tmux attach -t session_name

All the stuff that I had open doesn’t close when my connection is lost – it keeps on chugging. This is great for big compile jobs: you just type in make, log off, and go eat your dinner while the code compiles in the background.
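
That workflow, sketched out (the session name is arbitrary):

# start a named session and kick off the build inside it
tmux new -s build
make

# hit Ctrl+B, then d, to detach; log off and go eat
# later: reattach and see how the build went
tmux attach -t build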

Why Tmux and not Screen? Because it is easier to use. The key bindings are easier to configure, and the screen splits look aesthetically nicer. For example, in Tmux opening a new shell instance in a vertical split-screen buffer is a single action, whereas with screen it is usually two (first you split the screen, then you create a shell session in the new buffer).
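
For reference, the stock key bindings for the splits and sessions mentioned above (the default prefix is Ctrl+B – press it, release, then hit the key):

Ctrl+B %    # split the current pane vertically (side by side)
Ctrl+B "    # split the current pane horizontally (stacked)
Ctrl+B o    # jump to the next pane
Ctrl+B d    # detach, leaving the session running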

Git

Without git I can’t get anything done. For one, I keep a lot of my config files under source control. When I set up a new machine, I usually immediately install git, so that I can clone my .bashrc, .tmux.conf and .vim/ files and get my working environment in order. Only after all of that is done can I start doing actual work.
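
That first-run ritual looks roughly like this (the repository URL is hypothetical – point it at wherever you keep your own dotfiles):

sudo apt-get install git
git clone https://github.com/you/dotfiles.git ~/dotfiles

# symlink the configs into place, clobbering the distro defaults
ln -sf ~/dotfiles/.bashrc ~/.bashrc
ln -sf ~/dotfiles/.tmux.conf ~/.tmux.conf
ln -sfn ~/dotfiles/.vim ~/.vim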

Git is useful for more than that though. My personal philosophy is that anything I have spent more than 10-15 minutes creating ought to be under version control of some sort. This is sort of a rule to live by, and every time I have bent or violated it I got burned pretty harshly. But do not just use it locally – a local git repository can easily be blown away, along with your work, by an accidental deletion, a hard drive failure or a myriad of other accidents. If it’s worth sharing, put it on Github. If it is private, stow it away on BitBucket. If you do not ever want it to leave the confines of your LAN, set up a bare repository on another machine you own, and push your work there regularly so it is in at least two places (well, four – because you are backing both machines up, right?).
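
Setting up that LAN-only bare repository takes about two minutes – a sketch, assuming a second box called backupbox that you can ssh into:

# on backupbox: create an empty bare repository
git init --bare ~/repos/project.git

# on your workstation: point a remote at it and push
git remote add backup you@backupbox:repos/project.git
git push backup master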

Granted, git is not the only source control system out there, but it is the one I have become fond of. It runs on just about every platform, and once you grok the basics it is actually fairly straightforward to use – especially for small, single-person projects. It has great remote services, and a pretty good assortment of client-side tools (I hear good things about the Github clients for Mac/Windows, and stuff like TortoiseGit).

What are your top 3 tools that you couldn’t live without? Let me know in the comments.

]]>
http://www.terminally-incoherent.com/blog/2012/10/08/your-top-three-unix-tools/feed/ 6