sysadmin notes – Terminally Incoherent (http://www.terminally-incoherent.com/blog) – I will not fix your computer.

Using BTSync Behind a Corporate Firewall (Tue, 17 Mar 2015)
http://www.terminally-incoherent.com/blog/2015/03/17/using-btsync-behind-a-corporate-firewall/

BitTorrent Sync is pretty neat. I have been using it ever since Scott recommended it in the quintessential backup thread of 2013. It even made it onto my big list of essential tools. It provides a nice alternative to cloud solutions such as Dropbox by enabling you to sync devices directly, without storing your data on a centralized server owned by a third party.

One of the major issues I had with it was using it behind corporate firewalls. When you are on a network that only allows outbound communication on ports 80 and 443, BTSync is completely useless. Unlike Dropbox or Google Drive, which both have absolutely no issues synchronizing folders in such an environment, the BTSync client simply does not work at all.

And yes, before you say anything, there are reasons to block outbound traffic on port 22. Firstly, if on average the number of users who need to ssh out of that location approaches zero, then leaving that port open simply increases the attack surface for no reason. Secondly, even if users do need to ssh out, chances are they will be communicating with known servers. Why have a wide open port that can be used and abused, when you can control connections on an IP and MAC address basis, require an audit trail, and demand change-of-permission request documentation when devs ask for more access?

The only outbound ports that are typically wide open are HTTP and HTTPS. Your local BOFHs can’t readily lock them down as tight as they would want to unless they set up a proxy server. Fortunately, proxies break a lot of the modern, dynamic, internet-based things, so chances are you might not have one. And if you do not, then you can funnel your BTSync traffic through an SSH tunnel on an HTTP/HTTPS port.

To get this working you will need a few things:

  • A functional shell with ssh on your work machine
  • An internet accessible remote machine running sshd server
  • Recent BTSync client (obviously)

If outbound communications on port 22 are open at your location, any server to which you have shell access will do. If you can only get out on ports 80 and 443, you will need to configure said server to run the SSH daemon on one of those ports. This unfortunately requires root access.

You set this up by editing /etc/ssh/sshd_config. Search for the word “Port” and simply add another entry below, like this:

# What ports, IPs and protocols we listen for
Port 22
Port 443

Then restart ssh server:

sudo service ssh restart

Make sure you can ssh into it from behind the firewall. If your port 22 is closed, you can specify the alternate port on the command line like this:

ssh -p 443 you@your.host.net
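
If you plan to use this tunnel regularly, an entry in ~/.ssh/config saves you from typing the port and username every time. This is just a sketch – the Host alias, hostname and username are placeholders for whatever you actually use:

# ~/.ssh/config
Host tunnelhost
    HostName your.host.net
    User you
    Port 443

With that in place, ssh tunnelhost is all you need, and the same alias works for the tunnel command below. Either way, confirm that you can log in before moving on.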

If that works, you will now be able to create an SSH tunnel that will act as a SOCKS proxy. On the machine where you want to run the BTSync client, do the following:

ssh -D 9988 you@your.host.net -N -p 443

This will create a SOCKS proxy tunnel running on the local machine on port 9988. You don’t have to use that port number. Feel free to use any other port, as long as it is not taken by anything else. I recommend making a script with this command and saving it somewhere in your path, because you will have to run it whenever you want to enable syncing.
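
For example, the wrapper script could be as simple as the sketch below (the script name is made up, and the port and host are just the values used in this post – substitute your own):

#!/bin/bash
# btsync-tunnel: open a local SOCKS proxy for BTSync over the HTTPS port
# -D 9988  local SOCKS proxy port (must match the BTSync proxy settings)
# -N       do not run a remote command, just forward traffic
# -p 443   connect to the sshd instance listening on the HTTPS port
exec ssh -D 9988 -N -p 443 you@your.host.net

Drop it somewhere in your PATH, chmod +x it, and starting the tunnel becomes a single command.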

Finally, once you have the tunnel running, open the BTSync client, go to Preferences and open the Advanced tab. Check the “Use proxy server” box, then type in the localhost IP and the port number you picked (in my case 9988). Use the default SOCKS4 proxy type:

BtSync Proxy Setup

Save the settings, then pause and restart syncing to make them take effect. Once you do this, you should see your folders syncing up as they should. Of course the sync will stop when the tunnel is closed, but it is better than nothing.
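
If re-launching the tunnel by hand gets old, autossh (assuming you can install it on your work machine) will re-establish the connection automatically whenever it drops. A minimal sketch, using the same port and host as above:

# -M 0 disables autossh's dedicated monitoring port; rely on SSH keepalives instead
autossh -M 0 -N -D 9988 -p 443 \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    you@your.host.net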

Risk averse Workflows, or Why CEO’s Keep Losing Files (Mon, 16 Mar 2015)
http://www.terminally-incoherent.com/blog/2015/03/16/risk-averse-workflows-or-why-ceos-keep-losing-files/

Let’s talk about workflows. If you are a white collar worker, chances are that you spend most of your day creating or editing digital files. Whether you are a programmer, sysadmin, accountant, salesman or a CEO, you will be spending a considerable part of your day messing with data grouped into some sort of logical entity: a document, spreadsheet, source code, etc. Different people have different strategies and approaches for this sort of work.

For example, I have noticed that my personal workflow is very risk averse. It typically starts with a git pull and ends with a git push. As I make changes to a file, I tend to save very often, which is not unusual for programming. When you write code you usually focus on small, discrete, incremental changes that need to be tested in isolation. You make an edit, save the document, check if anything broke, make another change, and so on. When you finish working on a specific task or accomplish a specific goal, you commit the code, encapsulating the changes into a neat snapshot that can be rolled back later. Then you move on to the next thing. Multiple times per day you collect a bunch of these snapshots and push them out to a remote repository.

The entire process is anchored not only to the local file system but also to a revision tracking system which provides me with backups and snapshots of my code. It is actually quite difficult for me to lose more than a few minutes of work due to a mistake or a software glitch. I always have at least 3 recent copies of the project: the working copy in storage, the local revision history, and the remote repository. More if I’m feeling adventurous and create feature branches, which provide yet another working copy that is separate from the main one. It is a very safe way to work.
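
In shell terms, a day spent in this workflow looks roughly like the sketch below (the branch and remote names are just examples):

git pull                            # start from the latest remote state
git checkout -b some-feature        # optional: isolate risky work on a branch
# ...edit, save, test, repeat...
git add -p                          # stage the change in reviewable chunks
git commit -m "describe the change" # local snapshot that can be rolled back
git push origin some-feature        # off-site copy of today's snapshots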

Busy CEO Workflow

This is very different from what I call the “busy CEO workflow”, which starts and ends within Outlook. I was recently able to observe several people using this exact Microsoft Office driven workflow, and I was baffled by how risky and failure prone it was. I would never choose to work this way, if for no other reason than to save myself the stress and preserve my own sanity.

Let me try to outline this workflow for you:

  1. You start by receiving a Word/Excel document attached to an email
  2. You double click on that attachment to open it
  3. You laboriously make dozens of changes over the span of the next 3 hours
  4. When finished, you hit the “Save and Send” button on the toolbar
  5. Outlook attaches the modified file to a new email

Note how in this particular workflow all the work is being done almost entirely in memory. When you open a Microsoft Office document attachment from Outlook, it opens it directly. It probably puts a working copy somewhere in a local temp folder, but not in a way you could later track down. All the changes you add to the document may or may not be saved to that ephemeral temp file, which will go away the minute you close Outlook.

Microsoft Office does offer you a little bit of protection from glitches and software crashes in the form of the auto-recovery feature (unless of course it was switched off), which will periodically attempt to create a snapshot of your work. If the application does not close cleanly, it prompts you to recover from one of the recent snapshots. Unfortunately these backup copies are immediately deleted when the user deliberately closes the application. So if you accidentally close the wrong window, you are likely to lose all the work.

The “save and send” functionality relies on a magical hand-off happening between two office applications that involves passing around references to an ephemeral, temporary file, hidden away from the user. This interaction is semi-reliable but I have seen it break in such a way that it closes the edited document and silently drops the modified file without actually ever giving the user a chance to send it.

This breakage is not an isolated fluke, by the way. The Microsoft Office interop features are known to be rather fragile. Because of their complexity, Office applications often end up in weird states which may affect these sorts of hand-off situations. In fact, it happened twice in a week when I was working with end users gathering specs for a project. Both times it required closing and re-opening all Office applications to restore the functionality.

This workflow is fraught with data loss risk and has way too many points of failure:

  • There is no user-accessible “work copy” of the file with recent changes
  • The only lifeline is the magical auto-recovery feature
  • The “save” feature is not guaranteed to work all the time

You have got to admit that this is quite bad. If you are a tech savvy person, you know that this is not how one is supposed to work. You are supposed to anchor your work in storage, not in main memory. You are supposed to save often and keep multiple copies of your work to keep track of changes. And yet, this email-to-email, in-place editing workflow is baked right into the very fabric of Microsoft Office. It is easy, convenient, and as such it is really appealing to busy executives who must juggle a lot of balls in the air at all times.

No amount of user education can counteract the “common sense” logic of the “if you’re not supposed to use it, then why did Microsoft include it as a feature” counter-argument. Software developers of course know how fallacious this line of reasoning is: we put half-baked features into our software all the time, and we don’t always have the time or resources to work through all possible use-cases and usage scenarios. Once a feature is in production, it is hard to remove it.

So the universe is full of half-baked convenience features that don’t really work right. I imagine the “save and send” feature was intended for people who just want to fix 3 typos before approving a staff memo or a courtesy letter of some sort. But I’ve just seen someone use it to re-write an 80 page report almost entirely, over the course of almost an entire day. That file sat there, in memory, while the person took their lunch break, responded to other emails, and worked with a dozen other attachments. And that’s quite scary. It is putting a lot of faith in a piece of software…

Which is something I have noticed people do. As a software engineer, the best advice I can probably give you is to never assume any software you use is reliable. It isn’t. Unless it has been developed by NASA for the explicit purpose of flying a rocket into space, then the code is probably a bug ridden mess. And even NASA fucks up every once in a while.

If you consistently lose work due to accidental clicks or software glitches, and someone told you that you could avoid it by modifying your workflow to route around the flaws in the software, would you do it? Or would you keep your workflow and just be mad at flaky software and the IT staff’s inability to make a third party application do things it was never designed to do?

Is there a way to eliminate the busy CEO workflow from your organization? Can you force it out of the system via infrastructure change? Granted, trying to force out Microsoft Office from your organization would be tilting at windmills so that’s probably not a good approach. You will never convince the business folk to give up Word and Excel, but you can sometimes wean people off Outlook. Especially new generations of office workers who grew up on fast, reliable webmail interfaces with endless storage capacities tend to scoff at the very idea of a dedicated email client. And that’s actually a good thing.

For all their flaws, web-mail interfaces do one thing right: they force users to anchor their work in the file system by asking them to save attachments to disk before opening them. This may seem like a major annoyance at first, but that one extra click solves so many issues.

Thoughts? Comments? Ideas?

Zero Effort Link Minification with WordPress (Tue, 17 Feb 2015)
http://www.terminally-incoherent.com/blog/2015/02/17/zero-effort-link-minification-with-wordpress/

If you have ever linked to my blog, you might have noticed that my URLs are absolutely monstrous in size. Not only is my domain name rather long, but I also happen to use the “pretty” style of permalinks, which includes the full date and an abbreviated post title.

There is of course nothing wrong with either of these things. Long domains may not be easy to type, but they can be memorable. Most people don’t even use the address box in their web browser (which used to drive me nuts until I learned to just accept it). These days it is all about fuzzy search in Google, so as long as someone can manage to type something close to termanotely incaharet somewhere, they will find my blog.

Similarly, long permalinks are actually useful due to their descriptive nature. I like to be able to glance at a link and not only know what it is about, but also how long ago it might have been posted. I would not want to get rid of a useful feature like that.

That said, my links are just super long. Take this one for example:

http://www.terminally-incoherent.com/blog/2015/01/19/utility-spells-in-video-games/

It is an absolute beast, and I would love to have the option to shorten it when I post it on social media for example. Yes, Twitter does minify links by default but the truncated URL still looks quite ugly in your tweets.

I could use a third party minifier such as bit.ly or goo.gl, but that is never a good idea. Not only does it obfuscate the links, but it also sets them up for possible breakage in the future when said third party services shut down, which is not uncommon. I have seen quite a few of them appear and go under in just the last few years, and being backed by a big company does not seem to help. Google might not be going anywhere anytime soon, but they shut down their services all the time. Personally, I got burned twice with them: first with Google Notebook (an Evernote precursor), and then with Google Reader. URLs are supposed to be forever, so I wouldn’t feel comfortable using any service that is not under my control. Ideally I would want to use my own domain for link minification.

I noticed that WordPress already supports shortened links in a way. If you have permalinks enabled (and you should), WordPress calls your standard, long, “pretty” links the canonical URLs. However, it also provides a so-called shortlink for each post. You can see it shown in the post header on most themes.


The shortlink format is actually the default style of WordPress URLs that you get if you can’t be bothered (or are unable) to set up permalinks. It acts as a fallback, and allows you to access blog posts by their internal database ID even if the permalinks are not working correctly.

So if I wanted to, I could link to my posts using the shortlink format like this:

http://www.terminally-incoherent.com/blog/?p=18242

If you have a short domain name, this might be enough for you. Mine is still too long, and I absolutely hate parameters in the URL; IMHO, it looks unprofessional. However, knowing that this format exists allows you to shorten and prettify it using Apache redirects. For example, I could put the following line in my .htaccess file:

RedirectMatch ^/(\b\d+\b)/?$ /blog/?p=$1

This matches any URL path that contains only numerical digits followed by an optional trailing slash, and seamlessly redirects it to the shortlink style URL format. This rule allows me to use something like:

terminally-incoherent.com/18242

Because of the way WordPress handles shortlinks, users who follow the link above won’t actually see it in the address box when the page loads. Instead WordPress will automatically unfold it to the “pretty” permalink format, which is exactly what I want.
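
You can verify the whole chain without a browser. Something along these lines (using one of my post IDs) should show the .htaccess 301 followed by WordPress’s own redirect to the canonical permalink; the exact output will vary:

# -s: silent, -I: headers only, -L: follow redirects
curl -sIL http://www.terminally-incoherent.com/18242 | grep -iE '^(HTTP|Location)'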

This is a huge improvement. Still too long though.

To compress my addresses even further I purchased a brand domain: tein.co. Unfortunately ti.co and in.co were already taken, so this is the next best thing I could think of. It’s not pronounceable, but it is short, and visually similar to my long domain.

All that was left was to set the new domain to “web forward” to my actual blog URL and I was all set. I use iwantmyname as my registrar and it took literally 60 seconds to set it up:

Web Forwarding

From there it worked pretty seamlessly. If you are feeling lazy and you don’t want to type in my full web address you can now simply use tein.co. Similarly, every post now has a genuine short link (displayed under post title) in the format:

tein.co/18242

Because the entire thing is implemented via 301 redirects, if you post one of these short links on Twitter you will still get that sweet Twitter Card action.

So there you have it: zero effort link minification using the built-in WordPress addressing scheme and a single Apache redirect statement.

Spectacular Computer Failures: The Next Generation (Mon, 22 Dec 2014)
http://www.terminally-incoherent.com/blog/2014/12/22/spectacular-computer-failures-the-next-generation/

If you have been wondering why blog posts have been scarce lately, it is partly because my computer blew up again. Yes, the new one that I bought in September. If you have been following along, you might remember that last year I blew a video card in my old rig. I managed to squeeze maybe six more months of use out of that old rig by putting in a new video card, until the motherboard died in August. In September I got a brand new machine, and it started having issues on December 5.

I figured I would post about the symptoms and the experience here in case anyone else decided to buy an Alienware Aurora-R4 with a dual NVIDIA GeForce GTX 780 setup, only to have it die a few months later.

The problem started while I was playing a game (FarCry 4, for reference) and the machine completely froze up. It was a hard lock-up with a non-responsive keyboard, and speakers stuck repeating a single bleep over and over again. The video winked out a few seconds later and my monitor dutifully displayed a “NO DVI SIGNAL” message, but the speakers kept on going. I ended up having to power cycle it just to get rid of the noise.

This was kinda odd, since FarCry 4 has been remarkably polished and bug free (as it should be, since it is basically FarCry 3 with a palette swap), so such a hard crash was unexpected. But the machine rebooted just fine. Since it was already late, I thought nothing of it, logged off and went to sleep, assuming this was the universe’s way of telling me to get off the computer.

The next day I was doing something in Photoshop, and the machine did it again: all of a sudden my screen went blank, and then about 30 seconds later I saw the BIOS POST screen and the computer started rebooting itself. Again, I was a bit concerned, but after it powered up it was fine, and I was unable to reproduce the crash by just toying around in Photoshop, so I wrote it off as a one-time glitch.

It wasn’t until I went back to FarCry 4 that I saw a persistent issue. Every time I started the game it would load up, show me the main menu, let me load a saved game, display a progress bar, and then as soon as the actual game would start the screen would go blank. I would then get the “NO DVI SIGNAL” message from my monitor, followed by a reboot shortly after. This happened every single time.

My Event Viewer

As soon as I had a reproducible issue, I started digging. The first place I went was the Windows Event Viewer which, unsurprisingly, was full of critical Kernel-Power errors. I checked the timing, and each of them coincided with a hard crash and reboot. They all looked more or less like this:

Log Name:      System
Source:        Microsoft-Windows-Kernel-Power
Level:         Critical
Description:
The system has rebooted without cleanly shutting down first. 
This error could be caused if the system stopped responding, crashed, or lost power unexpectedly.

BugcheckCode: 278
BugcheckParameter1: 0xfffffa80140da4e0
BugcheckParameter2: 0xfffff8800fc16828
BugcheckParameter3: 0xffffffffc000009a
BugcheckParameter4: 0x4
SleepInProgress: false
PowerButtonTimestamp: 0

This was not very helpful, but after some research I found out that Bugcheck 278 is actually equivalent to BSOD 0x116 also known as VIDEO_TDR_ERROR. The most approachable description of this issue I found was:

This indicates that an attempt to reset the display driver and recover from a timeout failed.

In other words, it was a video issue that would normally result in a blue screen of death, but since it crashed the entire video processing stack, said BSOD could never actually be displayed. Possible causes of this error were as follows:

  • Bad video driver acting up (not unusual from nVidia)
  • Bad RAM chip causing discrepancies when syncing with VRAM
  • Bad video card

So I went down this list, trying to nail down the exact issue. First, I upgraded to the latest nVidia driver. I don’t actually remember which version I had when I started the process, but I knew it was slightly behind. So I downloaded the latest and greatest, and updated it. This did not solve the problem. I decided to go the other way, and tried four previous versions of the driver, as well as two previous beta versions. None of them got rid of the crashes. It’s probably worth noting I was doing “clean” installs – meaning I would uninstall the current driver, reboot, and then install another one to avoid weird conflicts.

Next I tried the Dell pre-boot diagnostics. It is an on-board feature on all Dell machines and is usually available from the selective boot menu (accessed by mashing F12 during POST). It doesn’t really do anything useful, but in case of detectable hardware failures it typically spits out an error code which can be given to Dell tech support, circumventing a lot of bullshit like checking if the computer is plugged in, wiggling the wires, etc. Not only that – the Dell warranty support drones usually like to tell you to run the hour-long extended test anyway and refuse to stick around on the phone as you do, necessitating a call-back.

Unfortunately, the pre-boot diagnostics module gave my computer a clean bill of health. Granted, it did not really have any extended tests it could run on the video cards – it would simply check if they were present and responding. It did however confirm that there were no issues with the memory. Just to double check that, I booted into a MemTest CD and ran it for about 12 hours (started in the evening, finished the next day when I came back from work) and it did not show any errors.

The Alienware machine also came with something called Alien Autopsy which is yet another diagnostic tool. This one is a bit friendlier, since it does not require you to reboot your machine, and it also has seemingly more thorough tests for the video cards. So I decided to run that as well.

Alienware Alien Autopsy

The video testing involves a thorough VRAM test and a few video benchmarks during which it renders some spaceships on the screen, spins them around, and tests real time shaders, transparency, the graphics pipeline, etc. As soon as I started running those, my machine started crashing and rebooting itself. It was reproducible, consistently failing about half-way through the benchmarks. I couldn’t pin down the crash to a single benchmark or test case, but I ran it about 20 times and I never managed to get through all of them without the machine shutting down on me. At this point I was fairly confident it was an issue with one of the video cards.

Armed with that evidence I phoned the Dell Alienware support line and gave them all of the details outlined above. The guy on the other end listened to my spiel, looked through his notes and admitted I had covered pretty much all the bases. He made me check my BIOS version to see if it needed to be updated, but it turned out I had the latest and greatest one. So he agreed I needed the video cards replaced. I was expecting him to tell me to disable SLI and start pulling cards out to narrow down which one was the faulty one, but he just set up a dispatch to replace both of my cards.

Luckily I purchased the next business day on-site service warranty, so it only took them a week and a half to get it fixed.

I’m happy to report that replacing the cards completely fixed my issue. I was a little concerned this was going to turn out to be a motherboard problem – because knowing my luck it would. But I haven’t seen the dreaded Bugcheck 278 crash since the new cards were installed. I’m currently trying to finish FarCry 4 so that I can go through some of my Steam Holiday Sale backlog, and probably Dragon Age Inquisition.

I also have a few book and comic reviews in the pipeline, and I’ve been toying with the idea of doing a Ravenflight-style series but for an SF themed setting. So I’m not dead, do not unsubscribe from the blog yet.

Installing Arch Linux on the PogoPlug (Mon, 03 Nov 2014)
http://www.terminally-incoherent.com/blog/2014/11/03/installing-arch-linux-on-the-pogoplug/

Back in 2012 I wrote about how I set up a $30 Linux server by installing Debian Squeeze on a PogoPlug. I had been using the device for close to two years, but it died. I grabbed an identical replacement a few days ago, but for some reason I was having trouble getting Debian working again, despite using an identical device, a similar thumb drive and following the same procedure. The truth is that Debian is not the best OS to run on this device. Pretty much everyone’s go-to system of choice for these devices is Arch Linux, which has excellent ARM support.

I’ve been itching to try Arch for a while now, but I never really had an opportunity, so I figured I might as well use it here. It worked amazingly well, so I figured it would be worthwhile to document the procedure for future reference, especially considering it is slightly different from the Debian procedure. I used this guide but with some significant alterations (see below).

Logging into the PogoPlug

First, you need to figure out the IP of your plug. The best way to do this is to log into your router and match it by name or MAC address. Once you know the IP address you can ssh into it using root as the username and ceadmin as the password.

Preparing the Thumb Drive

The default OS on the PogoPlug is locked down pretty tight. Pretty much the only place with write access on the device is /tmp, so you won’t be able to install to the internal drive (or rather, it is fairly impractical to do so). Instead you want to set up Arch on a thumb drive.

First, you will need to figure out which device the drive is recognized as. I’m fairly sure the top-most USB port on the back of the device always registers as /dev/sda, but you can easily check it by plugging it in and then running:

dmesg | tail

The last few lines should reveal which device the system thinks was plugged in. I’ll assume it was /dev/sda. The first thing you want to do is to repartition it using fdisk:

/sbin/fdisk /dev/sda

Create two new partitions:

  • Press o to blow away all existing partitions on the drive.
  • Press n to create a partition, p to set it as “primary” and 1 to designate it as first
  • Hit Enter to accept the default starting point
  • Specify the size using the format +[size]M where [size] is an actual value in MB. For example, I used +1536M, designating the majority of the space on my 2GB drive for the primary partition and leaving 512MB for swap. If you have a 4GB drive use +3582 and so on.
  • To set up second partition hit n, p, 2
  • Hit Enter to accept the default starting point
  • Hit Enter once again to use all the remaining space
  • Hit t then 2 to change the filesystem on partition 2 and use 82 (Linux Swap)
  • Hit a, 1 to make first partition bootable
  • Hit w to write changes to the disk

When you’re done the p command should return something like:

/dev/sda1   *           1         911     3501853  83 Linux
/dev/sda2             912        1018      411308  82 Linux swap
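
If you end up redoing this on a few drives, the same keystrokes can be fed to fdisk non-interactively. This is only a sketch – triple-check the device name before running it, and be aware that the BusyBox fdisk on the stock PogoPlug firmware may prompt slightly differently:

# o = new partition table, n/p/1 = first primary partition (+1536M),
# n/p/2 = second partition (rest), t/2/82 = swap type, a/1 = bootable, w = write
printf 'o\nn\np\n1\n\n+1536M\nn\np\n2\n\n\nt\n2\n82\na\n1\nw\n' | /sbin/fdisk /dev/sda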

Since Arch uses the ext3 file system, we will want to format the primary partition /dev/sda1 as such. Unfortunately the default OS on the PogoPlug does not ship with support for ext3, so we will need to download the mke2fs tool from the Arch website and then use it to format the partition:

cd /tmp
wget http://archlinuxarm.org/os/pogoplug/mke2fs
chmod +x mke2fs
./mke2fs -j /dev/sda1
mkdir alarm
mount /dev/sda1 alarm

Installing Arch

Now we are ready to download the Kirkwood Arch binaries. The latest builds are close to 200MB in size, which was too big to fit on the PogoPlug system partition. I recommend downloading it to the newly formatted drive instead:

cd alarm
wget http://archlinuxarm.org/os/ArchLinuxARM-kirkwood-latest.tar.gz

The official PogoPlug write-up on the Arch website tells you to use bsdtar to extract this archive. This may or may not work for you. I had major issues unpacking that way due to a locale mismatch and the UTF-8 encoding being used for file paths within the compressed bundle. Extracting the file the old fashioned way, however, worked just fine, which is what I recommend you do:

tar -xzvf ArchLinuxARM-kirkwood-latest.tar.gz
sync
cd ..
umount alarm

Finally, download the U-Boot installer which will flash the ROM memory of the PogoPlug and force it to boot off the USB drive. Note that this step can brick the device (though I’ve done it a dozen times by now and never had any issues):

wget http://archlinuxarm.org/os/armv5te/boot/pogo_e02/pogo_e02.sh
chmod +x pogo_e02.sh
./pogo_e02.sh

Once this is done, reboot manually:

/sbin/reboot

If everything worked correctly the device should now boot into Arch. When the device reboots, log in with username root and password root.

Configuring Arch

First thing you will probably want to do is to update the system. You use the default Arch package manager pacman for that:

pacman -Syu

Next, you probably want to change the root password and add a new regular user for yourself (remember to add yourself to the wheel group):

passwd
useradd -m -g users -G wheel -s /bin/bash luke
passwd luke

The Kirkwood install is very bare bones and it does not ship with sudo so you will probably want to install it:

pacman -S sudo

Configure it with visudo and append the following to the end of the file:

%wheel      ALL=(ALL) ALL

This will give your regular user, and any other potential future members of the wheel group, access to the sudo command. At this point it may be a good idea to log out and log back in to make sure the user account you just created works, and that you can use su and sudo to elevate your privileges. If everything works, you may want to disable remote access to the root account like this:

passwd -l root

You will probably want to change the device’s hostname. On Arch this is done via the hostnamectl command:

hostnamectl set-hostname mypogoplug

If you’re on a Windows network and you want to be able to use the hostname instead of the IP address when you ssh in, you will need to install samba and give it a netbios name:

pacman -S samba
cp /etc/samba/smb.conf.default /etc/samba/smb.conf

Modify the smb.conf file to include:

workgroup = MYWORKGROUP
netbios name = mypogoplug

Now start samba and set it to start on boot:

systemctl start samba
systemctl enable samba

You should now be able to ssh into your plug from Windows machines using mypogoplug rather than the IP address. If you have Apple machines on the network and you want to be able to access the plug as mypogoplug.local, then you will need to install two additional packages: avahi and nss-mdns:

pacman -S avahi nss-mdns

Now open the /etc/nsswitch.conf file and change the following line:

hosts: files dns myhostname

into:

hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname

Finally, start the avahi-daemon and set it to run on boot:

systemctl start avahi-daemon
systemctl enable avahi-daemon

At this point your device should be discoverable on the network and more or less ready to go for whatever purpose you may want to use it.

Spectacular Computer Failures: Part 2 (Tue, 05 Aug 2014)
http://www.terminally-incoherent.com/blog/2014/08/05/spectacular-computer-failures-part-2/

My PC has died once again. This has happened before, but back then it turned out to be a video card failure. I was able to identify the problem by listening to the beep codes, ordered a new card and I was back in business. This time it seems like it is something a tad more serious.

I was talking about it this morning on Twitter so to avoid repeating myself over and over, let me embed my rant here:

From what I have gathered so far, the culprits can be:

  • The motherboard – when I replaced the video card last year I noticed one of the PCIe ports was not working at all, so it is likely that something on the board shorted out back then and I was simply really lucky to bypass it and get a whole other year out of the hardware. Replacing it wouldn’t make much sense, because I would also likely need to replace the PSU and the CPU, expand memory, and with a new PSU I could actually get a new video card, so I would essentially be building a brand new machine.
  • The PSU – I am told that a failing power supply can cause such symptoms. If the CPU and/or video card are not getting enough juice the system won’t even POST, but the chassis lights, fans and LEDs, which only require low voltage, may still be fully operational. Unfortunately I don’t have a spare PSU lying around to test this theory.
  • The CPU – someone said to take the fan and radiator off and re-apply thermal paste, which may work, unless it’s already too late and it fried itself. You technically ought to be able to see/smell when a CPU immolates itself, but other than eyeballing it I don’t really have a way of testing it.

I kinda hate these situations where it is really hard to tell what is going on. I honestly kinda hate working with hardware. I can tinker with software all day, but with hardware I always feel like I’m just throwing money at the problem, and I can never be sure if I:

  • Replaced the wrong part (and the real problem is actually elsewhere)
  • Replaced the right part but somehow attached it or configured it wrong
  • Just damaged a brand new component by doing something stupid

At this point I am seriously tempted not to be frugal and just irresponsibly splurge on something I can have shipped to my house within a week, that I will be able to take out of the box and start using right away:

Anyone ever had a similar issue? Any troubleshooting steps that I’m missing? Any recommendations?

Troubleshooting Update: Friday, Aug 8

As per some of the great suggestions here and on Twitter I did some troubleshooting. First I bought this PSU tester:

It did give me some interesting readings that I am not entirely sure how to interpret. The manual that came with the device is more than useless:

The LL symbol under the +12V2 is flashing. The one-page manual claims it either means no voltage, or voltage lower than the acceptable minimum. Greg on Twitter suggested my PSU might be missing that rail, so I checked:

The +12V2 rail is listed on the PSU spec block, so it does seem like the unit is supposed to have it. So am I correct in interpreting this as my PSU having developed a fault on that rail and needing to be replaced?

WordPress: Vanishing Categories (Wed, 09 Jul 2014)
http://www.terminally-incoherent.com/blog/2014/07/09/wordpress-vanishing-categories/

Roughly a month or so ago, something weird happened to this website. It was one of those weird and slightly scary glitches that make you question your own sanity, because they come out of nowhere and have seemingly no reasonable explanation. I was busy typing away at a new post when I noticed that all my tags and categories had simply vanished.

The content, mind you, was still there. All the posts and images were intact. They simply lost their category and tag associations. I had never actually seen anything like this before, so my first thought was “database corruption”. I’m not sure how the DB could get corrupted, but WordPress is famously finicky about these sorts of issues. It is not uncommon to see a badly written plugin touch one of the core WordPress tables in a bad way and make it freak out.

I promptly logged into the server and started running exploratory queries, but most of them came back looking very normal. The tag and category tables still had all of the entries in there, and posts were still correlated with them via foreign keys. The schemas of all the tables looked normal and I couldn’t detect any sign of plugin induced damage or even malicious tampering. All the information was in the database, but the UI refused to make the connections.

Luckily this is not the first time (and probably not the last time) I have seen WordPress go completely wonky. I have learned that running an active WordPress site without nightly backups is pretty much actively seeking out headaches. So after scratching my head for two hours, I decided to roll my VM back to the previous night’s snapshot and see if that fixed the issue. Before doing that I decided to check how much disk space I had left.

It turned out that I had literally zero bytes.

Just on a lark I blew away the contents of ~/temp and refreshed the site. The categories and tags magically returned, but only partially. It appears that in order to render the tags, categories and associated pages, WordPress needs to write a bunch of temp files to disk. I have no clue why this happens, but I’m assuming it is an optimization strategy of some sort, intended to limit the number of database reads per page view. However, if your disk is filled to the brim, it can’t do that. Therefore it fails silently and does its best to render the page without that additional information.

On one hand it is quite amazing that the inability to perform some internal core caching does not bring the entire site down. On the other hand it seems wrong to me that such an operation is necessary in the first place. But despite using WordPress for many years now, I have never actually felt compelled to peel back the hood and look at its database queries, so I guess I shouldn’t criticize something I don’t know all that much about.

Over the years I have gotten pretty good at cleaning accumulated temp file cruft off Linux machines, so it only took me a few minutes to identify the source of my disk bloat. It was the temp directory used by my WP-Cache plugin, which had somehow ballooned up to a few gigabytes. Apparently the old cache files were being discarded but never deleted. Blowing away the entire cache reduced my disk usage from 100% to 35%.
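
If you ever need to do the same hunt, a couple of one-liners cover most of it (the WordPress path below is just an example – adjust it to wherever your site actually lives):

df -h                                            # which partition is actually full?
du -xh --max-depth=1 / 2>/dev/null | sort -h     # biggest top-level directories
du -sh /srv/www/*/wp-content/cache 2>/dev/null   # the usual WordPress suspect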

To prevent this sort of thing happening again I wrote a tiny guard-dog script that checks my disk usage on a weekly basis and prints out a nice report that is then emailed to me via a cron job:

#!/bin/bash

# use colors if available
[ -f "$HOME/scripts/colors" ] && source $HOME/scripts/colors

command -v awk >/dev/null 2>&1 || { echo "awk not found. Please install it and try again"; exit 1; }
command -v du >/dev/null 2>&1 || { echo "du not found. Please install it and try again"; exit 1; }
command -v df >/dev/null 2>&1 || { echo "df not found. Please install it and try again"; exit 1; }

# grab the % usage of the primary partition (typically line 2, col 5 on df)
read USAGE <<< $( df -h | awk 'FNR == 2 { print $5 }' )

# Make red if usage is above 60
if [ ${USAGE%?} -lt 60 ]; then
    Color_On=$Color_Green
else
    Color_On=$Color_Red
fi

echo -e "\nDisk Usage Report"
echo -e "-----------------\n"

echo -e "Disk usage: \t $Color_On$USAGE$Color_Off\n"

df -h

echo -e "\nLog file spot check:\n"

# Adding output to temp file so we can sort it later
# -sh provides human readable summary
# -BM sets the block size to Megabytes
du -shBM /tmp 2>/dev/null >> /tmp/du$$
du -shBM /var/log 2>/dev/null >> /tmp/du$$
du -shBM /srv/www/*/logs 2>/dev/null >> /tmp/du$$
du -shBM /srv/www/*/*/*/wp-content/cache 2>/dev/null >> /tmp/du$$
du -shBM /srv/www/*/*/*/wp-content/uploads 2>/dev/null >> /tmp/du$$

sort -nr /tmp/du$$

rm /tmp/du$$

The full version of the script is available here. The colors script I’m importing up top is also on GitHub if you want to check it out.

If you ever notice tags or categories vanishing from your blog, don’t panic. It probably just means your disk is full.

Super Lazy Provisioning with Vagrant (Wed, 20 Nov 2013)
http://www.terminally-incoherent.com/blog/2013/11/20/super-lazy-provisioning-with-vagrant/

It has been almost a year since I posted anything even remotely related to my PHP series. I guess I should have suspected that SITS would be the death knell for the project. It seemed like a good, small-sized project, but I didn’t account for the fact that it was kinda boring and unexciting. Not to mention the fact that I was over-complicating it in the name of doing things the right way™. Maybe one day I will actually manage to finish it… Maybe. No promises though.

It just so happens that this series was the first time I talked about Vagrant – the super sleek virtual machine manager for developers. Since then I’ve been working on and off with various Vagrant boxes and experimenting with different ways of setting them up efficiently and hassle free. So I figured I might as well make a post about it and share a thing or two I learned.

Old Vagrant Logo

Btw, can you believe this used to be the official Vagrant logo?

The simplest way to get a vagrant box up and running is to follow the official manual and simply do something like:

vagrant init my_dev_server http://files.vagrantup.com/precise32.box
vagrant up

This gives you a naked, bare bones Ubuntu installation that you can ssh into and customize to your liking. By default it ships without things like Apache or PHP and that’s actually a good thing, because you can make it into whatever you want. Why would you need PHP on a RoR box for example?

The downside of this is that every time you want to start a new project with a squeaky clean environment you have to spend time setting it up. One way of avoiding this overhead is to set the environment up once, and then package it as a box file using the vagrant package command. This will produce a portable virtual machine that you can share with other people or upload somewhere. Next time you need to set up the same type of environment you just do:

vagrant init new_dev_server http://path/to/my_dev_server.box

This works extremely well and makes sharing environments with co-workers or team members painless. In fact it lets you give someone an exact copy of your development environment at any time without much hassle.
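
For reference, producing the box file in the first place is a single command run from the project directory (the output file name here is arbitrary):

vagrant package --output my_dev_server.box

This exports the current VirtualBox-backed machine into a portable .box file that others can point vagrant init at.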

But box files tend to be rather big. The more complex the setup, the bigger the file tends to be. You have to host them somewhere, or figure out an efficient way of sharing them (like Dropbox for example) because sure as hell you won’t be emailing them to anyone.

Not to mention that forking off copies of your working environment and making them “official” dev boxes for new team members is probably not the best idea. Chances are that your personal Vagrant box will diverge from the stock copy and get customized. You will likely import your .bashrc, aliases, your text editor config files, etc. You probably don’t want to distribute all that, so chances are you keep some “official” dev box pristine, installing only the bare bones dev dependencies.

What is the difference between your “deployment-ready” box and the stock one provided by the Vagrant team? In most cases the exact delta between the two is one 15-minute session of furiously typing apt-get commands. That’s honestly about all you need to do to set up a workable LAMP box.

So, here is a crazy idea: why not start with a bare bones stock box, and then just run a script that includes all those furiously typed commands? Congratulations my friend, you almost invented provisioning. Vagrant actually has full “balls to the wall” support for all kinds of hard core provisioning tools such as Puppet, Chef and Ansible. All of these tools are great and extremely useful, and if you are a sysadmin you might already be familiar with some or all of them. And if you are not, then learning them will definitely be beneficial to you.

That said, many of us are developers who are merely trying to set up a standardized development environment for a team of three to five people. Learning how to create a Puppet Manifest, Chef Cookbook or an Ansible Playbook might be overkill. Fortunately the latest versions of Vagrant have something for us too: shell provisioning.

The idea behind shell provisioning is exactly what we came up with a few paragraphs above: use a stock box, then run a script to set it up. It is simple, easy to configure and hard to screw up. How do you do it?

Well, let’s write the simplest Vagrantfile we can think of:

Vagrant.configure("2") do |config|
  config.vm.box = "lamp"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.synced_folder "./www", "/var/www"

  config.vm.provision "shell", path: "./setup.sh"
end

The first five lines are standard. Line 3 tells Vagrant where to find the base box, line 4 sets up port forwarding and line 5 shares the /var/www directory on the guest OS with the host. Line 7 is the important part, and it simply points to a shell script, with the path being relative to your Vagrantfile.

What do you put in the script? Anything you want really. For example if you just want to set up Apache and PHP then it can be dead simple:

#!/bin/bash
sudo apt-get update
sudo apt-get -y -q install apache2 php5

You put both the Vagrantfile and setup.sh in the same directory and run: vagrant up. The script will run immediately after the machine boots for the first time. It is that simple.
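
As of recent Vagrant versions, the provisioners only run automatically when the machine is first created. If you tweak setup.sh later, you can re-run it against the existing VM with either of these:

vagrant provision            # re-run provisioners on a running machine
vagrant reload --provision   # or restart the VM and provision again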

If you want to install a full LAMP stack you need to do something a tiny bit more complicated. Why? Because in their infinite wisdom, the Ubuntu package creators decided that the mysql-server package needs to use a dialog to ask for the root password. As you can imagine, this does not really work during an initial provisioning boot sequence. So you need to be sneaky and set up an unattended mysql install like this:

#!/bin/bash

# Set up unattended install of mysql-server
export DEBIAN_FRONTEND=noninteractive
sudo debconf-set-selections <<< 'mysql-server-5.1 mysql-server/root_password password toor'
sudo debconf-set-selections <<< 'mysql-server-5.1 mysql-server/root_password_again password toor'
sudo apt-get -y update 
sudo apt-get -y -q install apache2 php5 php5-mysql mysql-server mysql-client

What else can you do? Anything you want really. Anything you can think of, and put down in the form of a script should work, as long as it does not need any real-time user input. For example, if you wanted to enable the Apache mod_rewrite module, and install the Composer package manager you could add this to the bottom of our file:

# Enable mod_rewrite
sudo a2enmod rewrite 
sudo service apache2 restart

# install additional non-essential stuff
sudo apt-get -y -q install curl vim

# Install Composer
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer

One important note is that Vagrant will check for the existence of the provisioning file every time it starts. So if you decide you don't want to keep your provisioning script around on your working instance, you have to remember to remove it from the Vagrantfile. Or you can make the script clean up after itself automatically, which is exactly what I did in my personal LAMP setup.

You should note that the linked repository is almost empty. There isn't much there other than the Vagrantfile and setup.sh which is basically all you need. The idea is to distribute these two tiny text files, instead of a gigantic packaged box, because the end result is the same: a vm that is set up and ready to go.

Anyone who wants to create a clean environment with this setup simply has to clone it, and run it:

git clone https://github.com/maciakl/lamp.git
cd lamp
vagrant up

Feel free to use it for your own nefarious purposes. As usual, comments and bug reports are more than welcome.

Powershell (Mon, 14 Oct 2013)
http://www.terminally-incoherent.com/blog/2013/10/14/powershell/

As you may have heard already, my desktop is back in working condition. I decided to turn my hardware failure into a positive thing and take the time to upgrade my desktop. Not only did it get a new and shiny video card (nVidia GeForce GTX 760 – I kinda wanted the 780 but I would have had to replace my PSU) but also a secondary hard drive on which I installed Windows 7. My rig was running Vista up until that point, mostly due to the fact I was way too lazy to perform the upgrade. I figured that I would probably never get a better opportunity to do a clean OS install on this machine.

As a result, for the last week or so I have been running with a reduced set of tools on my machine. For example, the first few days I did not have Cygwin on board because I didn’t feel like going through the installation process, which requires you to manually pick packages from a long list (that, or install a bare bones setup that typically doesn’t even include ssh). So as I was installing my regular programming environments (Ruby, Python, Node) I needed a working command line client. I typically use Cygwin for most things, and then cmd.exe when I’m in a pinch. The problem with cmd is that it is very, very limited in what it can do, and scripting in it is cumbersome. Bash can be quirky as a thousand fucks, but compared to the cmd shell it is a walk in the park.

Windows does however have a proper shell these days. It is called Powershell and it was designed specifically to provide Windows admins with a Unix-style experience. Let’s face it – while Cygwin is great for getting shit done, it is not the best tool for Windows system administration. That’s not necessarily the fault of Cygwin, but rather a design philosophy difference between the platforms. POSIX systems are designed around flat files. On any Unix-like system config files are plain text, and so are log files. In fact even system and process info is accessible as flat files in the /proc directory (except on Macs – Macs don’t proc). As a result most of the Unix tools used by admins evolved towards being great at parsing, processing and editing text files. On Windows, on the other hand, almost all admin-relevant data is encapsulated in some sort of object store. Configuration is sealed in the Registry hives, system info is hiding in the WMI store and logs are stored as Event Viewer blobs. All these constructs are proprietary Microsoft inventions and can’t be parsed using standard POSIX tools. As a result a Unix transplant such as Cygwin can only ever be a second class citizen in the Windows universe.

Powershell was designed to be a cmd.exe replacement that works with Windows rather than battling against it. It provides easy access to all the underlying data structures that are foreign to Unix tools, and much too complex for its simplistic, DOS-derived predecessor. Not only that, but it also throws a bone to anyone coming from a Unix background by aliasing a lot of its internal functions to classic short POSIX commands. So, for example, if you type in ls at the prompt it will work exactly as expected.

Let me give you a quick example of just how neat Powershell can be. Whenever I build Win boxen for work purposes, I like to change their Windows name to their vendor service tag / serial number, because this helps a great deal with asset tracking automation and the like. How difficult is that to do in Powershell? Super easy. It can be done in 3 lines like this:

$serial = (gwmi Win32_SystemEnclosure).SerialNumber
(gwmi Win32_ComputerSystem).Rename($serial)
Restart-Computer -Force

So what is the problem with Powershell? How come people still bother with cmd.exe batch files or VBS deployment scripts? How come Powershell did not become the one and only Windows shell?

Well, in its infinite wisdom Microsoft decided to cripple Powershell long before it could ever become popular. By default it starts with a restricted execution policy. This means no scripts of any kind will ever be allowed to run on your machine. When you double click a .ps1 file it throws an error. When you try to invoke a script you wrote yourself from the command line, it will puke half a page of errors at you before it crashes and burns. To recap, Microsoft’s approach was to:

  1. Design a powerful new shell that can effortlessly hook into all the subsystems and data stores a Windows admin could ever need.
  2. Give all the commands long, descriptive names that are conducive to scripting (yielding clean and readable code), with short aliases such as cd for Set-Location.
  3. Install this shell on all modern Windows versions
  4. Register .ps1 as the executable Powershell script file type so that anyone can double-click these files to run scripts.
  5. Disable running of all scripts by default.

What the fuck?

Seriously, what the fuck happened there? What was the reasoning behind it? This is like creating image editing software and disabling editing of images by default.

Actually, let’s back up. Powershell is not new. It has existed since the Windows XP days, back when Microsoft’s approach to system security was more or less “lol, buy a Mac”. They of course rescinded that policy as soon as OSX came out and people figured out that Macs were actually a viable option. Today we exist in a world in which Windows actually ships with a somewhat sane security setup. This was not the case in the XP era, when you couldn’t let a machine touch any network until it was fully patched and bristling with at least 3 brands of security software. Back then the engineers saw the .ps1 file format and went “great, yet another malware delivery vector”. So they did the best thing they could: they plugged that hole before it became a serious security threat.

Naturally this ended up being only a half measure, because you can still easily trick people into running Powershell scripts by asking them to copy and paste code into the command line window. For example, this is how you install Chocolatey. Granted, this requires slightly more social engineering than just giving someone a script renamed to appear as porn.

Which is why we don’t really have Powershell based viruses out there. Powershell was forever enshrined as the scripting language created by sysadmins and for sysadmins to do admin stuff, but only if it does not involve users in any way. Why? Because to enable scripting you need admin privileges on the machine. Which is something sysadmins usually have, and the end users they support do not. Which means that you can’t just write a shell script and hand it to users, but you might be able to use one to deploy something across the domain.

If you are planning to use Powershell as a cmd.exe replacement, the first thing you need to do is to change the execution policy to enable scripting. To do that, you need to run Powershell as Admin and then execute the following command:

Set-ExecutionPolicy RemoteSigned
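
If you want to double check what policy you are actually running under, before or after the change, Get-ExecutionPolicy will tell you. A minimal sketch:

# Show the effective policy for the current session
Get-ExecutionPolicy

# Show the policy at every scope (MachinePolicy, UserPolicy, Process, CurrentUser, LocalMachine)
Get-ExecutionPolicy -List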

From that point on, scripts you write yourself will run as expected (scripts downloaded from the internet will still need to be signed or unblocked under RemoteSigned, which is rather the point). The second thing you probably want to do is to create a profile. In Powershell, profiles work exactly the same way Unix profiles do. It is a script that gets automatically executed whenever you launch a new shell. The default path to your profile is kept in the $profile automatic variable. That path is always there, but typically the file itself won’t exist unless you create it yourself. This can be easily done from the command line like this:

new-item -path $profile -itemtype file -force
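
If you would rather not clobber anything that might already be there, you can guard the creation with a Test-Path check first – a minimal sketch:

# Only create the profile file if one does not already exist
if (-not (Test-Path $profile)) {
    new-item -path $profile -itemtype file -force | Out-Null
}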

At that point you can open it up in your favorite text editor like so:

notepad $profile

Substitute your preferred editor for notepad, but only if it is in the path. What goes in the profile? One thing you should probably consider putting there is a fancier shell prompt. Other things could be, for example, aliases. Here is mine:

function prompt
{
    # Check for Administrator elevation
    $w=[System.Security.Principal.WindowsIdentity]::GetCurrent()
    $p=new-object System.Security.Principal.WindowsPrincipal($w)
    $a=[System.Security.Principal.WindowsBuiltInRole]::Administrator
    $isAdmin=$p.IsInRole($a)

    if ($isAdmin) 
    {
        Write-Host "ADMIN" -NoNewLine -ForegroundColor White -BackgroundColor Red
        Write-Host " " -NoNewLine
    }

    Write-Host $ENV:USERNAME -NoNewLine -ForegroundColor Green
    Write-Host " @ " -NoNewLine
    Write-Host $ENV:COMPUTERNAME -NoNewLine -ForegroundColor Yellow
    Write-Host ": " -NoNewLine
    Write-Host $(get-location) -NoNewLine -ForegroundColor Magenta

    Write-Host " >" -NoNewLine
    return " "
}

set-alias gvim "C:\Program Files (x86)\Vim\vim74\gvim.exe"
function g { gvim --remote-silent $args }
function gd { gvim --servername DEV $args }
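
With those in the profile, day to day use looks something like this (the file names below are obviously just examples, and the gvim path in the alias above is specific to my machine):

# Open a file in the already running gvim instance, or start one if none exists
g .\notes.txt

# Open a file in a separate gvim instance named DEV, handy for keeping contexts apart
gd .\deploy.ps1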

To change the command prompt you simply define a prompt function. The only caveat here is that it must return something that is not null and not an empty string. If you omit the return statement or return a blank string, Powershell will simply append PS> to whatever was there. Other than that you can echo out just about anything.

As you can see above, I’m using a Unix-style prompt with my username, host name and path. If Powershell is running with elevated rights, there is also a big, red, honking “ADMIN” tag up front to let me know I’m in the danger zone. Once you launch it, it looks more or less like this:

[Screenshot: My Powershell Prompt]

And yes, my desktop is Gandalf, my laptop is Balrog, and they are usually on the same desk. I do realize I’m probably courting disaster with this naming scheme.

Naturally, Powershell is not a replacement for good old Cygwin. For one, Cygwin provides me with native ssh and scp, whereas with Powershell I have to fall back on PuTTY and similar tools. That, and I don’t think I could ever wean myself off basic POSIX tools, especially since I tend to jump between Ubuntu, OSX and Win7 all the time.
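
For what it is worth, you can at least make PuTTY’s command line cousins feel a little more native by wrapping them in profile functions. A sketch, assuming plink.exe and pscp.exe are somewhere in your PATH:

# Thin wrappers so ssh/scp muscle memory mostly keeps working
function ssh { plink.exe $args }
function scp { pscp.exe $args }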

Do you use Powershell on any Windows boxes? Do you have any nifty tips or tricks? What is in your Powershell profile right now? Let me know in the comments.

]]>
http://www.terminally-incoherent.com/blog/2013/10/14/powershell/feed/ 7
You Need Backups http://www.terminally-incoherent.com/blog/2013/10/02/you-need-backups/ http://www.terminally-incoherent.com/blog/2013/10/02/you-need-backups/#comments Wed, 02 Oct 2013 14:05:30 +0000 http://www.terminally-incoherent.com/blog/?p=15663 Continue reading ]]> Quick question: is your data backed up? If I walked into your house right now with a garden hose, grabbed your computer and hurled it out the window, how much data would you lose? If it would be any more than a day’s worth of work, you are in big trouble. In fact I might already be in your house getting ready to do this. Before you ask, the garden hose is there just for misdirection.

My point is that if you do not have backups you will lose data. No one likes to lose data. In fact, most people absolutely hate it. The UN has recently designated the following as the official data loss position:

[Image: International Data Loss Position]

It is kinda like the fetal position, but caused by the trauma of data loss. If you see anyone looking like this, be gentle with them – they are in a state of shock.

If you don’t have backups, I would really like to know why. Do you think there is nothing on your computer worth backing up? Well, you are wrong. There is always something irrecoverable on your hard drive. Be it that one elusive bookmark you use once a year to do that one thing, or the one and only picture you have of your ex from before he or she transformed into a giant spider. Maybe it’s that old college essay you wrote that you won’t miss until your kid needs to write one. Or maybe it’s that children’s story you wrote about a boy wizard or a hungry girl archer that you hope to one day publish (sorry, you might be a bit late on this, but I hear you can totally sell fanfics now). There is always a thing on your computer that you will miss when it’s gone. Frequently you won’t even know what it is until it disappears forever.

Throughout my life, I have re-formatted and re-imaged a lot of computers for friends, family and coworkers. I have never, ever met a person who told me to wipe their hard drive without backing it up first and did not end up regretting that decision. Not a single soul out there has nothing worth preserving on their computer. If there are absolutely no files on your machine that you would like to keep, you might not be human. You are probably a robot, in which case I would implore you to stop spamming my internets (like seriously dude, what is with robotkind and the penis enlargement emails).

Or maybe you are one of these folks who think data loss is something that happens to other people. Sorry to break it to you but you are dead wrong.

You Need Backups

You need backups because hard drives are designed to fail. Magnetic hard drives are probably the most vulnerable component of your computer. They are one of the very few devices in it that have moving parts. An average drive consists of one or more spinning platters and a read/write head assembly that glides above them. A sudden shock may cause the head to crash into the surface of the platter at any time, typically resulting in catastrophic hardware failure. If the drive ever loses its hermetic seal, the dust buildup on the platter will at best cause read/write errors, and at worst result in a head crash and impressive scratches across the surface. And even if nothing ever goes wrong, the moving parts of the hard drive are under constant stress and will eventually wear out. The longer you use a drive, the more likely it is to fail.

[Image: Wear and Tear. Disks usually don’t catch on fire like this, but the chance your disk will spontaneously combust is higher than the chance it will never fail.]

Solid state drives are better, but not by much. They are by no means exempt from the laws of entropy, and the memory cells they use can only withstand a limited number of writes before they cease to function. If anything, solid state drives fail more reliably and predictably than magnetic drives. But fail they do.

Unfortunately, most people never see their storage medium wear out, because their computer suffers a catastrophic mechanical failure long before that. Especially laptops. Do you use a laptop? According to a recent poll I conducted, most people do. A laptop is a portable computer you lug around with you in a flimsy shoulder bag or a backpack. Do you know what happens when you take a computer outside?

[Image: Tigers Happen!]

If I had a quarter for every time a friend or relative of mine got their laptop stolen by a tiger, I would have… about a buck fifty in my pocket right now. Tigers are assholes.

They are not the only cause of data loss. A lot of people simply sit on their laptops. Some drive over them with their cars. Others spill gallons of coffee directly onto the keyboard (despite the positioning trick I teach them). There are so many ways to damage a laptop just by being careless or absent minded that you should never take its physical integrity for granted.

Remember this: any data that has not been backed up is only temporarily not lost.

You Need Automated Backups

I have met a few people in my life who claimed they do backups religiously. Back when I was a kid I used to believe in such fairy tales. But then I grew up and realized that an overwhelming number of people “do backups” by dragging and dropping files to an external drive whenever the fancy strikes them. That’s most definitely not backup. Do you know what that is called?

[Image: Makin’ Copies]

Yeah, I know most of you here are probably too young to understand this reference. Trust me though, it’s kinda funny. I’d tell you to google it, but I know you won’t.

The point is that backups need to be automated. They need to happen without your knowledge or intervention. They need to be a background process that kicks in regularly regardless of whether you are at your computer or dead in a ditch somewhere. Please don’t be dead in a ditch. Seriously, stay away from ditches in general. Nothing good has ever come out of ditching.

Any backups that are not automated are only temporarily not forgotten.

Procrastination is like a force of nature. You put off your backups once, twice, then three times. Next thing you know you are in your 40s with three children, your wife has run off with an Alligator and there are Daleks living in your attic. You don’t want to end up like this. But if you do, just use the “Bad Wolf” cheat code to summon The Doctor. That’s for the Daleks though, not the backups. On that end you are royally screwed no matter what.

You Need Offsite Backups

Did I mention that backing up to an external hard drive is not sufficient? Well, it is not. Why? Let me use an animated gif to demonstrate this:

[Animated GIF: In case of nuclear strike, put hard drives in the fridge.]

Right, I know what you are going to say. Nuclear explosions are extremely easy to survive by means of a refrigerator, as shown by that one Indiana Jones movie. You do have to keep in mind however that in a time of emergency you have to figure out how to stick your entire family into a tiny kitchen appliance, and you may not have time to scurry around collecting external hard drives from every room in your house.

Statistically speaking, the external media people use for backups tend to be kept in the same room and/or carried in the same bag as the computer they are backing up. This means that any disaster, calamity or inter-dimensional rift that will affect the computer, will also likely destroy the backup media.

You can have the best, most regular and thorough backup scheme in the world, but if a tiger jumps out of the bushes and steals your laptop bag, and that bag contains all your backup disks, then you are back to square one. It’s like having no backup plan at all.

Someone may try to advocate backup disk rotation: you could, for example, always have the current disk in your laptop bag, yesterday’s disk in your house, and a disk from 3 days ago in your car. You could even put a backup disk in a safety deposit box located in a different zip code once a week. But that won’t work. You know why? Because such rotation is manual.

You will eventually get tired of it, get bored, procrastinate, forget and BOOM! Daleks! If you can’t automate it, it’s not backups. It is a willpower exercise and not much else. And one you are positioned to lose every time.

You Need Onsite Backups

You may think to yourself: I got it. I will go out and get me some Cloud Backup. I will get Mozy! I will get Crashplan! I will get Carbonite! And then I will never have to worry about backups again.

Wrong! Cloud backups are only useful when you can get to them. Why wouldn’t you be able to get to your cloud backups?

[Image: One of your Internets was deleted. Please insert coins to continue.]

If you live in the US like me, you have probably noticed that the internet here is complete and utter shit. We are currently behind Antarctica in terms of average broadband speeds. There are fucking penguins out there walking around with gigabit fiber cables hooked up directly to their cloacas (that’s where you install a router on a penguin – I don’t make these rules, geez), whereas over 10% of Americans still can’t get anything better than 56Kbps dialup in their areas.

I am lucky enough to live in a rather densely populated suburban area, so I have a choice between using Comcast’s “best effort” connection and not using Comcast. Best effort of course means that on any given day they will make their best effort to ensure that you get some internet connectivity at some point during the day, but no promises. Also, a kilobyte per second costs about as much as seven gallons of blood plasma, but we make do with what we have.

What I’m trying to say is that networks are unreliable. You are never guaranteed internet access. In emergency situations the Internet always goes down and leaves you stranded. Don’t make Comcast or Verizon your life boat. These guys are barely capable of streaming standard def Youtube clips without buffering at 4am, when 90% of their customers are asleep. If you make them your life line, you are going to have a bad time.

Cloud backups are great when they work, but it is all too easy to get cut off from them for extended periods of time. Especially if you have deadlines to keep.

Also, sometimes shit like this happens:

[Image: Your data is temporarily evidence. Please try again in 20 years.]

This wasn’t really a problem a few years ago, but currently any US based (or US friendly) hosting service can be raided, dismantled and sequestered as evidence in a criminal investigation of some sort. This does not necessarily need to be related to piracy. Terrorism and journalism are also potential causes for closure. Look at what happened to the Lavabit email service: it was effectively forced to shut down after the feds, who suspected it was being used by a whistleblower, demanded access to its users’ data.

Granted, some services are more susceptible to this sort of closure than others. Your best bet is to pick an NSA friendly service with a thick PRISM pipe back to Washington. For me that’s actually an additional layer of security. If all else fails, you can always try to Freedom of Information Act your lost data from the government. Though I’m told this doesn’t always work, since officially we are not supposed to know about it. It has something to do with snow men… I don’t know. I don’t really pay attention.

In either case, while the Cloud is super convenient, it can be volatile. You should never rely solely on remote backups. Having both local and remote copies of your data is the only way to ensure your information is safe from tigers, nukes, Comcast and government raids on data centers alike.

In Conclusion

Back. Your. Shit. Up.

Make sure it is automated. Data that is not backed up is as good as lost. Backups that are not automated are as good as forgotten. Data that is only backed up locally will go down with your computer. Data that is only backed up remotely can be disconnected or deleted at a whim. If you want to have a slim chance of preserving your data, you must have it in as many places as possible. Back up locally and remotely at the same time. Have more than one backup plan.

Spread the word.

]]>
http://www.terminally-incoherent.com/blog/2013/10/02/you-need-backups/feed/ 10