Unit Testing Sinatra Apps
http://www.terminally-incoherent.com/blog/2015/02/24/unit-testing-sinatra-apps/
Tue, 24 Feb 2015 21:14:30 +0000

Tests are the safety harness for your code. They are not magical, and they will not prevent you from missing bugs you did not anticipate. They do, however, automate the boring chore of making sure various edge cases and special conditions do not blow up your code. As such, they help catch bugs you could have totally anticipated, but did not bother checking for because of reasons.

Manually testing web apps is a nightmare, because it forces you to pull up a web browser, refresh pages, make sure you clear your cache between tests, etc. No one wants to fiddle with the browser all day, so automating basic testing tasks will not only save time, but also greatly improve your workflow.

Unfortunately testing web apps can be a bit tricky sometimes. They are designed to be accessed over the network and to render in a web browser, so they require your test framework to do network-like and browser-like things to emulate those conditions. While unit tests for classes can be easily mocked, pretending to be a web browser is definitely non-trivial. When I work with PHP I usually use the excellent Codeception toolset to do acceptance testing. When I work in Node or just build front end stuff, I typically use Grunt with PhantomJS.

When working with the Sinatra framework, most unit/acceptance style testing can be easily done by using the minitest and rack-test gems. Let me show you how.

Let’s set up a simple Sinatra app. Our folder structure ought to be something like this:

myapp/
├── app.rb
├── Gemfile
├── public/
│   └── style.css
├── Rakefile
├── tests/
│   └── app_test.rb
└── views/
    ├── layout.erb
    └── main.erb

When setting up your dependencies in your Gemfile, it is a good idea to isolate the test-related gems from the actual runtime dependencies. You can do this by using the group keyword:

source 'https://rubygems.org'
gem 'sinatra'

group :test do
  gem 'minitest'
  gem 'rack-test'
end

When deploying to production you can exclude any group using the --without argument:

bundle install --without test

If you are deploying to Heroku, they exclude test and development groups by default, so you don’t even have to worry yourself about it.

Here is a simple Sinatra app:

require 'bundler'
require 'bundler/setup'
require 'sinatra'

get '/' do
  erb :main
end

You know how this works, right? The above will render the contents of main.erb and envelop them in layout.erb, which is the auto-magical default template. For the time being let's assume that the contents of the former are simply the words “Hello World” and that the latter provides a basic HTML structure.

To test this application we need to create a test file somewhere (I put them in the tests/ directory) and inside create a class derived from Minitest::Test and include the Rack::Test::Methods mixin.

Mixins are a wonderful Ruby feature that let you declare a module and then use the include keyword to inject its methods into a class. These methods become “mixed in” and act as if they were regular instance methods. It’s a little bit like multiple inheritance, but not really.
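For instance, here is a minimal, generic illustration of a mixin (nothing to do with rack-test; the module and class names are made up):

```ruby
# A module whose methods get "mixed in" to any class that includes it.
module Greeter
  def greet
    "Hello, #{name}!"   # relies on the host class providing #name
  end
end

class Person
  include Greeter       # Greeter#greet now acts like an instance method

  attr_reader :name

  def initialize(name)
    @name = name
  end
end

puts Person.new('World').greet  # => Hello, World!
```

Rack::Test::Methods works the same way: including it grafts a bunch of request helpers onto your test class.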

In the example below, this gives us access to standard Rack/Sinatra mock request methods such as get and post.

ENV['RACK_ENV'] = 'test'

require 'minitest/autorun'
require 'rack/test'
require_relative '../app'

class MainAppTest < Minitest::Test
  include Rack::Test::Methods 

  def app
    Sinatra::Application
  end

  def test_displays_main_page
    get '/'
    assert last_response.ok?
    assert last_response.body.include?('Hello World')
  end
end

Once you invoke a mock request method such as get (as in test_displays_main_page above), the last_request and last_response objects become available for making assertions. The last_response object is an instance of Rack::MockResponse, which inherits from Rack::Response and contains all the members and methods you could expect. For example, to check whether or not my app actually displayed "Hello World" I simply had to test whether that string was somewhere inside last_response.body.

To run this test you simply do:

ruby tests/app_test.rb

The minitest gem takes care of all the boring details. We just run the test and see the results.

Let me give you another example. Here is a bunch of tests I wrote when working on the Minion Academy web service. My goal here was to make sure my routing rules worked correctly, that the requested pages returned valid JSON objects with the right number of nodes, and that no JSON would be generated if the URL was formatted wrong:

 
  def test_json_with_1
    get '/json/1'
    assert last_response.ok?
    response = JSON.parse(last_response.body)
    assert_equal 1, response.count
  end

  def test_json_with_1_trailing_slash
    get '/json/1/'
    assert last_response.ok?
    response = JSON.parse(last_response.body)
    assert_equal 1, response.count
  end

  def test_json_with_0
    get '/json/0'
    assert last_response.ok?
    response = JSON.parse(last_response.body)
    assert_equal 0, response.count
  end

  def test_json_with_100
    get '/json/100'
    assert last_response.ok?
    response = JSON.parse(last_response.body)
    assert_equal 50, response.count
  end
  
  def test_json_with_alphanumeric
    get '/json/abcd'
    assert_equal 404, last_response.status
  end

Note that those are not all of the tests I have written for this particular bit, but merely a representative sample.
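Judging by the assertions above (100 requested, 50 returned; 0 allowed), the service caps the item count. A hypothetical sketch of that clamping logic in plain Ruby follows; the method and constant names are made up, and the real Minion Academy handler may well do this differently:

```ruby
# Clamp a requested item count into the 0..50 range the tests expect.
# MAX_ITEMS and clamp_count are illustrative names, not the app's actual code.
MAX_ITEMS = 50

def clamp_count(raw)
  [[raw.to_i, 0].max, MAX_ITEMS].min
end

clamp_count('1')    # => 1
clamp_count('100')  # => 50
```

Non-numeric input like 'abcd' never reaches this logic in the tests above; the route simply 404s.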

The neat thing is that these tests will seamlessly integrate with other unit tests you write against regular, non-Sinatra, non-Rack related classes. You can simply dump all the test files in the tests/ directory and then add the following to your Rakefile:

require 'rake/testtask'

Rake::TestTask.new do |t|
  t.pattern = 'tests/*_test.rb'
end

This will add a test task you can run at any time that will iterate through all the files matching t.pattern and run them in a sequence.
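This is not the actual TestTask implementation, but conceptually the task boils down to a glob plus a load loop, something like:

```ruby
# Conceptually: expand the pattern, load every match;
# minitest/autorun then runs the loaded tests when the process exits.
pattern = 'tests/*_test.rb'

Dir.glob(pattern).sort.each do |file|
  require File.expand_path(file)
end

# The pattern matches files like tests/app_test.rb but not tests/helper.rb:
File.fnmatch(pattern, 'tests/app_test.rb')  # => true
File.fnmatch(pattern, 'tests/helper.rb')    # => false
```

Which is why the `_test.rb` naming convention matters: helper files that don't match the pattern are ignored.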

If you're like me and you don't feel content unless successful tests are rendered in green, and errors in bright red, I recommend using the purdytest gem, which colorizes the minitest output. There are many test report filters out there that make the output prettier, but Purdytest is probably the simplest and least obtrusive. You simply require it at the top of your test files and then forget all about it.

Building a Jekyll Site
http://www.terminally-incoherent.com/blog/2014/05/05/building-a-jekyll-site/
Mon, 05 May 2014 14:03:00 +0000

Back in 2009 I got a brilliant idea into my head: I was going to build a site on top of Joomla. Why? I still don’t exactly understand my own thought process that led me to that decision. I think it had something to do with the fact that it was branded as a content management system, and I had some content I wanted to manage. Perhaps it was because it looked serious and enterprisey and I wanted to try something different, hoping it would be less of a pain in the ass than WordPress. Or perhaps it was a bout of temporary insanity.

Don’t get me wrong, Joomla is a wonderful CMS with a billion features that will let you do just about anything, but typically in the least convenient and most convoluted manner. I’d be tempted to say that Joomla engineers never actually tested their software on live humans before pushing it out into the public but I suspect that’s wrong. I suspect they have done a ton of usability testing, and purposefully picked the least friendly and the most annoying user experience. Because, fuck you for using Joomla.

Granted, they might have made some great improvements since 2009, but I wouldn’t know, because upon slapping it on the server and vomiting my content all over it, I decided I never actually wanted to touch anything on the admin side of it ever again. On day two of my adventure with Joomla I decided that shit needed to go, but since I had just manually migrated (read: copied and pasted) like dozens of kilobytes of content into it, I couldn’t be bothered. So I took out my scheduling book, penciled the site upgrade in for “when I get around to it” and then threw the book out the window, because scheduling things makes me sad and hungry, which is why I never do it.

Fast forward to 2014 and I was still happily “getting around to it”, when my host sent me a nasty-gram saying my Joomla is literally stinking up their data center. I had no clue what they were on about, since the installation was pristine clean, vintage 2009 build in a virgin state of never having been patched, updated or maintained. But since they threatened to shut all of my things down unless I get that shit off their server I decided it was time. I got around to it.

First step was straight up deleting Joomla. Second step was picking the right toolkit for the job. I briefly considered WordPress, but that’s a whole other can of worms, for different reasons. WordPress is actually pretty great as long as no one is reading your blog. As soon as you get readers, the fame goes to its head and it decides it owns all the memory and all of the CPU time on the server, and demands a monthly sacrifice of additional Rams as your user base grows. It is literally the bane of shared servers, and most WordPress “optimization” guides start by telling you to abandon all that you know and run like seventeen layers of load-balanced proxy servers in front of it. Not that Joomla performance is any better, but that site had no readers, so it was usable. But since I was getting around to updating it, one of the goals was making it more robust and scalable, rather than trading a nightmarish clusterfuck of crap for a moderately unpleasant pile of excrement. I figured I might as well go for broke and trade it for something good: like a mound of fragrant poop or something.

Since the site was on a shared host with a Quintillion users, and I didn’t feel like paying for and setting up yet another droplet I opted for a statically generated site. I tried a few static site generators and Jekyll is the one that did not make me want to punch the wall in the face (if it had a face) so I opted for that. Plus, I already had some basic layout done, so I figured I might as well use it.

The huge benefit of having a static site running on a shared host is that, in theory, you will never have to touch it other than to update the content. The host will take care of updating the underlying OS and web server, and since you have no actual “code” running on your end, there is nothing there to break. Once you put it up, it can run forever without framework or platform upgrades. It is a low effort way to run a site.

As far as the front end went, I knew I wanted to work with HTML5 and that I wanted a grid-based system, because making floating sidebars is a pain in the ass. So I whipped out Bower and installed Bootstrap 3.

I know what you are going to say: fuck Bootstrap, and I agree. Bootstrap is terrible, awful and overused. In fact, I think the authors of the project realized how much of a crutch it is, which is why they introduced a conflict with Google Custom Search in the latest version. Bootstrap literally breaks Google’s dynamic search box code, because fuck you for using Bootstrap.

But, it’s easy, clean and I love it, so I bowered it. Bootstrap consumes jQuery as a dependency so I got that for free. This is another useful framework people love to shit all over (though for good reasons) but since I already got it I figured I might as well use it for… Something.

One crappy thing about Bower is that when it fetches dependencies it puts all of them in the bower_components directory, including useless garbage such as Readme files, build files, etc. Some people package their distributable code for Bower, but most projects don’t give a shit and just use their main repository, giving you un-compressed, un-minified files along with all the associated miscellaneous garbage. I loathe having Readme files showing up on a deployment server, so I decided to manually minify and concatenate my scripts and stylesheet with the Bootstrap ones. For that I needed Grunt. For Grunt I needed Node. And so it goes. It is funny how one decision cascades into a dependency tree.

Runtime Dependencies

Pretty much at the onset, I decided I will be using the following:

Only the first item on the list is something you would want to install locally. The rest can be run off a CDN pretty reliably. Actually, you could run jQuery off a CDN too, but I decided not to. This makes your bower.json incredibly simple:

{
  "name": "My Site",
  "version": "0.0.0",
  "authors": [
    "Luke Maciak "
  ],
  "description": "Blah blah blah, website",
  "license": "MIT",
  "homepage": "http://example.com",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "bootstrap": "~3.1.1"
  }
}

This is the nice thing about static sites. Your production server does not need a lot of setup – you just copy the files over and you’re done. All the heavy lifting is done at development time.

Dev Dependencies

Here our list is longer. I need things to build the code, manage the dependencies, and some way of deploying it all to a server in a non-annoying way.

  1. Ruby and Gems
  2. Jekyll
  3. Node and NPM
  4. Grunt
  5. rsync for moving the files around between servers

I already had Ruby, Jekyll, Node and Grunt running on my system because… well, why wouldn’t you? I mean, that’s sort of the basic stuff you install on the first day when you get a new computer. So all I had to do was steal a package.json file from another project and slap it in my directory:

{
  "author": "Luke Maciak",
  "name": "My Website",
  "version": "1.0.0",
  "dependencies": {},
  "devDependencies": {
    "grunt-html-validation": "~0.1.6",
    "grunt-contrib-watch": "~0.5.3",
    "grunt-contrib-jshint": "~0.1.1",
    "grunt-contrib-uglify": "~0.1.1",
    "grunt-contrib-concat": "~0.1.3",
    "grunt-contrib-cssmin": "~0.5.0",
    "grunt-contrib-csslint": "~0.1.2",
    "grunt-contrib-copy": "~0.4.1",
    "grunt-shell": "^0.7.0"
  }
}

Once it was in place, fetching all the grunt dependencies was a matter of running npm install. Now comes the hard part: setting up your Gruntfile.

Basic Setup

For the sake of completeness, here is my complete Gruntfile:

/*global module:false*/
module.exports = function(grunt) {

    // Project configuration.
    grunt.initConfig({
        validation: {
            options: {
                reset: grunt.option('reset') || true,
            },
            files: "_site/**/!(google*).html"
        },
    watch: {
        files: "",
        tasks: 'validate'
    },
    jshint: {
      files: [  'Gruntfile.js', 
                'scripts.js'
             ],
      options: {
        white: false,
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        boss: true,
        eqnull: true,
        smarttabs: true,
        browser: true,
        globals: {
            $: false,
            jQuery: false,

            // Underscore.js
            _: false,

            // Chrome console
            console: false,

          }
      },
    },
    csslint: {
        lint: {
            options: {
               'ids': false,
               'box-sizing': false
            },
            src: ['style.css']
        }
    },
    cssmin: {
        compress: {
            files: {
                'style.tmp.min.css': ['style.css'],
            }
        }
    },
    concat: {
        options: {
            separator: ';' + grunt.util.linefeed,
            stripBanners: true,
        },
        js: {
            src: [
                    'bower_components/jquery/dist/jquery.min.js',
                    'bower_components/bootstrap/dist/js/bootstrap.min.js',
                    'scripts.tmp.min.js'
            ],
            dest: 'resources/js/scripts.min.js'
        },
        css: {
            src: [
                    'bower_components/bootstrap/dist/css/bootstrap.min.css',
                    'style.tmp.min.css'
            ],
            dest: 'resources/css/style.min.css'
        }
    },
    copy: {
        main: {
            files: [  
                {   expand: true, 
                    flatten: true,
                    src: 'bower_components/bootstrap/dist/fonts/*', 
                    dest: 'resources/fonts', 
                    filter: 'isFile'
                }
            ]
        },
    },
    uglify : {
        main: {
                 src: ['scripts.js'],
                 dest: 'scripts.tmp.min.js'
             }
    },
    shell: {
        jekyll: {
            command: 'jekyll build'
        }
    }
    });

    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-html-validation');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-jshint');
    grunt.loadNpmTasks('grunt-contrib-concat');
    grunt.loadNpmTasks('grunt-contrib-cssmin');
    grunt.loadNpmTasks('grunt-contrib-csslint');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-shell');

    grunt.registerTask('default', ['jshint', 'uglify', 'csslint', 'cssmin', 'copy', 'concat']);
    grunt.registerTask('all', ['default', 'shell', 'validation']);
};

It is a huge, monolithic pile of configuration, so let me explain what I’m trying to accomplish here. In an ideal world, you want to have a single CSS file linked at the top of your page, and a single JavaScript file linked at the bottom. If you use Bower to handle dependencies (as you should) this is not possible, because every little thing you install gets its own folder inside bower_components. So your first task is to pick out the important parts from each of those folders and smush them together into those two files. This is what is happening here.

For example, the cssmin task runs my custom CSS rules (style.css) through a minifier (using grunt-contrib-cssmin) that removes all the spaces and makes the file super-ugly for the purpose of loading faster. The uglify task does the exact same thing to my custom JavaScript code in scripts.js via grunt-contrib-uglify. So I end up with two very ugly files: style.tmp.min.css and scripts.tmp.min.js. All of these files will be excluded from Jekyll compilation via the _config.yml exclude list.

Once I have those, I use the grunt-contrib-concat plugin to concatenate my custom stylesheets and scripts with those provided by Bootstrap; that is what the concat section does. I end up with my two ideal, production-ready files: style.min.css and scripts.min.js. The new files are placed in the resources/ directory.

A side effect of re-locating the Bootstrap script and CSS is that you break the glyphicons. The CSS files have relative paths to the web-font included in the bootstrap package, so if you want it to work it has to be in the fonts/ directory relative to the CSS location. This is what the copy section is about. I’m taking all the files from bower_components/bootstrap/dist/fonts/ and placing them in resources/fonts/ like this:

resources/
├── css
│   └── style.min.css
├── fonts
│   ├── glyphicons-halflings-regular.eot
│   ├── glyphicons-halflings-regular.svg
│   ├── glyphicons-halflings-regular.ttf
│   └── glyphicons-halflings-regular.woff
└── js
    └── scripts.min.js

The rest of the file is mostly concerned with linting. I check my CSS code with grunt-contrib-csslint and my JavaScript with grunt-contrib-jshint, which is fairly standard. In both cases I’m relaxing the linting rules a little bit to preserve my own sanity, and to get around ugly hacks. For example, 'box-sizing': false in the csslint options is there to allow me to fix the aforementioned CSS that completely breaks Google’s Custom Search functionality. Similarly, in the jshint options I’m declaring $ as a global, because JSHint does not understand jQuery and freaks out for no reason.

I’m also using the excellent grunt-html-validation plugin to make sure my HTML is valid.

Finally, here is my _config.yml file for Jekyll. It is mostly unremarkable, save for the exclusion list, where I prevent Jekyll from copying all of the useless files into production.

name: My Site
description: blah blah blah
author: Luke

category_dir: /
url: http://example.com

markdown: rdiscount
permalink: pretty
paginate: 5

exclude: [
            package.json, 
            bower.json, 
            grunt.js,
            Gruntfile.js, 
            node_modules, 
            bower_components,
            validation-report.json, 
            validation-status.json,
            scripts.js, 
            scripts.tmp.min.js,
            style.css,
            style.tmp.min.css,
            lgoo.psd,
            Makefile,
            exclude.rsync,
            README.markdown
         ]

Grunt takes care of compiling and linting all the front end code, while Jekyll builds the site from an assortment of html and markdown files. I already wrote a lengthy article about setting up a basic Jekyll site before, so I won’t bore you with the details here. The basic skeleton looks like this though:

.
├── _config.yml
├── _drafts/
├── _layouts/
│   ├── category_index.html
│   ├── default.html
│   ├── page.html
│   └── post.html
├── _plugins/
│   ├── generate_categories.rb
│   └── generate_sitemap.rb
├── bower.json
├── bower_components/
│   ├── bootstrap/
│   └── jquery/
├── exclude.rsync
├── favicon.ico
├── feed.xml
├── Gruntfile.js
├── imag/
├── index.html
├── Makefile
├── node_modules/
├── package.json
├── README.markdown
├── resources/
│   ├── css/
│   ├── fonts/
│   └── js/
├── robots.txt
├── _site/
├── scripts.js
└── style.css

The bower_components directory as well as the “naked” JavaScript and CSS files are excluded from compilation, in favor of the resources/ directory, which contains the files generated by Grunt. Other than that, this is a fairly standard structure.

Deployment

As I said before, I decided to use rsync to deploy the website. There are many ways to deploy a Jekyll website, but this is probably the most efficient tool for the job. In an ideal world, you compile a Jekyll site, and then rsync compares your _site directory to what is on the server and only copies/deletes files that are different. This means your first upload will be massive, but from that point on you are just going to transfer the deltas.

There is a little caveat here though: by default rsync compares files based on timestamps. This is a problem, because Jekyll clobbers the _site directory every time you build your site. This means that every file inside of it will look brand spanking new to rsync, even if it has not technically changed. This downgrades our delta-sync tool to a crude file uploader that is no more sophisticated than an rm -rf command followed by scp _site/* host:~/site.

Fortunately, I found an excellent tip by Nathan Grigg which suggests telling rsync to use checksums instead of timestamps. By force of habit, when setting up an rsync script most of us might be tempted to write something like:

rsync -az --delete _site/* user@host:~/site

This is the traditional and wrong way of doing this. What Nathan suggests instead is:

rsync -crz --delete _site/* user@host:~/site

Or perhaps, more descriptively:

rsync --compress --recursive --checksum --delete _site/* luke@myhost:~/site/

I actually like to use the long arguments when I write scripts, because years down the road they will make it easy to understand what is going on without looking up cryptic letter assignments in the man pages.
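To make the timestamp-versus-checksum distinction concrete, here is a toy illustration in Ruby (not part of the deployment script): two files with identical contents but different modification times look changed to a timestamp comparison, while a content checksum sees right through it.

```ruby
require 'digest'
require 'tempfile'

# Two files with identical contents...
a = Tempfile.new('site')
a.write('<h1>Hello</h1>')
a.flush

b = Tempfile.new('site')
b.write('<h1>Hello</h1>')
b.flush
File.utime(Time.now, Time.now - 3600, b.path)  # ...but b looks an hour older

# A timestamp comparison (rsync's default) flags the file as changed:
File.mtime(a.path) == File.mtime(b.path)                    # => false

# A content checksum (what --checksum switches to) does not:
Digest::SHA256.file(a.path) == Digest::SHA256.file(b.path)  # => true
```

This is exactly the situation after every `jekyll build`: same bytes, fresh timestamps.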

To simplify deployment I wrote myself a little Makefile like this:

.PHONY: check deploy tunnel-deploy build

default: check build

check:
	@command -v ssh >/dev/null 2>&1 || { echo "ERROR: please install ssh"; exit 1; }
	@command -v rsync >/dev/null 2>&1 || { echo "ERROR: please install rsync"; exit 1; }
	@command -v grunt >/dev/null 2>&1 || { echo "ERROR: please install grunt"; exit 1; }
	@command -v jekyll >/dev/null 2>&1 || { echo "ERROR: please install jekyll"; exit 1; }
	@[ -d "_site" ] || { echo "ERROR: Missing the _site folder."; exit 1; }

build: check
	grunt
	jekyll build

deploy: check build
	rsync --compress --recursive --checksum --delete --itemize-changes --exclude-from exclude.rsync _site/* luke@myhost:~/site/
	ssh luke@myhost 'chmod -R 755 ~/site'

Tagging

Jekyll kinda supports tags and categories, but those are still rather underdeveloped features. When I build Jekyll sites I like to use Dave Perret’s plugin to get nice category archive pages. It also injects the category names into the “pretty” permalinks adding taxonomy to your url structure.

I have a very specific idea about how tags and categories should be handled and how they differ. For me, categories group posts of a certain broad type, while tags are used to indicate specific topics/keywords that cut across the categories. So, for example, you could have a category named “videos” and a bunch of tags like “interview”, “trailer”, etc. That said, the tag “interview” is not unique to the “videos” category and could also be used to tag posts in other categories, like “pictures” for example. I like to have one category per post, but multiple tags. These are not hard rules, and most systems out there allow for more liberal use of both concepts. Dave’s plugin actually allows for multiple categories per post, but I typically stick to one. It is a personal preference of mine.

Categories are big, broad and there are few of them. I will often list them on the sidebar and use them for navigation. Tags are different – they are messier. So I opted to have a single page that would essentially be a table of contents by tag.
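To make the goal concrete, here is the index I’m after sketched in plain Ruby (the post data is made up): each post carries one category and multiple tags, and the index inverts that into posts-grouped-by-tag, sorted alphabetically.

```ruby
# Hypothetical posts: one category each, multiple tags that cut across them.
posts = [
  { title: 'Skaven unboxing', category: 'videos',   tags: %w[warhammer trailer] },
  { title: 'Con interview',   category: 'videos',   tags: %w[interview] },
  { title: 'Set photos',      category: 'pictures', tags: %w[interview] }
]

# Invert the post list into a tag => [titles] index, sorted by tag name.
tag_index = posts
  .flat_map { |p| p[:tags].map { |tag| [tag, p[:title]] } }
  .group_by(&:first)
  .transform_values { |pairs| pairs.map(&:last) }
  .sort.to_h

tag_index.keys  # => ["interview", "trailer", "warhammer"]
```

Note how “interview” collects posts from two different categories, which is the whole point of tags.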

Michael Lanyon did an excellent writeup on how to alphabetize your tag list using nothing but Liquid tags in your template.

{% capture site_tags %}{% for tag in site.tags %}{{ tag | first }}{% unless forloop.last %},{% endunless %}{% endfor %}{% endcapture %}
{% assign tag_words = site_tags | split:',' | sort %}

Table of Contents:

Posts for each tag:

{% for item in (0..site.tags.size) %}{% unless forloop.last %}
  {% capture this_word %}{{ tag_words[item] | strip_newlines }}{% endcapture %}

  <div class="tag-list" id="{{ this_word }}">
    <h3>{{ this_word }}</h3>
    <ul>
      {% for post in site.tags[this_word] %}{% if post.title != null %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
      {% endif %}{% endfor %}
    </ul>
  </div>
{% endunless %}{% endfor %}

This works very nicely, generating an alphabetized list that is easy to search through. You can link to the list for an individual tag using a hashmark in the URL: http://example.com/tags/#tagname will take you to that section. That said, it can be a bit confusing for the user to get dumped into the middle of a huge list of unrelated things. So I built upon that idea and added some JavaScript to the mix.

I figured I already have a jQuery dependence, so I might as well use it:

var MySite = MySite || {};

MySite.showSingleTag = function showSingleTag() {
    $(".tag-list").hide();
    $(window.location.hash).show();
};


$( document ).ready(function() {
    if( window.location.hash )
    {
        MySite.showSingleTag();
    }

    $(window).on('hashchange', function() {
        MySite.showSingleTag();

        // scroll to the element
        $('html,body').animate({scrollTop: 
            $(window.location.hash).offset().top},0);
    });       
});

This script detects if there is a hash in the URL, and if so hides all the entries except the ones related to the relevant tag. I left the list of tags alone, because I figured the user might want to explore what else is available. Because of this, a little bit of additional logic was added. If you click on a hash-link the browser page won’t reload, and thus my hashmark check won’t trigger. So I also listen for the hashchange event, and when it fires I re-do the hiding and then forcefully scroll the user’s viewport back to the tag list.

TL;DR

I have successfully switched from Joomla to Jekyll and it’s great. I’m totally not going to regret this choice 5 years down the road, right? I mean, what could go wrong, other than everything. Actually, I’m already beginning to see cracks forming in this master plan. You see, the site has a lot of images. They are mostly low to medium resolution screen-shots, but there are a lot of them, and there will be many more if I actually keep updating this thing more than once a year. As part of the update I added about 100MB worth of images, which is not a terrible lot, but it has slowed the Jekyll compilation times quite a bit. So this is bound to get super annoying real quick… But I guess that’s par for the course: all software sucks, and it is a fucking miracle the internet even works, seeing how nearly every website in existence is held in place with a digital equivalent of duct tape.

You can see the fruits of my labor at gigiedgleyfansite.com. While it’s not perfect, I think it is a huge improvement over the old Joomla based site. Let me know what you think.

Private Journaling: Reading
http://www.terminally-incoherent.com/blog/2013/07/29/private-journaling-reading/
Mon, 29 Jul 2013 14:08:18 +0000

Back in March, I more or less definitively resolved my private journal writing problem when I created MarkdownJournal.com. I made it primarily for myself, and so it was designed to scratch all of my particular itches. It was made to be a web app with a decent mobile interface, so I can add entries whenever and wherever I am. I made it use Dropbox as a back-end so that I would have direct, file-system access to my journal files from any machine I own. I built it to use Markdown so that I could edit the source files with any text editor if and when I needed to.

The only issue it didn’t really address was reading the damn thing. It turns out that sometimes you just want to go back and read what you wrote, and I didn’t actually put that functionality into my app, because I didn’t really want to host or cache anyone’s personal writings on my server. I always imagined my app being just a web-editor front end for the files you store in your private Dropbox account. I figured that if you wanted to read your journal you’d just go to the source files and read them. Of course, I based this assumption on a survey I conducted on users of the application, with a sample size of: me.

Dogfooding guys! Haven’t you heard about it? It means you build a tool for yourself and you don’t give two shits about users that aren’t you. Or something like that.

Anyways, turns out that’s actually not how I was using my journal.

My private journal happens to be much like this blog, just more boring and more concise, because I don’t explain things or even attempt to be coherent half of the time. It also contains much more of Aubrey Plaza (massive celebrity crush at the moment, don’t judge), Warhammer (so I’m collecting a Skaven army now, because I apparently hate having money in my wallet) and randomly terribad ideas (what if cats are natural plane-walkers, and when they purr the resonance actually shifts reality according to their unknowable but likely sinister agenda). I also occasionally paste links or code snippets into my entries when jotting down ideas for possible future projects (though I’m too lazy to actually implement even a fraction of these). Hell, sometimes I even put pictures in there.

Markdown makes this easy, but reading files full of links, code snippets, and image tags can become a little tedious. Often I’d actually render the entry into HTML before reading it, just so that all the links and markup would look the way they were supposed to.

At one point I got tired of manually converting the files all the time, so I wrote a nifty script that would grab the current month’s journal file, run it through pandoc, and then open the resulting output in a web browser:

#!/bin/bash

PAGE_TITLE=$(date +"%B-%Y")

# Set these environment variables to make it work on other systems
[ -n "$JOURNAL_BROWSER" ] || JOURNAL_BROWSER=links
[ -n "$JOURNAL_DIR" ] || JOURNAL_DIR=/home/luke/Dropbox/Apps/Markdown\ Journal
[ -n "$JOURNAL_PANDOC_OPTS" ] || JOURNAL_PANDOC_OPTS="--section-divs --title-prefix=$PAGE_TITLE"

# cygwin users - link your dropbox folder in cygwin file system - like: /foo/bar
# use the unix style path in your variable - the script will translate later
# you will need windows version of pandoc since cygwin doesn't have it yet

FILE=$(date +"%Y-%m.%B.markdown")

if [ -f "$JOURNAL_DIR/$FILE" ]; then

    # Make sure the script still works under cygwin with windows version of pandoc
    if [[ $(uname -s) == "CYGWIN"* ]]; then
        # translate paths to windows ones for pandoc.exe
        JOURNAL_FILE="$(cygpath -aw "$JOURNAL_DIR")/$FILE"
        JOURNAL_STYLE="$(cygpath -aw "$JOURNAL_DIR")/style"
        JOURNAL_TMP="$(cygpath -aw "/tmp")/$$.html"
    else
        # use normal unix paths
        JOURNAL_FILE="$JOURNAL_DIR/$FILE"
        JOURNAL_STYLE="$JOURNAL_DIR/style"
        JOURNAL_TMP=/tmp/$$.html
    fi

    # convert file
    if [ -f "$JOURNAL_STYLE" ]; then
        pandoc -s "$JOURNAL_FILE" -H "$JOURNAL_STYLE" -o "$JOURNAL_TMP" $JOURNAL_PANDOC_OPTS
    else
        pandoc -s "$JOURNAL_FILE" -o "$JOURNAL_TMP"
    fi

    echo "$JOURNAL_TMP"
    # open in a browser
    eval "$JOURNAL_BROWSER" '$JOURNAL_TMP'
else
    echo "Sorry, no such file: $JOURNAL_DIR/$FILE"
fi

It’s all pure bash. I know I should man up and switch to zsh one day but… well, I know bash, kinda. And I do stuff like this in it, so it works for me most of the time. As you can probably see, I went out of my way to ensure the damn thing worked on Mac, Linux and also Windows (via Cygwin). I’ve been using that for a while. I even had a nifty trick which would exploit pandoc’s feature set to inject a custom stylesheet into the output files so they would be rendered according to my specifications.

But alas, having a dedicated reader script was a bit limiting. So I decided Markdown Journal needed a built-in, native reader feature. One thing I was adamant about though was that I didn’t actually want to cache or save anything on the server. In essence I didn’t want any user content touching the file system. I figured that the only way to do this was to slurp the entire file into memory, render it, and throw it all away when the user leaves the page. It turns out that rendering markdown formatted strings in Sinatra is actually super easy:

get '/read/:file' do

    # make sure the user authorized with Dropbox
    redirect '/login' unless session[:dropbox]

    # get DropboxSession out of Sinatra session store
    dropbox_session = DropboxSession::deserialize(session[:dropbox])

    # make sure it still has an access token
    redirect '/login' unless dropbox_session.authorized?

    client = DropboxClient.new(dropbox_session, ACCESS_TYPE)
    temp = client.get_file(params[:file])

    erb :read, :locals => { :content => markdown(temp) }
end

Note that the majority of the block above is just house-keeping and access control code. The relevant lines are #13, where your journal file is actually read into memory, and #15, where it is rendered onto the page. That last line is especially interesting because it does a lot of work. It converts the markdown formatted string contained in the temp variable into an HTML string, assigns it to the :content variable, and passes it to an e-ruby template which is rendered onto the page. At no point is anything saved to disk.

Granted, this is probably not the best way to do this and it definitely won’t scale in the long run, but somehow I don’t expect my little app to explode and become mainstream. For one, the pool of people who enjoy markdown, have a Dropbox account, and are in the market for a simple journaling app is very, very limited.

If you log into Markdown Journal you will now see links to your past journal entries listed right below the input box like this:

Reader Feature


The way I enumerate these is actually very simple – I use the metadata dump from Dropbox to get a list of all the files in the Markdown Journal app directory (remember, it only has access to its own dedicated folder – not all of your Dropbox) like this:

client = DropboxClient.new(dropbox_session, ACCESS_TYPE)
list = client.metadata('/')
@files = list['contents']

Then I render the bullet point list like this in the e-ruby template:
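The template snippet itself didn’t survive the feed export, so here is a hypothetical reconstruction using Ruby’s stdlib ERB. The variable names and the shape of the metadata entries (hashes with a "path" key) are my assumptions, not the actual Markdown Journal code:

```ruby
require "erb"

# Sample of what the Dropbox metadata 'contents' list looks like,
# trimmed down to just the paths (hypothetical data).
files = [{ "path" => "/2013-06.June.markdown" },
         { "path" => "/2013-07.July.markdown" },
         { "path" => "/style" }]

# Skip anything that is not a journal file, link to the reader route,
# and label each link with the first three letters of the month name.
template = ERB.new(<<~HTML)
  <ul>
  <% files.each do |f| %>
  <% next unless f["path"] =~ /^\\/[0-9]*-[0-9]*\\.[A-Za-z]*\\.markdown$/ %>
  <li><a href="/read<%= f["path"] %>"><%= f["path"][/\\.([A-Za-z]*)\\.markdown$/, 1][0, 3] %></a></li>
  <% end %>
  </ul>
HTML

html = template.result(binding)
```

The "/style" entry gets filtered out by the first regexp, and the two journal files come out as links labeled “Jun” and “Jul”.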

This is really concise because, much like in Perl, regexps are handled natively in Ruby. There is no need to wrap them in strings and double-escape everything. If you don’t speak regexps, let me explain these. The first one matches anything that:

  • ^\/[0-9]* starts with a slash followed by a bunch of numerical characters
  • -[0-9]* followed by a dash and some more numbers
  • \.[A-Za-z]* followed by a dot and some alpha characters
  • \.markdown$ followed by .markdown which is the last thing on the line

In other words, match only files that follow the naming convention used by Markdown Journal, which is YYYY-MM.Monthname.markdown. This might be slightly too generic (I’m not restricting the length of the numerical strings) so it could match something that’s not a journal file, but so far it has worked. If anyone has suggestions on how to improve this, I’d love to see your take on it.

The second regexp pretty much just extracts the month name: give me the string of alpha characters sandwiched between two dots immediately followed by the word markdown, more or less. Then on top of that I only grab the first three characters of the result to save space. It doesn’t really matter when viewed on a desktop, but in the mobile layout long month names like “February” would wrap to the next line, screwing up the alignment.
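Put together, the two expressions and the three-character truncation look roughly like this (the constant names are mine, not something from the actual app):

```ruby
# Match full journal file paths like "/2013-07.July.markdown"
JOURNAL_FILE = /^\/[0-9]*-[0-9]*\.[A-Za-z]*\.markdown$/
# Capture the month name sandwiched between the two dots
MONTH_NAME   = /\.([A-Za-z]*)\.markdown$/

path  = "/2013-07.July.markdown"
month = path[MONTH_NAME, 1] if path =~ JOURNAL_FILE   # => "July"
label = month[0, 3]                                   # => "Jul"
```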

TLDR: Markdown Journal has a new feature that lets you read your journal entries.

]]>
http://www.terminally-incoherent.com/blog/2013/07/29/private-journaling-reading/feed/ 0
Ruby Gems and Warhammer http://www.terminally-incoherent.com/blog/2013/06/12/ruby-gems-and-warhammer/ http://www.terminally-incoherent.com/blog/2013/06/12/ruby-gems-and-warhammer/#comments Wed, 12 Jun 2013 14:06:27 +0000 http://www.terminally-incoherent.com/blog/?p=14536 Continue reading ]]> The other day I wrote about my attempts to get back into Warhammer. Today I wanted to touch upon a slightly different aspect of the hobby. The open secret of the war gaming community is that models are technically optional.

This may sound counterintuitive at first, seeing how the entire “hobby” aspect of war games revolves around models: collecting, painting, converting, trading, etc.. Undeniably, they are very important, and without them the companies that release and market such games would have no profits. But when you just want to play a quick game with a friend, it can be done without models.

Unlike strategy games that are played over the network via keyboard and mouse under strict rules enforced by a soulless machine, tabletop war games are gentlemanly affairs conducted between consenting man-children. In essence, when you play Warhammer you are on the honor system – there is no game engine to make sure rules are followed, and no referee to ban someone for cheating. You simply have to agree to follow the rules and conduct yourself in a sportsman-like way because otherwise the game ceases to be fun.

Because of this, you can technically play a game without any models. Or rather you can substitute actual models with just about anything: lego figurines, old plastic army men, poker chips or even paper cut-outs. When you play Warhammer Fantasy you typically deal with square or rectangular units that all move as one. Models are typically mounted on square bases that are 20mm on the side (or 25x50mm for mounted cavalry). If you are really lazy all you need to do is to draw a grid on paper, cut it out and you have a unit. When the models “die” instead of taking them off, you can either cross them off, or rip them out of your paper cut-out.

This is more or less how we have been playing lately. None of the participating mentlegen had a complete army – at least not by the rules as they stand today. We all had some old models at our disposal, and so we decided to just create the types of armies we would hope to one day assemble, and simply use paper cut-outs for all the units we didn’t own.

Unfortunately drawing grids is actually not that easy when you can’t find a metric system ruler (or any ruler for that matter) in the house, and you are forced to trace model bases on notebook paper to create crooked grids that line up only in theory. After scribbling down a dozen units like this I decided there ought to be a better way of doing this. And being the lone programmer in the group, I was uniquely equipped with the know-how to accomplish such a thing.

The next day I sat down, rolled up my sleeves and created MovementTray – a quick and dirty Ruby script that spits out printable grids that can be used as stand-in Warhammer Fantasy units, or as movement trays (pieces of paper you put underneath the models so that they are easier to move as a unit).

Screenshot


Before you give me any credit for it, let me explain: this took about 20 minutes and was absolutely trivial. I put it online mostly just to make it downloadable. The core functionality is roughly 20 lines of code, and the rest is just fluff, error checking and user prompts. It was much, much easier to write than I expected, primarily because of Prawn.

Prawn is an amazing Ruby Gem that lets you generate PDF files on the fly. It is feature rich, ridiculously comprehensive and super easy to use. Let me give you a dumb simple hello world example:

require 'prawn'

pdf = Prawn::Document.new
pdf.text "Hello World!"
pdf.render_file "hello.pdf"

Yep, it is that simple. Drawing custom shapes is just as easy – Prawn defines a whole array of shape drawing methods. So to generate my grids all I had to do was repeatedly call the rectangle method (which takes three arguments: the top-left corner coordinates, the width and the height) in a loop. Here is an example of the document my little script generates:

Output of mt.rb

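The loop that produces a grid like the one above is mostly coordinate arithmetic. Here is a rough sketch (the method and constant names are my own, not the actual MovementTray code); Prawn works in PDF points, so a 20mm base side converts at 72 points per inch over 25.4 mm per inch:

```ruby
MM_TO_PT = 72.0 / 25.4
BASE_MM  = 20   # standard infantry base is 20mm on the side

# Return the top-left corner of every cell in a rows x cols grid,
# laid out from a given top-left origin (in PDF points).
def grid_cells(rows, cols, origin_x: 0, origin_y: 700)
  side  = BASE_MM * MM_TO_PT
  cells = []
  rows.times do |r|
    cols.times do |c|
      cells << [origin_x + c * side, origin_y - r * side]
    end
  end
  cells
end

# With Prawn loaded, drawing the grid would then be roughly:
#   pdf  = Prawn::Document.new
#   side = BASE_MM * MM_TO_PT
#   grid_cells(4, 5).each { |x, y| pdf.stroke_rectangle([x, y], side, side) }
#   pdf.render_file("tray.pdf")
```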

If you ever need to programmatically generate PDF files, Prawn is definitely the way to go. I highly recommend it, especially seeing how it has a strong team, a lot of contributors, and sees a lot of activity on GitHub.

The other really nifty gem I discovered was Trollop. What does it do? It simplifies parsing of command line arguments. I stumbled upon it almost accidentally, but now that I know about it I wonder how I ever created command line tools without it.

If you think about it, it is both baffling and staggering how much code typically gets written to parse command line arguments. Simply grabbing one or two values from ARGV is easy and quick, but when you want unix-like command line switches, some of which are optional toggles while others take sub-arguments, the problem of parsing them suddenly becomes non-trivial. Especially if you want the user to be able to specify the arguments in arbitrary order, and be able to leave out optional ones in favor of sane defaults.

In my particular case, I was looking at a program that generates PDF files in 20 lines, and requires 100+ lines to parse the arguments which seemed absolutely ridiculous. Trollop allowed me to reduce all that mess into more or less this:

opts = Trollop::options do
    opt :base, "Base size: standard, large, monster, cavalry", :short => "-b", :default => "standard"
    opt :rows, "Number of rows", :short => "-r", :type => :int, :required => true
    opt :cols, "Number of columns", :short => "-c", :type => :int, :required => true
    opt :file, "Output file", :short => "-f", :type => :string, :required => true
end

Basically you need a single line per command line switch. The snippet above defines both long (double-dash) and short option switches and checks for their presence if they are required. It also automagically defines --help and -h options that will display all the options along with their descriptions.
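For comparison, the closest stdlib equivalent uses OptionParser. It is noticeably more verbose, and you still have to enforce required options yourself. This is my own sketch of the same interface, not anything from the actual script:

```ruby
require "optparse"

# Parse MovementTray-style options with Ruby's stdlib OptionParser.
def parse_args(argv)
  opts = { base: "standard" }   # default base size
  OptionParser.new do |p|
    p.on("-b", "--base BASE", "Base size: standard, large, monster, cavalry") { |v| opts[:base] = v }
    p.on("-r", "--rows N", Integer, "Number of rows")    { |v| opts[:rows] = v }
    p.on("-c", "--cols N", Integer, "Number of columns") { |v| opts[:cols] = v }
    p.on("-f", "--file FILE", "Output file")             { |v| opts[:file] = v }
  end.parse!(argv)
  opts
end
```

Unlike Trollop, nothing here validates that --rows, --cols and --file were actually supplied; you would have to add that check by hand, which is exactly the kind of boilerplate Trollop makes go away.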

I don’t know what it is about Ruby, but regardless of what I set out to do, there is usually an amazingly designed, well maintained and meticulously documented gem that fits my needs exactly. Half the time it just feels effortless.

]]>
http://www.terminally-incoherent.com/blog/2013/06/12/ruby-gems-and-warhammer/feed/ 4
Building Sinatra apps with Dropbox-SDK http://www.terminally-incoherent.com/blog/2013/03/11/building-sinatra-apps-with-dropbox-sdk/ http://www.terminally-incoherent.com/blog/2013/03/11/building-sinatra-apps-with-dropbox-sdk/#comments Mon, 11 Mar 2013 14:04:12 +0000 http://www.terminally-incoherent.com/blog/?p=14029 Continue reading ]]> When I was building my Markdown Journal app I noticed that there were no good tutorials showing you how to use the official Dropbox-SDK gem with the Sinatra framework. Granted, this is not necessarily an issue if you know what you are doing, but since this project was the first time I was using both the SDK and Sinatra, I was a little bit shaky on how to combine the two at first. So I figured I might as well write it up here, so that the next person who decides to do this has something to work from.

The Dropbox Core API tutorial is pretty good, but it only shows you how to handle the authentication for a single user, client side application running from the console. It does not show you how to authenticate in a web based, multiuser environment. If you google for Sinatra and Dropbox you will get a few hits, such as this app and this gist. You should however note that they are both using the third party Dropbox gem which is different from the official Dropbox SDK gem. I wanted to use the official one to avoid future inconsistencies. Not that I have anything against third party gems like this. It’s just a fact of life that services such as Dropbox like to tweak their authentication schemes, and gem authors lose interest and get tired of chasing a moving target after a while. I’ve been burned by this sort of thing in the past so I try to use “official” stuff whenever possible.

So, how do you authenticate with Dropbox? Well, if you happen to be just running a console app, it is easy. First you create a session with the APP_KEY and APP_SECRET values you get when you sign up for a Dropbox dev account. Then you generate a request token, and send the user to the Dropbox auth page using a specially generated URL that includes your app id and the request token. You then request the access token, which is only returned if the user successfully logged into Dropbox and authorized your app using the URL you provided. The code looks more or less like this:

require 'dropbox-sdk'

# Create session
session = DropboxSession.new(APP_KEY, APP_SECRET)

# Create request token
session.get_request_token

# Make the user sign in and authorize this token
authorize_url = session.get_authorize_url
puts "Please visit that web page and hit 'Allow', then hit Enter here.", authorize_url
gets

session.get_access_token
# Now you can upload/download files

When you use Sinatra however this won’t work. Usually what you do on the web is present the user with a log-in button, which they click, authorize your app then gain access to a protected section of your app which can then manipulate their Dropbox files. Ideally you want to write your app a bit like this:

require 'sinatra'
require 'dropbox-sdk'

get '/' do
    erb :index
end

get '/login' do
    # Authenticate with Dropbox and create a session
    redirect '/stuff'
end

get '/stuff' do
    # redirect to login unless authenticated
    # do actual dropbox stuff
end

This won’t work as written, since Sinatra does not store any session data between requests. Each route defined in the example above is completely stateless and self contained. To be able to “log in” and maintain the session for your users you have to enable the cookie sessions feature of the framework using the enable :sessions keyword. This gives you an auto-magical hash called session which works more or less like the $_SESSION superglobal in PHP.

The second problem is that the Dropbox session object can’t be easily passed around between the Sinatra routes. The way to actually authenticate a Dropbox session in Sinatra is:

  1. Create Dropbox session object
  2. Get request token
  3. Serialize the session object
  4. Stuff the serialized object in the session array
  5. Redirect user to Dropbox auth-url

Or in other words, you basically do this:

enable :sessions

get '/login' do
    # Create dropbox session object and serialize it to the Sinatra session
    dropbox_session = DropboxSession.new(APP_KEY, APP_SECRET)
    dropbox_session.get_request_token
    session[:dropbox] = dropbox_session.serialize()

    # redirect user to Dropbox auth page
    redirect authorize_url = dropbox_session.get_authorize_url("http://example.com/stuff")
end

The get_authorize_url() method takes a callback URL as a parameter. This is the address to which your user will be sent upon successful authorization. For us this happens to be the /stuff address. What happens now?

  1. You deserialize the dropbox session object
  2. You get the access token
  3. You make sure session is authorized

Here is some sample code:

get '/stuff' do
    # Make sure session exists
    redirect '/login' unless session[:dropbox]

    # deserialize DropboxSession from Sinatra session store
    dropbox_session = DropboxSession::deserialize(session[:dropbox])

    # check if user authorized via the web link (has access token)
    dropbox_session.get_access_token rescue redirect '/login'

    # Do actual dropbox stuff
end

Now you can create a DropboxClient and do all the fun stuff like uploading and/or deleting files. Once you are done fiddling around and you want to log the user out the easiest way to do this is probably to “nil out” your session array like this:

get '/logout' do
    # destroy session data
    session[:dropbox] = nil
    redirect '/'
end

If you would like to see a real life example, you can check out my Markdown Journal code on Github. Note that it will only show you how to download and/or upload files to a designated App folder but it is probably a good place to start if you are planning to make your own Sinatra based app using Dropbox-SDK.

]]>
http://www.terminally-incoherent.com/blog/2013/03/11/building-sinatra-apps-with-dropbox-sdk/feed/ 1
Revisiting Private Journaling http://www.terminally-incoherent.com/blog/2013/03/04/revisiting-private-journaling/ http://www.terminally-incoherent.com/blog/2013/03/04/revisiting-private-journaling/#comments Mon, 04 Mar 2013 15:02:34 +0000 http://www.terminally-incoherent.com/blog/?p=13970 Continue reading ]]> A while ago I wrote about my search for a good private journaling solution. Since then I have tried dozens of different apps and services without finding anything I liked. At one point I even wrote my own little journaling client thinking I could hit the sweet spot. Unfortunately I was wrong, and my own design turned out to be just as bad as the competition.

Looking at it in retrospect, I now know that I had been trying to solve the wrong problem. I was designing a generic journal tool, rather than building something specific to my own needs. Part of the problem was that I wasn’t sure how exactly I was going to be using this tool. I had an idea of a long form journal I wanted to write, but that’s not what I ended up doing. I guess I should have started with a journal and built a tool around it, rather than the other way around.

It turns out that creating long, exhaustive daily posts was not something I enjoyed or had time for. I seem to get most of the long form rants out of my system via this blog (and the few other public sites I run) – that’s sort of where I channel most of my clever musings. After externalizing all of my interesting musings via these public outlets, the remainder was just boring, trivial or redundant.

I don’t actually enjoy chronicling every minute of my day. While there would be some utility in keeping a log of things I have done each day for future reference, actually putting these things down on paper is needless drudgery to me.

I actually enjoy browsing the archives of my own blog because it takes me back to those old moments. Since I tend to write about things I love or care about, all of the posts have a good deal of emotional weight attached to them. I found out that creating more detailed and personal accounts of my daily life had the opposite effect – I noticed I was mostly chronicling my mistakes and being overly critical of myself. It seemed like a good idea at the time (self improvement and all) but reading it back would just bum me out and force me to dwell on past failures.

That combined with the fact that on your average day I usually don’t have the time or inclination to play a recollection game to write a five thousand word essay of useless daily trivia made me stop trying altogether. This sort of journal does not work at all for me.

What does work? Well, I realized that for a journal to work it must meet the following requirements:

  1. First and foremost, the entries have to be short. I’d rather write a bunch of twitter style thought-bursts throughout the day, than a long thoughtful essay at the end of the day.
  2. Entries should be positive in nature – random musings, reflections, celebrating accomplishments, writing down jokes, etc.. They should concentrate on what I’m into at the moment, and what I want to remember about a given day years from now.
  3. Because it is sort of a continuously updated stream, it should be accessible over the internet. So it ought to be either web based, or stored in the Cloud somehow.
  4. Since I don’t always have a computer with me to jot down my thoughts, the journal should also have a dedicated iPhone App or be accessible via mobile friendly web interface.
  5. It should be stored in plain text files and not tied to a single client, unless that client is open source and available on all platforms. This is especially important with respect to the iOS based journaling tools. While there are a lot of great little diary type apps in the AppStore, most of them either store your data on their own servers, or use some proprietary file format. They are also frequently exclusive to iOS, or to the OSX/iOS combo, leaving Windows and Linux out completely.

I gave up on my encryption requirement, which I mentioned in the old post, for a very simple reason: I forget passwords. If I keep my accounts reasonably secure, then I shouldn’t worry about setting per-file passwords on a bunch of text files. The aim of a journal is to have something to aid in recollection and memory retrieval a long time down the road. Chances are that in a decade, I will no longer remember the passwords I was using circa 2013. So I run the risk of locking myself out of my own journals, which seems counterproductive.

Looking at my list of requirements, it almost seems that the best tool for the job would simply be Dropbox and a text editor of some sort. I could use Vim when on an actual computer, and something like Plain Text App on the phone. Unfortunately this sort of solution does not have any built-in automation. I would have to date my entries myself, which may not seem like a huge issue at first. After all, this is what people have done when they used the dead-tree medium.

The thing is I don’t really want a traditional “journal”. I want something like Twitter, but without the character limit (so like Pownce I guess) that saves to plain text files in a Dropbox folder. I want my entries tagged with date and time automatically, so I don’t have to think about it. I just don’t want to be bothered typing in a date/time string into a text editor 6-7 times a day.
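The core of what I wanted is actually tiny. Here is a sketch of the auto-dating behavior (hypothetical code with made-up names and file layout, not what Markdown Journal actually does):

```ruby
# Append a timestamped entry to the current month's markdown file,
# adding a day heading only the first time that day is written to.
def append_entry(dir, text, now = Time.now)
  file     = File.join(dir, now.strftime("%Y-%m.%B.markdown"))
  heading  = now.strftime("## %A, %B %d\n")
  stamp    = now.strftime("### %I:%M %p\n")   # 12-hour time format
  existing = File.exist?(file) ? File.read(file) : ""
  entry    = ""
  entry   += "\n" + heading unless existing.include?(heading)
  entry   += "\n" + stamp + "\n" + text + "\n"
  File.open(file, "a") { |f| f.write(entry) }
  file
end
```

Calling it several times a day just keeps appending twitter-sized, timestamped bursts under a single day heading, which is exactly the workflow described above.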

Surprisingly enough, a guy called Matthew Lang built a tool that is almost perfect for my needs. His creation is called Journalong and it is basically just a simple text box that sends the data to Dropbox:

Journalong


It does everything I need, but it is not without flaws:

  1. I don’t know who Matthew Lang is and how much I should trust him not to read my entries.
  2. Matthew wants you to pay $10 per year, for the privilege of using his app. This may seem like a bargain, until you actually realize how little code is needed to build the application he built.
  3. Matthew never set the DNS record for his www subdomain, so half the time when my browser auto-completes the URL for me I get sent into the cyberspace void.
  4. Journalong has no apple touch icon, so when I put a link to it on my iPhone it looks like absolute shit.
  5. Matthew seems to be a fan of military time. I want my entries to be dated using the 12 hour format but there isn’t a way to do this.

About a week ago I sent Matthew an email asking if he ever plans to do anything about the last three of these issues, but it was completely ignored. This puts #1 and #2 on the list in perspective. In this day and age, the majority of the shit I use on the web is free. If I was to shell out money for this simple app, I should at least get some support… Otherwise, why not write something like this myself?

No, seriously. I’ve been using Journalong’s free trial and I actually kinda like the functionality it provides. But it is super simple, and the more I used it the more I realized I could actually do better. So I sat down and implemented my own Dropbox Journal in approximately 150 lines of Ruby (not counting HTML markup). I called it Markdown Journal.

Markdown Journal

Markdown Journal Screenshot

How does it differ from Matthew Lang’s application?

For one, Markdown Journal is open source. You can find the code on Github and easily verify I’m not doing anything shady with your data. Secondly, it is completely free and I intend it to stay that way.

In all honesty, I would actually feel bad charging anyone a recurring fee to use something that took me about two evenings to put together, and is shorter than 150 lines of code, plus some HTML. The webapp runs on the excellent Sinatra microframework and uses the official Dropbox SDK gem for authentication and file uploading. It was really simple and fun to put together, and I’m hosting it on a super cheap shared host, alongside some other sites, because it really doesn’t require that many resources, nor do I see it ever becoming terribly popular.

I also have Matthew beat on features. My date/time headings are completely configurable via a simple YAML file you can drop into the same directory you will be keeping your journal files in. Granted, creating text files is scary to some people, but chances are that if you choose to write a journal in Markdown format and host it on Dropbox, you can probably figure out how to create a YAML file based on an example on the site.

Finally, I have properly sized apple touch icons so when you save the link on your home screen it actually shows a proper icon rather than a blank square.

Let me know what you think. Would this tool be useful to you? What do you use for note-taking and journaling? Let me know in the comments.

]]>
http://www.terminally-incoherent.com/blog/2013/03/04/revisiting-private-journaling/feed/ 17
Rails 2.0 on Ubuntu Gutsy http://www.terminally-incoherent.com/blog/2008/04/03/rails-20-on-ubuntu-gutsy/ http://www.terminally-incoherent.com/blog/2008/04/03/rails-20-on-ubuntu-gutsy/#respond Thu, 03 Apr 2008 15:23:59 +0000 http://www.terminally-incoherent.com/blog/2008/04/03/rails-20-on-ubuntu-gutsy/ Continue reading ]]> I must confess that Rails makes me feel stupid every time I use it. The accepted truism about the framework is that it boosts your productivity like no other. Unfortunately people forget to tell you that there is a second part to this statement that goes something like this: “once you learn to think the Rails way”. It really forces a certain mindset upon you, and deviating from it means that you are actually working against the framework, rather than having it do the work for you. It takes a little while to get used to it, and there are moments when you have a great scaffold thing going on with a bunch of interacting tables/entities, but you sit there for 20 minutes trying to figure out how to make a simple pull down menu (aka a select element) which would let you choose the foreign key from the other table. You could do it the hard way, but it turns out that it is astonishingly simple:

<%= collection_select :foo, :bar_id, Bar.find(:all), :id, :bar_name %>

This will create a select statement looking something like this:
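The original output sample seems to have been lost in the export; for a Foo model with a bar_id foreign key the generated markup would look roughly like this (the option values and labels are made up):

```html
<select id="foo_bar_id" name="foo[bar_id]">
  <option value="1">First bar</option>
  <option value="2">Second bar</option>
</select>
```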

It really took me some digging to figure that out – mainly to realize that what I needed was in the ActionView::Helpers::FormOptionsHelper class. A lot of the online tutorials simply gloss over little details like that – for example, the importance of helpers, which I now know are pretty damn convenient.

Then there is that whole Rails 2.x vs. Rails 1.x debacle. The two are not entirely compatible, and there are significant differences in the way they work. Needless to say, version 2.0 instantly invalidated every single Rails book on the market by removing the dynamic scaffolding which everyone was using in their initial examples. I already got burned by it once, and now I was hit with it again, only from the other side. When I decided to install Rails on Gutsy, I did what any reasonable Ubuntu user would do:

sudo aptitude install ruby rails mongrel

Few minutes later I was all set up and ready to go. Or was I? I gently issued a command like this:

script/generate scaffold Foobar foo:string bar:string

I got hit by some cryptic error about an unknown string identifier or something along those lines. WTF? It took me a few minutes of useless googling and cursing to realize I simply had the old version of Rails installed. I could have gone along and simply used the old dynamic scaffolding for my project, but I figured that if I am to learn this damn framework, I should probably use the latest and greatest version. How to install it on Gutsy though? The answer is – via gems.

First, get rid of the 1.x rails installation if you actually have it on your system:

sudo aptitude remove rails

Next, install the new rails:

sudo gem install rails --include-dependencies

That should do it. I think it’s possible to downgrade back to 1.x if you remove Rails via gem and then install it back via apt, but I haven’t tried it.

Also, small caveat – you may or may not need to update your gems package to do that. You can do it by issuing a command:

sudo gem update --system

Be warned that it will actually break the gem command itself. If you try running it, you will get the following error:

/usr/bin/gem:23: uninitialized constant Gem::GemRunner (NameError)

Why is this? Take a look at this:

$ ls -l /usr/bin/ | grep gem
-rwxr-xr-x  1 root   root        701 2007-08-24 01:18 gem
-rwxr-xr-x  1 root   root        785 2008-04-01 11:25 gem1.8
-rwxr-xr-x  1 root   root       3201 2007-08-24 01:18 gemlock
-rwxr-xr-x  1 root   root       1778 2007-08-24 01:18 gem_mirror
-rwxr-xr-x  1 root   root        515 2007-08-24 01:18 gemri
-rwxr-xr-x  1 root   root         70 2007-08-24 01:18 gem_server
-rwxr-xr-x  1 root   root       1813 2007-08-24 01:18 gemwhich
-rwxr-xr-x  1 root   root       7947 2007-08-24 01:18 index_gem_repository

Apparently all the gem_* commands have been deprecated in the 1.x releases of rubygems. The version in the Gutsy repo is 0.9.4, which means it still uses them. The update command brings you to the 1.1.0 release, but unfortunately does not remove the old scripts from /usr/bin. So the original gem and gem_* commands are useless. A quick workaround is:

sudo mv /usr/bin/gem /usr/bin/gem.old
sudo ln -s /usr/bin/gem1.8 /usr/bin/gem

You could probably remove the gem binary, but I simply renamed it, and then created a link to gem1.8 in its place. It works well enough, and if you need to do a downgrade later on, all the files are still intact.

[tags]ruby, ubuntu, gutsy, rails 2.0, rails, gems, rubygems, gems 1.10, gems 0.9.4[/tags]

]]>
http://www.terminally-incoherent.com/blog/2008/04/03/rails-20-on-ubuntu-gutsy/feed/ 0
Starting a Rails 2.0 Project http://www.terminally-incoherent.com/blog/2008/01/17/starting-a-rails-20-project/ http://www.terminally-incoherent.com/blog/2008/01/17/starting-a-rails-20-project/#comments Thu, 17 Jan 2008 16:47:05 +0000 http://www.terminally-incoherent.com/blog/2008/01/17/starting-a-rails-20-project/ Continue reading ]]> Since Rails 2.0 fucked up just about every single online tutorial out there by removing the scaffold method, I decided to document a step-by-step process of starting a brand new project. I think it’s important because the approach has changed. In the past you could start by designing the database, and then work your way up from there. This is no longer the case.

I’m using SITS as an example here. What is SITS? As far as you are concerned, it is my little vaporware project that exists only in my feeble mind, and as an empty Google Code page. Oh, and it also exists as part of the monstrous 45,000-line PHP craziness I maintain at work. It used to be an individual PHP project at one point, then it got swallowed by the all-purpose evaluation and report tracking monster, and extracting it from there proved to amount almost to a total rewrite. I figured – why not use Rails and learn something along the way?

So let’s get started. The first thing we want to do is actually create a project. I’m assuming you have Ruby, Rails and MySQL installed. Don’t touch the database yet. Btw, I’m using Windows because I don’t feel like switching machines for this – sue me.

Go go gadget RAILS:

C:\projects>rails sits
      create
      create  app/controllers
      create  app/helpers
      create  app/models
      create  app/views/layouts
      create  config/environments
      create  config/initializers
      create  db
      create  doc
      create  lib
     
      --- snip ---

      create  log/server.log
      create  log/production.log
      create  log/development.log
      create  log/test.log

It’s a whole lot of output so I snipped it. You will also note I’m using C:\projects as my directory. This is actually a junction that points somewhere else, but you don’t need to worry about this. I just got annoyed that the paths were so long they were causing my code box to scroll sideways, so I made C:\projects a junction to my real projects folder.

Anyways, on to configuration. \projects\sits\config\database.yml is the database config file – and practically the only config file we will need to write for this project. I have set it up like so:

development:
  adapter: mysql
  database: sits_dev
  username: root
  password: [passwd]

test:
  adapter: mysql
  database: sits_test
  username: root
  password: [passwd]

production:
  adapter: mysql
  database: sits_prod
  username: root
  password: [passwd]

I’m using the root account right now because we will be using built-in Rails magic to add and drop tables and do all kinds of other fun things. Once the project is ready for production, I will probably go back and change this to a locked-down user which only has select, insert, update and delete privileges for the tables it needs. As a side note, when you edit this file make sure that the config lines for each environment are indented exactly two spaces. No more, no less. I mean, unless you enjoy cryptic errors – cause you’re gonna get plenty if you fuck up the layout. It’s kinda like in Python, but only for this file. Ruby itself doesn’t really care about indentation that much.
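If you want to double-check that indentation without bouncing it off Rails, the file is plain YAML, so a one-off Ruby snippet can parse it for you. This is just a sanity-check sketch using the standard library; the inlined config mirrors the development block above with a placeholder password.

```ruby
require "yaml"

# YAML parses the two-space indentation into nested hashes, so a botched
# indent either raises a parse error or nests the keys wrong - easier to
# spot here than through a cryptic Rails stack trace.
config = YAML.load(<<YML)
development:
  adapter: mysql
  database: sits_dev
  username: root
  password: secret
YML

puts config["development"]["database"]  # => sits_dev
```

If the layout is off, the lookup comes back nil (or the load blows up), and you find out immediately instead of at app boot time.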

Let’s create the 3 databases:

C:\projects>rake db:create:all

Yup, rake is like fucking magic – you don’t even need to touch the MySQL tools. I did touch them though, to verify the databases were created:

Databases Created by Rake

We have the databases; now it’s time for scaffolding. What about models, you ask? What about controllers? Scaffolding does that for us. The bad part is that we need to roughly know what we want to scaffold from the get-go. I kinda know what I’m doing since I did this once already in PHP and then fucked it up by integrating it into a monster that will one day devour my soul. So I have an idea of what we should be designing. I will start with the user table because it is stand-alone (ie. it does not belong_to and is not composed_of anything). Our user will have a username, a password (hashed, of course), an email (naturally), an account creation date and a user level. The level will of course be on a scale from luser to admin, and I will use numeric values to represent them. Why? Because I used enums in the past and I ended up with contraptions like lpuser (low priv user), superuser, superadmin, etc. A numeric level system is easy to extend.
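To illustrate what I mean by easy to extend, here is the level idea as a plain Ruby sketch. The constant names and cutoff values are made up for illustration – they are not anything SITS actually defines.

```ruby
# Numeric levels: ordered, comparable, and a new tier later just means
# picking an unused number - no enum column to alter.
module UserLevel
  LUSER = 0
  POWER = 5
  ADMIN = 10

  def self.admin?(level)
    level >= ADMIN
  end
end

puts UserLevel.admin?(3)   # => false
puts UserLevel.admin?(10)  # => true
```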

C:\projects\sits>ruby script\generate scaffold User username:string 
	password:string created_at:datetime level:integer
      exists  app/models/
      exists  app/controllers/
      exists  app/helpers/
      create  app/views/users
      exists  app/views/layouts/
      exists  test/functional/
      exists  test/unit/
      create  app/views/users/index.html.erb
      create  app/views/users/show.html.erb
      create  app/views/users/new.html.erb
      create  app/views/users/edit.html.erb
      create  app/views/layouts/users.html.erb
      create  public/stylesheets/scaffold.css
  dependency  model
      exists    app/models/
      exists    test/unit/
      exists    test/fixtures/
      create    app/models/user.rb
      create    test/unit/user_test.rb
      create    test/fixtures/users.yml
      create    db/migrate
      create    db/migrate/001_create_users.rb
      create  app/controllers/users_controller.rb
      create  test/functional/users_controller_test.rb
      create  app/helpers/users_helper.rb
       route  map.resources :users

Can you see what I’m doing here? I’m specifying the fields and their data types as arguments for the scaffold script. These fields will be included in my views, and I will also get an automagical database migration file. I can go in and edit it, adding extra fields that won’t show up in the views if I want to. To do so I just need to open the db\migrate\001_create_users.rb file:

class CreateUsers < ActiveRecord::Migration

  def self.up
    create_table :users do |t|
      t.string :username
      t.string :password
      t.datetime :created_at
      t.integer :level
      t.timestamps
    end
  end

  def self.down
    drop_table :users
  end
end

Note - I didn't write that code. Rails did. Neat, huh? All I needed to do was specify a bunch of field names and data types. What data types can you use in a migration (or when specifying a scaffold)? I think the list is as follows:

Rails Migration Types

List is courtesy of the Rails Migration Cheatsheet. They have an expanded version of this table on their site - I just stole a piece of it for quick reference.

What now? We can use our old friend Mr. Rake:

C:\projects\sits>rake db:migrate
(in C:/projects/sits)
== 1 CreateUsers: migrating ===================================================
-- create_table(:users)
   -> 0.0630s
== 1 CreateUsers: migrated (0.0630s) ==========================================

In effect we should see two new tables in the _dev database like this:

Tables Created by Migration

Just to be sure, let's look at the schema of the users table to make sure everything is correct:

Users: Database Schema

Yup, everything works! Now let's launch this shit! I'm using Mongrel instead of WEBrick because it is ♪Harder, Better, Faster, Stronger♪. Also it has a picture of a dog on its webpage, so clearly it must be superior. Not even mentioning the Win32 service support. To install Mongrel just do:

gem install mongrel

To start it, simply navigate to the project and run:

C:\projects\sits>mongrel_rails start
** Starting Mongrel listening at 0.0.0.0:3000
** Starting Rails with development environment...
** Rails loaded.
** Loading any Rails specific GemPlugins
** Signals ready.  INT => stop (no restart).
** Mongrel 1.1.2 available at 0.0.0.0:3000
** Use CTRL-C to stop.

I actually had no clue why the default Mongrel output displays the IP as 0.0.0.0 instead of 127.0.0.1 – apparently it just means Mongrel is listening on all network interfaces – but I assure you it works. We can test it by simply navigating to http://localhost:3000/users

Scaffolded Project

All the CRUD actions are supported here so I can easily add a new user by simply hitting a link on the bottom of this page:

Create a New User

I don't really like the creation date dialog - it probably would be smoother if the date got appended automatically at the moment of creation. But I think this is something I can easily change by editing either the controller or the view. As it is right now the password is not getting encrypted, so this is another bit that we need to fix. But this is pretty much the point of scaffolding - it gives you a starting point, not really a complete solution.
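For what it's worth, the creation date half mostly fixes itself: if a created_at column exists and you drop the field from the form, ActiveRecord stamps it automatically on save. The password is the real gap. Here is a rough sketch of the hashing half, pulled out of Rails so it runs standalone – the helper name is mine, and plain unsalted SHA1 is only for illustration (a salted scheme would be better in practice).

```ruby
require "digest/sha1"

# In the actual app this would hang off a before_create callback in the
# User model, replacing the plaintext password before it hits the table.
def hash_password(plaintext)
  Digest::SHA1.hexdigest(plaintext)
end

puts hash_password("secret").length  # => 40
```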

Is this better than the old way? On one hand, having the code generated for you is nice. On the other hand, the auto-generated code might be a little bit confusing to a newbie. The dynamic scaffold gave you a blank slate you could fill out by coding from scratch. Now we get a pile of cryptic code that needs to be deciphered before we can move on. Perhaps we will learn better this way, but I'm not sure. I kinda like the from-scratch approach. But there is not much I can do about it other than downgrading to a previous version of Rails. :P

That's all I have for today. I mostly just wanted to document all the little things you need to do to start a scaffolded project from scratch. I might post more as I re-write SITS, but I guess that really depends on whether or not I run into any interesting problems or a cool Rails thing I want to talk about.

[tags]rails, ruby on rails, ruby, sits, simple issue tracking system, scaffold, scaffolding, rails 2.0[/tags]

]]>
http://www.terminally-incoherent.com/blog/2008/01/17/starting-a-rails-20-project/feed/ 3
What happened to scaffold in Rails 2.x? http://www.terminally-incoherent.com/blog/2008/01/16/what-happened-to-scaffold-in-rails-2x/ http://www.terminally-incoherent.com/blog/2008/01/16/what-happened-to-scaffold-in-rails-2x/#comments Wed, 16 Jan 2008 16:21:08 +0000 http://www.terminally-incoherent.com/blog/2008/01/16/what-happened-to-scaffold-in-rails-2x/ Continue reading ]]> Rant time! This one may or may not ruffle some feathers, but frankly I don’t care. I’m probably late to this party and missed out on all the big flame wars, seeing how 2.0 was released at the beginning of December. Still, this kinda pissed me off. I just wanted to ask the Rails team what exactly was so horribly wrong with the scaffold method that they had to rip it out in the 2.x release?

Let me backtrack a second. Some of you probably don’t have a clue what I’m talking about, so let me paint you a picture. Just about every Rails book and online tutorial starts the same way. The first thing they teach you is to create a Rails app, create the database and then set up a model and a controller for some database table (let’s call it things).

Then, inside the things_controller.rb file you were supposed to do:

class ThingsController < ApplicationController
  scaffold :thing
end

This would generate invisible dynamic scaffolding that would take care of basic CRUD operations. The idea was to start clean, and then slowly chip away at the scaffolding by overriding it with your own content. For example, once you define an index method in the controller, and an index view, the listing is no longer handled by the scaffold.

At any point of the development process you could have dropped this single line into any of your controllers to get bare-bones basic functionality on the fly. It had that wow factor that was bringing all the boys to the yard, like the milkshake... Or something like that. My point is that it was easy to use, did not clutter your codebase with auto-generated gunk, and every fucking Rails tutorial on the planet was using it.
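The chipping-away mechanic boils down to plain old Ruby method lookup, which you can demonstrate without Rails at all: a module supplies the default, and defining your own method on the class shadows it. The class name and strings here are illustrative only.

```ruby
# Stand-in for what the dynamic scaffold provided: default behaviour
# mixed in from a module.
module DynamicScaffold
  def index
    "default scaffold listing"
  end
end

class ThingsController
  include DynamicScaffold

  # Once this is defined, the module's version is never called again -
  # that's the "chip off the scaffolding" move.
  def index
    "my hand-rolled listing"
  end
end

puts ThingsController.new.index  # => my hand-rolled listing
```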

Do you know what happens when you try to use the scaffold line in the current 2.0.1 release of Rails? First you get an error message. Then a little guy jumps out of the closet, kicks you in the balls, steals your wallet and then jumps out the window. Or maybe that just happened to me. Either way, it's not a pleasant experience.

At first I thought that maybe I just can't spell scaffold. But no - it just no longer exists. I actually had to google it cause I had no fucking idea. They just ripped it out and junked it. Yay backwards compatibility. I can't wait to see what useful features you will remove in Rails 3.0 to fuck up my work flow! Wohoo!

I know that scaffolding was just a neat feature which really didn't influence how a deployed Rails application performs or behaves. I know it was not essential, and I know it was only useful in the very early stages of development. Blah, blah, blah. Still, why remove it? The official release post doesn't even mention this change.

I looked through some mailing lists and forum posts, and on every single one I saw pompous jackoffasaurs taking turns enlightening the masses on how only idiots used the scaffold method, that it was abused, that it was a "crutch" and how the new way actually forces retards to look at the code. Really? You are serious?! They removed it out of the goodness of their hearts to save us heathens from our wretched scaffolding practices? Thank you there, Dijkstra - why don't you write me a "considered harmful" essay while you are at it? Geez...

So I'm asking - why was it removed? Perhaps someone can give me an objective, unbiased and non-elitist analysis of why it was not fit for 2.0. I just want to rationalize this. I want to find some sort of justification. And no, "cause u were doin it rong" just doesn't cut it for me. Give me a benchmark showing that dynamic scaffolding is slow, tell me about security concerns, about compatibility and migration issues. Tell me how the "new way" improves the work flow, improves coding practices and is generally better. Give me something. I might buy it - who knows. I just couldn't find any explanation like that anywhere out there. Everyone is simply viciously bashing n00bs and passive-aggressively flogging anyone even daring to ask about the scaffold method. :evil:

You can naturally still scaffold in 2.x, but only via code generation. And when you do it, the script generates everything, including the model, controller, views and database migrations. In fact, they dumbed it down to the point where it no longer even looks at the database to dynamically detect the correct fields and data types.

The 2.0 way is to scaffold first, then create the database. The script generates a migration based on the list of fields you specify as arguments. The same fields are also used for the views. If you don't specify any fields on the command line, you will get half-assed views that do nothing. The nice thing is that if you do specify correct fields on the command line, you end up with a complete set of controllers, views, migrations, etc. All you need to do is run the migration script, then fire up the server, and you are good to go. It does make sense conceptually - after all, in the real world you put scaffolding up before you start building. Maybe this is a superior approach, but I just don't see why we can't have it both ways.

Needless to say, I'm a little bit disappointed with this change. I can only imagine how confusing it must be for new Rails users who are trying to follow a simple tutorial from a book. I was confused as hell for a minute there, and the whole thing left me a bit turned off to rails. It's a great framework, and I'm still planning to use it on a few projects, but this whole scaffolding debacle really did a great job of dousing and subduing my enthusiasm. Maybe it's for the better and it will allow me to look at RoR objectively instead of jumping on the "OMG this is FANTASTIC!" bandwagon.

Wantonly breaking backwards compatibility on a whim and without any justification is a cardinal sin of software development, and frankly it scares me a bit that they chose to commit it. I'm not saying breaking backwards compatibility is always bad. Far from it - sometimes you need to ditch old code if it is holding you back. And that's perfectly acceptable. But fucking up chapter one of just about every published Rails book, and every single online tutorial, in one swooping move is not a good thing.

Since the beginning of December people have been reading the now-outdated tutorials, and scratching their heads trying to figure out why Rails is not working. It's a good thing everyone loves Rails these days. I bet most people will forgive and forget, but I think they really shot themselves in the foot with this. :P

[tags]ruby, rails, ruby on rails, ror, scaffold[/tags]

]]>
http://www.terminally-incoherent.com/blog/2008/01/16/what-happened-to-scaffold-in-rails-2x/feed/ 7
Aptana – First Impression http://www.terminally-incoherent.com/blog/2007/06/06/aptana-first-impression/ http://www.terminally-incoherent.com/blog/2007/06/06/aptana-first-impression/#comments Wed, 06 Jun 2007 07:36:29 +0000 http://www.terminally-incoherent.com/blog/2007/06/06/aptana-first-impression/ Continue reading ]]> Yesterday I spent some time talking to some dudes running a computer security company in NYC. They have used Ruby on Rails for one of their recent projects, and they were raving about it like madmen. I literally couldn’t make them shut up about RoR. Their enthusiasm was actually kind of contagious, and now more than ever I’m determined to jump into Ruby again.

So I downloaded Aptana (aka the former RadRails IDE), which is essentially Eclipse geared for web development. The vanilla version is just a generic web application IDE, but you can download a Rails-specific version – or if you already have the vanilla one, you can install the Rails components via Help→Software Updates.

Eclipse is probably the best Java IDE I have used in my life, so using a similar tool for Rails should really make the learning curve of RoR much smoother. I haven’t played with it much, but it looks awesome. It seemed a tad slow when starting up though – slower than Eclipse. Once the app loaded, however, it seemed responsive.

When I have a chance to play with it some more, I will post a review.

And yes, if you wondered, this is filler content. I have had writer’s block (blogger’s block?) for the last few days, so I’m trying to ride it out by posting about anything and everything.

Anyways, here are some questions for you:

Developers: Love or hate Eclipse? Discuss!

Web Developers: Do you also have a boner for Ruby on Rails like these dudes from NYC? Or do you think that the framework is way over-hyped?

Bloggers: What do you do when you open up your “Write Post” page, and you can’t think of anything even remotely interesting to write about? How do you deal with writers’ block situations?

[tags]ruby, ruby on rails, aptana, rad rails, writer’s block[/tags]

]]>
http://www.terminally-incoherent.com/blog/2007/06/06/aptana-first-impression/feed/ 9