javascript – Terminally Incoherent
http://www.terminally-incoherent.com/blog – I will not fix your computer.

Scraping Reddit’s Json for Cool Pics
Wed, 04 Jun 2014 – http://www.terminally-incoherent.com/blog/2014/06/04/scraping-reddits-json-for-cool-pics/

Did you know that you can add /.json to any Reddit URL to get a machine-readable JSON document you can screw around with? You can test it yourself. For example, go to /r/vim/.json. It works for pretty much any kind of URL, including multireddits. This has been part of the Reddit API for about seven centuries now, but I have never really paid attention. Until now, that is.

People sometimes ask me where I get inspiration for shit like Ravenflight. Part of the explanation is of course being a natural genius like I am. Part is hanging out with other nerds, because crazy random stuff is bound to come up in a conversation. Finally, part of it is the stuff I get exposed to on the internet. For example, I subscribe to a multitude of picture subs. Pretty much anything that has “Imaginary” in the title or is part of the SFWPorn thing (no, it’s not porn, it’s just pictures… Though /r/AnimalPorn should really consider changing the name to something that would raise fewer eyebrows).

One day I got a bright idea: what if I could create a multireddit of all these cool picture subs, and then scrape it for cool pictures and display them as a scrolling gallery. This way they would be much easier to browse (no need to click on the links or use RES to expand them) and I could distill away all the unpleasant Redditry. Like the obligatory: “you idiot, why isn’t this imgur” or “way to go posting imgur instead of linking to source, you idiot” fight that happens every time anyone posts a picture on the internet ever. But mostly it was just a cool idea… Once I realized reddit was serving JSON files for everything it was just too tempting not to mess around with them.

I briefly flirted with the idea of using an API wrapper such as RedditKit or Snooby and doing everything on the server side, but I quickly gave up on the idea. Part of it had to do with the fact that none of the wrappers I looked at actually did any rate limiting, which is one of the chief reasons why I wanted to use one in the first place. Syntactic sugar is really nice, but parsing JSON is relatively painless, whereas designing throttling and caching is exactly the kind of dumb and boring busy work I was trying to avoid. It also did not help that after an hour of impatiently flipping through the docs and running things in irb I still had no idea how to parse multireddits. It seems that 90% of the documentation was written with the expectation that people using these wrappers would be building funny comment-bots, and the remaining 10% of stuff was either self-explanatory or irrelevant.

Eventually I got annoyed and started fucking around in JSFiddle just to see if what I was thinking about was possible. It turns out it was, and that it was working remarkably well on the client side. You can see my prototype here:

Click on the results tab to see how it looks. This might be an impolite script, because I’m still doing no caching or rate limiting here. But since all the fetching and processing is happening on the client side I think I might be getting off on a technicality here. Even though the code might generate a lot of simultaneous requests, they will all technically come from different IP addresses so perhaps admins won’t yell at me for doing this.
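The gist of it is just a handful of lines of jQuery. Here is a minimal sketch of the idea (the subreddit and the element id are illustrative – this is not the exact code from the fiddle):

$(document).ready(function() {
    // grab the listing for a picture sub as JSONP
    $.getJSON("http://www.reddit.com/r/ImaginaryLandscapes/.json?jsonp=?", function(listing) {
        $.each(listing.data.children, function(i, post) {
            var url = post.data.url;
            // keep only direct image links and append them to the gallery
            if (/\.(jpe?g|png|gif)$/i.test(url))
                $("#gallery").append('<img src="' + url + '" alt="' + post.data.title + '">');
        });
    });
});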

I went ahead and dressed it up a little bit, and slapped a final, polished version at imaginary.pics for everyone to enjoy. So any time you want to look at some fantasy themed pictures of monsters and heroes, you can just type that into the address box in your browser and get inspired.

And yes, that’s a .pics domain, because why not. I like descriptive domains and I’m not afraid to use non-standard TLD’s if I can get away with it. You should have known that about me after I committed dontspoil.us back in the day. I’m quite excited about the crazy new TLD’s and being able to register all kinds of dumb domains. Btw, it took me like an hour to stop clicking the “go again” link on that website, so you’re welcome. I’m calling dibs on wank.bank when that becomes available: I’m gonna just copy-paste some buggy porn-tube-clone code onto that and make like $millions.

By the way, the dumb.domains site seems to have an affiliate deal with a somewhat shady registrar. If you are actually planning to buy a fun domain name, I’d recommend iwantmyname.com. Someone recommended them to me, and I really like the cut of their jib. Then again it might just be me. I previously bought domains through sites like Godaddy and Network Solutions so I was actually really confused when the registration process did not involve clicking through 17 pages of up-sell bullshit, and some lady’s cleavage was not being thrust into my face from advertising banners. Their site is well designed, everything is intuitive and they seem like cool people. I wish I knew about them years ago.

Where do you usually buy your domains? Are you currently sitting on any domains that you bought because they were cool, but never actually put them to a good use? Have you ever stupidly bought a domain just to host five lines of Javascript like I just did? If so, what did you host?

Modern Front End Development
Wed, 06 Feb 2013 – http://www.terminally-incoherent.com/blog/2013/02/06/modern-front-end-development/

Whenever you sit down to create a new website (or a web project), there is usually a litany of things you need to do before you can start hacking. And I don’t include brewing a strong cup of coffee in that list, although it is nevertheless very important. I’m talking about rather boring, menial tasks of “setting up” your environment to the point where a website is ready to be born.

The days when you could just drop an index.html in public_html/ are long gone. Modern front end development is much more involved. And for the most part this is a very good thing. It helps us create quality things rather than the kind of Microsoft Front Page generated “websites” that populated the Geocities ghetto in the nascent years of the interweb.

Typically, when I sit down with my cup of coffee, I go through the following motions:

  1. First, you need some HTML boilerplate to start off. It doesn’t really matter if you are designing a stand-alone website, a Twig or Django template or something else. Most of the time you need some robust starting point. So chances are you are going to be downloading the very popular HTML5 Boilerplate and unzipping it into your project folder.
  2. Next step is usually to ensure that you have some sort of nice UI framework – something to make nice buttons and pretty forms. This is by no means necessary but bare HTML forms are quickly falling out of grace and it is usually a good idea to add a little bit of flair to them. Plus, you do want a robust CSS grid implementation. You could fuck around and try to design your own layout with float elements but why bother if you could just take one that works and has been thoroughly tested. So chances are you will also be grabbing the Twitter Bootstrap package and adding that to your folder.
  3. Twitter Bootstrap consumes jQuery as a dependency, but does not include it in its package (and rightfully so). So in order to get the flashier bits of Bootstrap working you need to download jQuery as well.
  4. If there will be a lot of scripting involved in the project, you might want to also grab Underscore.js since it adds so much general purpose utility to your toolkit.
  5. JavaScript tends to be one of those languages which really, really benefit from linting code to ensure quality. Languages like Ruby or Python simply won’t allow you to be sloppy. JavaScript interpreters however are extremely permissive, and will swallow quite terrible code without batting an eyelid. The web is full of unmaintainable spaghetti code JavaScript and you should not be adding to the problem. So you probably want to make linting with something like JSHint a part of your workflow.
  6. In addition to linting Javascript you should also be linting your HTML. Compared to the (X)HTML4 craziness, HTML5 spec is actually mostly sane and rather nice. It is actually not difficult to write code that conforms to the spec, and rigid adherence to the rules actually improves the readability of your code. So you should be validating your pages as you write them.
  7. Javascript can create performance bottlenecks so if you are using large, expansive scripts you could really benefit from adding a minifier to your toolkit. Running your code through something like Uglify.js or Google Closure Compiler really goes a long way in making your project more responsive.
  8. As you are writing JavaScript, chances are you will want to test it. So you need a unit testing framework – so go ahead and download Qunit.js and add it to your project folder.
  9. At this point you are already 15-20 minutes into the process so you might as well git init in your project directory. But that also carries some weight. All of these components you just downloaded are now littering your project folder with over a dozen files. Nearly all of these files are standard boilerplate stuff you will never, ever touch. Now the question is, do you commit that mess to your repository? Doing so means you will have a lot of unnecessary static redundant garbage in there. And every time Bootstrap or jQuery releases a new version you will have to update and commit it with your project. This is not necessarily a big deal, but I (and many other people) don’t like it very much. I’d rather keep only the stuff I actually work on, or edit regularly in the repository. But when you do that, then it means you have to re-download all these prerequisites all over again when you decide to clone the repository or fork it.

All of this is problematic and tedious stuff web designers deal with every single time they start a new project. The funny thing is that there is no need to do all of this. Most of that decision making and file shuffling could be easily automated and streamlined.

For example, there is no need for you to download Boilerplate, Bootstrap and jQuery as separate chunks. You can actually download the whole kit and caboodle from Initializr.com. It comes pre-packaged as a single zip file that includes everything you need to set up a simple website (except Underscore and Qunit but those are more for JavaScript intensive web apps rather than regular sites).

This is incredibly useful, but it solves only one of our two (or three/four) problems. Initializr helps you scaffold a new project really quickly but does nothing in terms of managing the dependencies. You still end up with either junk in the repository, or a 12 step setup instruction process in Readme.md. How do you manage dependencies and static assets?

Bower

Personally, I’m very fond of Bower. It was designed by folks at Twitter and essentially it is the PHP Composer equivalent for front end development. For example, it allows you to do something like:

bower install jquery

This command will find the jQuery package in the Bower package repository, download it and put it inside the components/ folder. This is more or less how it looks:

[Screenshot: Installing jQuery with Bower]

Much like Composer, Bower is controlled by a single config file in the root of your project directory. That file is called components.json and it looks like this:

{
  "name": "YourProjectName",
  "version": "0.0.0",
  "dependencies": {
    "jquery": "~1.9.0"
  }
}

As you can probably guess, the dependencies section is the important one. You define them by package name and version number ensuring you get the correct version. Here is something Bower does but Composer cannot do: it is not limited just to pre-defined packages. Bower will happily fetch and install anything into the components/ directory as long as it has a web accessible URL. For example you can do something like this:

{
  "name": "YourProjectName",
  "version": "0.0.0",
  "dependencies": {
      "bootstrap": "http://twitter.github.com/bootstrap/assets/bootstrap.zip",
  }
}

This will fetch the official bootstrap download package right off their download page, and unzip it for you. This is not as robust and dependable as linking to pre-defined package but it allows you to use assets and scripts that do not have a bower compatible package as of yet. If the URL you specify is a Git repository, Bower will be more than happy to check it out for you.
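For instance, a dependency pointed at a Git repository would look something like this (the package name and repository URL here are just an illustration, not something this particular project needs):

{
  "name": "YourProjectName",
  "version": "0.0.0",
  "dependencies": {
    "jquery-hashchange": "https://github.com/cowboy/jquery-hashchange.git"
  }
}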

The components/ directory is configurable by a separate file .bowerrc which is also a JSON file:

{
    "directory": "path/to/components"
}

Why a separate file? Well, the idea is that some other package management tool (like Components, Jam or perhaps Volo) could consume the components.json config file (they all use a nearly identical format) and plugging Bower specific values there would ruin portability.

The nice thing about using bower is that you can now safely add the components/ directory to your .gitignore and never, ever worry about committing static boilerplate garbage into your repositories. If and when you clone it later, you can automatically fetch all the needed dependencies by doing:

bower install

Let me sum this up for you. Here is what you get:

  1. Tool that automatically tracks and downloads dependencies
  2. Ability to add arbitrary files as dependencies via URL
  3. No need to commit the dependencies with your code

The price you pay for all of this is putting a single json file at the root of your project. To me this is worth it.

That said, Bower is for managing dependencies only. It follows the unix philosophy of doing one thing, but doing it well. So it does not scaffold anything for you. Not only that, but if you are using Bower to manage dependencies you really can’t take advantage of the Initializr scaffolding… So you are back to square one, where you have Boilerplate, Bootstrap and jQuery installed as separate entities which you need to manually combine in your HTML files.

The other issue I mentioned above was linting, concatenating and minifying your scripts. The tools to do this are actually available as online services, but you shouldn’t be using these. The last thing you want to do when coding is to stop everything, open up a browser and start copy/pasting stuff into a textarea box on the web. Ideally you want to use a command line tool – something that is quick, easy and can accomplish all of these things in a single operation.

GruntJS

Enter GruntJS – a build tool for the front end. It belongs to the “configure rather than code” school of thought, but do not get discouraged. Unlike a lot of the XML based build tools, this one uses a config file that is basically a Node.js script. This means you can easily define new functionality as a Node.js function, and make it a build task. The neat thing is that you don’t have to. The bulk of your average grunt config file is a large JSON object that consists of built-in task names, the lists of files they operate on, and custom config options for each task.

What kind of tasks does Grunt support out of the box? It will:

  • Lint your JS code with JSHint
  • Concatenate your JS and CSS files
  • Minify JavaScript with Uglify.js
  • Watch your project folder and trigger build tasks on file change
  • Let you run a local mini-server on any port you wish
  • Integrate with Qunit and let you run tests from the command line using PhantomJS

If that wasn’t enough, it also sports a huge plugin database (just look at the home page) which greatly extends its already formidable functionality. There are plugins to validate HTML5 code, wrapper plugins for different JS and CoffeeScript linters, plugins that integrate other unit testing frameworks (like Mocha or Jasmine) and more. All of these are simple Node.js packages which can be installed via a quick npm command.
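For example, the HTML validation plugin I use later in this post installs like any other npm package:

npm install grunt-html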

Just to give you an idea, here is a sample grunt file – one that the tool might auto-generate for you if you run grunt init and answer a few questions:

module.exports = function(grunt) {
  grunt.initConfig({
    lint: {
      // lint everything in lib and test dirs
      files: ['grunt.js', 'lib/**/*.js', 'test/**/*.js']
    },
    qunit: {
      // run tests with PhantomJS on these files
      files: ['test/**/*.html']
    },
    concat: {
      // concat everything in lib, put it in dist
      dist: {
        src: ['lib/*.js'],
        dest: 'dist/scripts.js'
      }
    },
    min: {
      dist: {
        // same files as concat task
        src: ['<config:concat.dist.dest>'],
        dest: 'dist/scripts.min.js'
      }
    },
    watch: {
      // lint and test files as they are changed
      // use same files as lint task
      files: '<config:lint.files>',
      tasks: 'lint qunit'
    },
    jshint: {
      // configuration options passed to JSHint
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        boss: true,
        eqnull: true,
        browser: true
      },
      globals: {
        // don't complain about jQuery and Underscore
        $: false,
        jQuery: false,
        _: false
      }
    }
  });

  // Default task (executes when you run grunt without arguments).
  grunt.registerTask('default', 'lint qunit concat min');

};

As you can see it is fairly readable. Here is the best part: if you find yourself in a situation (and you will – you always will) where neither built-in tasks, nor the available plugins will help you, adding a custom build task is as easy as:

grunt.registerTask('mytask', "description of my task", function() {
    // do stuff here
});

In other words, Grunt is sort of a “best of both worlds” scenario. 90% of the time you just configure tasks, but when you reach that 10% condition where your build process is just too crazy and too specific to be covered by third party plugins, you can easily code around it.
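For example, a trivial custom task might look something like this (a sketch only – it assumes Grunt’s grunt.file helpers and reuses the dist/scripts.min.js path from the config above):

grunt.registerTask('banner', 'Prepend a build banner to the minified script', function() {
    // read the minified file, stick a timestamped comment on top, write it back
    var file = 'dist/scripts.min.js';
    var banner = '/* built ' + new Date().toISOString() + ' */\n';
    grunt.file.write(file, banner + grunt.file.read(file));
});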

It is my honest opinion that if you are not using Grunt, you are either doing way too much stuff manually, or conversely not doing nearly enough to ensure the high quality of your code. In either case you are fucking yourself over, so do yourself a favor and take 10 minutes out of your busy schedule to add it into your toolkit. It makes a huge difference.

And yes, you could accomplish everything Grunt does with a Makefile or a Rakefile but those are not as portable (welcome to the how do I *make* on Windows-ville, population: half of the fucking interweb) and they take longer to set up. Why? Because you have to code almost everything by hand. Grunt comes pre-configured with a nice set of tools, and has tons of plug-and-play plugins that are just a quick npm install away.

Now, if there was only a single tool that could scaffold, manage dependencies and do all the nice shit grunt does at the same time – that would be quite useful, wouldn’t it? It would be like having your personal servant handling all the mundane, boring stuff and letting you write code with ease and comfort. Too bad that such a tool doesn’t exi…

Wait, it does! What? Did you think I’m gonna write this whole long post and end it on a sad note? No, a personal servant for front end development exists and it is good.

Yeoman

The project is called Yeoman and it is quite a marvelous little tool.

Internally it uses Bower for dependency management and Grunt for build management but it is not just a simple wrapper. It ships with a whole kit of plugins and extensions, a grab bag of external tools, and meticulously written build scripts that just work. Out of the box, Yeoman will not only concatenate and minify JS and CSS – it will also:

  • Automatically scaffold HTML5 Boilerplate and manage dependency injection into index.html
  • Manage dependency loading using RequireJS AMD modules or new ES6 module proposal
  • Compile Coffeescript code to JS
  • Compile Compass style sheets to CSS
  • Optimize your images using JPEGTran and OptiPNG
  • Have a live-reload feature which will let you see your changes in the browser without reloading the page

All I can say is: download it and try it. It is a very, very neat tool. Let me just give you a little peek at how it operates. This is what you are going to see if you run the yeoman init command:

[Screenshot: Yeoman init prompt]

And based on your answers this is approximately what will be generated in your current directory:

[Screenshot: Yeoman generated files]

As you can see from the screenshots, HTML5 Boilerplate, Twitter Bootstrap, jQuery and Modernizr.js are included in a standard package. If you need something extra, you can add it using the yeoman install command which runs bower on autopilot for you:

[Screenshot: Installing Underscore.js with Yeoman]

I could probably spend quite some time talking up this tool, but perhaps it is better if I show you. Here is a guy scaffolding a very basic HTML5 page with Yeoman and Bootstrap in about 5 minutes. I’m showing you this video rather than some of the more official demos because those like to show off the Compass and CoffeeScript related features, whereas this guy just does a simple css site, and highlights a few interesting quirks of this tool in the process:

[Embedded video: scaffolding a basic HTML5 page with Yeoman and Bootstrap]

It is an extremely powerful tool, but it is also a very opinionated one. By that I mean it makes a lot of decisions for you. Whereas tools like Bower and Grunt are almost infinitely configurable, and get out of your way allowing you to structure your project the way you want, Yeoman likes to impose a structure upon you. Granted, that structure and the defaults it enforces are quite sane, and actually follow best practices and time tested patterns. Still, some people don’t like that.

Sometimes Yeoman can be a bit of an overkill. Especially when your project is a quick and dirty, single page HTML5 + Bootstrap website. For all intents and purposes such a project only really needs 3 files:

  • A components.json file with a list of dependencies
  • A minimalistic grunt.js
  • The index.html file with the actual website
  • Optional .gitignore and package.json files

Yeoman will give you that, but also a lot of additional cruft that can be extremely useful for bigger projects, but which you will probably end up chucking.

This is why I rolled my own little scaffold: Instant Website. What does it offer you? Well, let me go over the files it contains.

The web/ directory contains the actual stuff you will be deploying to the server. This includes index.html, which is standard, pristine, validating HTML5 boilerplate. It assumes that you will be using Bootstrap and jQuery and that these will be found in their own respective folders under web/components/. You will probably note that the repository does not contain these. Why? Because it does not need to. There is a components.json file in the root of the directory that looks like this:

{
  "name": "InstantWepbage",
  "version": "0.0.1",
  "dependencies": {
    "bootstrap": "http://twitter.github.com/bootstrap/assets/bootstrap.zip",
    "jquery": "~1.9.0"
  }
}

It tells Bower to download the latest twitter bootstrap zipfile, and the latest jQuery package. When you clone this repository all you need to do is run bower install and your static assets will fall into place immediately.

Next, there is my grunt.js file:

module.exports = function(grunt) {
  grunt.initConfig({
    watch: {
      files: ['index.html'],
      tasks: 'htmllint'
    },
    htmllint: {
        files: 'index.html'
    },
    server: {
        port: 3000,
        base: '.'
    }
  });
  grunt.loadNpmTasks('grunt-html');
  grunt.registerTask('default', 'server watch');
};

This essentially gives you three tasks: validate HTML with the grunt-html plugin, watch files and validate them as they are changed, and run a local server. Note that I don’t have any JS linting or minification in place because those things are a bit out of scope for this project. This is designed to scaffold very, very simple web pages – not web apps. For web apps, definitely use yeoman.

You have probably noticed that grunt-html is not part of the basic Grunt package. You need to install it as a plugin. This is where the package.json file comes in:

{
  "author": "",
  "name": "InstantWebsite",
  "version": "0.0.1",
  "dependencies": {},
  "devDependencies": {
    "grunt-html": "~0.2.x"
  }
}

That defines grunt-html as a development time dependency so that it can be downloaded and installed at your leisure. You are of course not required to use Grunt, and if you do not, you can skip installing the plugin altogether.

How do you use this thing? Like this:

git clone https://github.com/maciakl/InstantWebsite.git
cd InstantWebsite
bower install
npm install
grunt

Or, let me show it to you in a picture form because that’s always fun:

[Screenshot: Bootstrapping with InstantWebsite]

Now you have a basic scaffold, a server running on port 3000 and Grunt will watch your folder for changes and run all the HTML files through an HTML5 linter/validator ensuring your code is nice and non-broken.

I use this repository to quickly bootstrap and scaffold pages such as my teaching site. You know, stuff that does not require extensive scripting. One advantage of doing it this way, rather than with Yeoman (other than the lack of cruft) is that Yeoman usually ships with a very bare bones HTML template.

I actually made it a point to include a fairly well designed basic structure as part of my package. It’s actually the same template that ships with Initializr (which in turn is one of the standard Bootstrap examples from their webpage) so I’m not patting myself on the back here. It’s this one:

[Screenshot: Template that ships with InstantWebsite]

With Yeoman you start with a blank canvas and you build your page up (which is great if you want to get an original design). Things like Initializr and my Instant Website repository give you a basic structure that you can tear down and simplify to fit your needs. So it is basically a choice between building from the ground up, or re-purposing an already well designed site.

Making Ajax Driven Websites without Server Side Scripting
Mon, 05 Dec 2011 – http://www.terminally-incoherent.com/blog/2011/12/05/making-ajax-driven-websites-without-server-side-scripting/

As some of you may or may not know, I teach an introductory technology course at my old alma mater. I have been an adjunct professor there since 2007. Back when I was a student there I had a unix shell account so I had a nifty website that used Apache’s url-rewrite tricks, and copious amounts of PHP. It didn’t have any Ajax because it did not exist back then. Or rather it did exist, but we just called it Dynamic HTML with Java-fucking-script and it was not cool yet. The IT group that handled the shell accounts had an unwritten policy of forgetting to close them down after students graduated. I think it was mostly due to the fact that there was an account signup process to get one of these accounts, but no exit process by which they would get notified to close an account when a student graduated. Of course they made an exception for me because I abused the shit out of my account.

As an adjunct professor affiliated with the Computer Science department I felt that I needed to have some web presence. For that matter I needed a web presence hosted under my university’s domain name – not a personal website. That’s the only way to be legit. So, being stripped of my shell account was a big blow for me. I went back to the university BOFHs, hat in hand, humbled, pleading, apologetic. I begged them to bestow a shell upon me once again, and I vowed to only use it for good and the betterment of the university. Their answer was along the lines of “LOL! No!”

Of course there was a backup plan, because I always have backup plans. I learned about backups and backup plans the hard way back in 1998 when the CIH virus overwrote my BIOS. Since then I have been religious about making backups, and generally having backup/failover plans for when the shit hits the fan.

I ended up making a website on the WebDav driven, dedicated Novell NetStorage server. Everyone in the university gets an account there – it is tied to your campus wide, single-sign-in account, but few people know about it. The only problem is that while you get a public directory for hosting a personal website you don’t get any server side scripting. So the best you can do is static web-pages or client side scripting.

In May of 2008 I designed my website using some jQuery magic. But it had been a few years, the site had become stale, and I decided to overhaul it.

One of the problems of that previous design was that I was using URL’s that looked like they were designed for a server-side scripted, apache-url-rewrite driven site like this:

/?p=pagename
/?p=otherpagename
/?p=stuff

There was no server-side anything though. On each page load I would simply grab the GET request, scrape it for the value of the variable p and convert it to a real URL:

/pagename.html
/otherpagename.html
/stuff.html
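In other words, on every page load a bit of javascript did roughly this (a sketch, not the original code):

// scrape the value of p out of the query string and turn it into a real file name
var match = window.location.search.match(/[?&]p=([^&]+)/);
var file = match ? match[1] + ".html" : "index.html";  // the fallback file name is just an example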

Then I would use the jQuery load() function to asynchronously fetch that file and load it into a div. It worked, but it was backwards. Clicking on a link forced a page reload, and then an async request right afterwards. When I sat down to re-write it, I decided to avoid these page reloads. I also wanted URL’s that would make it clear to everyone that this is javascript land. So I decided to format them as such:

/#!/pagename
/#!/otherpage
/#!/stuff

Yes, yes – I know. Hashbangs are the root of all evil, they break the web, yadda, yadda, yadda. Keep in mind my little website was horribly broken to begin with so I’m just making it clear how broken it is. Not to mention that I went to great lengths to make this site fully accessible without Javascript.

You see, if you go to my page with your Javascript disabled, my links will look like this:

<a class="aj" href="pagename.html">Some Page</a>
<a class="aj" href="otherpagename.html">Other Page</a>
<a class="aj" href="stuff.html">Stuff</a>

So you can browse the entire thing with no problems. When you do enable Javascript though, this nifty function will run on page load:

function fudge_links()
{
  $("a.aj").each( function() 
  {
    h = $(this).attr("href");
    if(h.indexOf(".") > 0)
      $(this).attr("href", "#!/" + h.substr(0, h.indexOf(".")));
  });
}

If you can’t read jQuery-speak this will go through all the links on the page marked with class=”aj” and convert them to the hashbang format I showed you above. I can’t make the server rewrite your URL as you come in, so I rewrite all my links, dynamically.

That’s only part of the equation. The other part is to parse the hashbang links and asynchronously load content. How do you do this? Like this:

hashbang = location.hash;
hash = hashbang.substr(3);
$("#content").load(hash + ".html");

Easy-peasy, right? Well, not entirely. If this code fires as the page loads, the appropriate page will be loaded. Sadly, because load() is asynchronous, calling the fudge_links function either before or after this snippet means that any links on the freshly loaded page fragment will not be properly fudged. To make sure the links are properly altered, we have to run it in the load() callback, like so:

hashbang = location.hash;
hash = hashbang.substr(3);
$("#content").load(hash + ".html", function() { 
  if(window[hash]) window[hash](); fudge_links();});

This works, but there is a small problem. Hash links do not trigger a page reload, so you now have to capture the on-click event and run this code when the content comes in. Then you have to run it on page load for when someone accesses the link directly. Having this code in two places is less than optimal. Ideally, you would want to capture the hash change event and then fire that code. The problem is that not all browsers implement this event. Fortunately, Ben Alman wrote a jQuery plugin for that aptly called Hash Change. Upon including that plugin in your page, the entire javascript logic driving the dynamic hashbang links will look like this:

$(document).ready(function() 
{
  $(window).hashchange( function(){
    hashbang = location.hash;
    hash = hashbang.substr(3);
    $("#content").load(hash + ".html", function() { 
      if(window[hash]) window[hash](); fudge_links();});
  });

  // fire manually when someone hits the page directly	
  $(window).hashchange();
});

function fudge_links()
{
  $("a.aj").each( function() 
  {
    h = $(this).attr("href");
    if(h.indexOf(".") > 0)
      $(this).attr("href", "#!/" + h.substr(0, h.indexOf(".")));
  });
}

This works like a charm. Of course there is always a chance that someone will randomly stumble upon one of your static webpages – like pagename.html. By design that file is basically a page fragment. It has no header, no footer, and no navigation sidebars. It is just the static content that is supposed to be loaded into your page. If someone browses to it in a non-javascript browser, then that’s fine. But ideally, if javascript is available, we would want to redirect /pagename.html to /#!/pagename and load it dynamically so that it looks nice.

To do that, I made a tiny script called redirect.js and included it at the top of every page fragment. Here are the contents:

window.onload= function() {
  h = window.location.href;
   
  if(h.indexOf(".html") > 0)
  {
    f = "#!/" + h.substr(h.lastIndexOf("/")+1);
    f = f.substr(0, f.indexOf(".html"));
    window.location = "http://example.com/" + f;   
  }   
 }

Note that I had to use an absolute URL for the window.location call because if you use a relative hash link there it will not actually reload the page (because hash means same page).

You can see a live demo of this code in action here.

jQuery: Grid like table with keyboard navigation
Thu, 08 Oct 2009 – http://www.terminally-incoherent.com/blog/2009/10/08/jquery-grid-like-table-with-keyboard-navigation/

Guess what time it is, kids? It’s time for yet another boring, technical post. It’s that sort of week. Or rather it was – I usually queue my posts approximately 8-10 days in advance. This means that if I get hit by a bus one day, you won’t know anything is wrong until a week or so later. But I digress…

One of the web apps I maintain has a big table that looks like this:



<table>
  <tr>
    <td><input type="text" class="flat" name="col1"></td>
    <td><input type="text" class="flat" name="col2"></td>
    <td><input type="text" class="flat" name="col3"></td>
  </tr>
  . . .
</table>

Basically, every cell contains an input box. Each of them has an onChange trigger which will submit its contents to the database via an AJAX call. It basically allows the user to pull up a full page of records and edit them en masse without doing a lot of clicking.
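The change handler itself is nothing fancy – something along these lines (the URL and parameter names here are made up for illustration):

$("input.flat").change(function() {
    // push the edited value back to the server as soon as the field changes
    $.post("update.php", {
        field: $(this).attr("name"),
        value: $(this).val()
    });
});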

Unfortunately this setup is a pain in the ass to navigate. Tabbing over works fine but only in one direction. The most intuitive way to traverse such a structure would be with the arrow keys – you know, just like a big spreadsheet. Sadly it just doesn’t work that way. But we can make it work like that with just a pinch of jQuery magic:

$(document).ready(function() {
    $("input.flat").keypress( function (e) {
        switch(e.keyCode)
        {
            // left arrow
            case 37:
                $(this).parent()
                        .prev()
                        .children("input.flat")
                        .focus();
                break;
			
            // right arrow
            case 39:
                $(this).parent()
                        .next()
                        .children("input.flat")
                        .focus();
                break;

            // down arrow
            case 40:
                $(this).parent()
                        .parent()
                        .next()
                        .children("td")
                        .children("input.flat[name="
                            +$(this).attr("name")+"]")
                        .focus();
                break;

            // up arrow
            case 38:
                $(this).parent()
                        .parent()
                        .prev()
                        .children("td")
                        .children("input.flat[name="
                            +$(this).attr("name")+"]")
                        .focus();
                break;
        }
    });
});

How does this work? Each time you press a key while one of the input boxes is in focus, I check the key code. If the code corresponds to one of the arrow key values I switch focus. The statements up there look a bit convoluted so let me explain one in more detail. Let’s do the left arrow:

  1. First we use the parent() function to get the parent node of the input box. This gives us the <td> tag
  2. Second, we use the prev() to get the previous sibling of our <td> node
  3. Third, we get the list of children of that node – and we narrow it down to just input boxes with the specific class. Because of the way our table is structured, we know there will always be exactly one child there
  4. Finally we call the focus() function on all the children (remember, there is only one there) which moves the cursor over

Moving up and down requires extra steps. We call parent() twice to back out to the <tr> tag and grab the previous or next sibling of that. Then we grab all the <td> children, and their children, narrowing them down to input boxes with the same name as the one that triggered the focus change. Once again there will always be exactly one node there.

End result is intuitive keyboard navigation that allows you to move in the table very much the way you move around in an Excel spreadsheet. I thought this was a pretty neat effect so I decided to share it here.

I created a live demo to demonstrate this effect. Mess around with it and let me know what you think. It’s a simple little tweak, but makes a huge usability difference – at least in my opinion.

Does anyone have an idea how to make this prettier, faster, better, stronger and etc? This code is probably suboptimal, but it works. I actually tested it on large-ish data sets, and it didn’t seem to cause visible slowdowns. But of course I’m always open to constructive criticism.

Free Application Cloud Hosting Not Feasible?
Thu, 04 Jun 2009 – http://www.terminally-incoherent.com/blog/2009/06/04/free-application-cloud-hosting-not-feasible/

Back in December I wrote about AppJet – a fun and promising new Google App Engine like service. It was an incredibly well designed Javascript based framework that allowed you to deploy web applications on their cloud. The hosting was free, just like App Engine, but their interface was much more user friendly.

While App Engine makes you download a special toolkit, and memorize command line switches to deploy your code, AppJet utilized a web based IDE. You could create your application right there in your browser. What is more, you could look at the source code of existing apps, and even “clone” them at the press of a button. It was by far the most newbie friendly application hosting environment that I have seen on the web. Nothing else even came close with respect to ease of use, learning curve and intuitiveness of the interface.

Unfortunately that service is now gone. AppJet Inc decided to discontinue their cloud hosting framework to concentrate on their flagship product EtherPad which (as opposed to the hundreds of poorly written applications they were supporting) they can actually market for profit. I’m not sure what prompted this decision, but I can make an educated guess. Supporting the framework and its community was probably a resource drain that did create some hype and drive people to their website but ultimately made them no money. Some shrewd business-monkey probably noticed that and decided to axe the project and re-purpose its resources towards their money making product.

After all there is no such thing as free hosting (or free lunch) – someone has to pay for it. Google can support their App Engine because… Well, because they are Google. They probably have more money to burn on superfluous projects in their budget this month than I will earn in my whole life. So for them offering cloud hosting for applications is entirely feasible and realistic. If you are a small startup like AppJet was it’s a whole different story.

It’s sad to see this neat little service go away. Fortunately AppJet is not discontinuing their stand-alone server package so those who put time and effort into creating applications using their service can still migrate and self host them. Still, it is disheartening to see this happen to such a promising project. I was hoping that this type of hosting would catch on and other companies would jump on the bandwagon. AppJet gave me hope that in the near future it would be as easy and as straightforward to publish a personal application for free as it is to publish a personal web page right now. These days just about anyone can start a blog or a forum – there is nothing to it. You press a button and you are done.

Seeing AppJet make it possible to host a complex web application just as easily made me think that this bright future was just around the corner. I expected to see an explosion of cloud hosted application frameworks everywhere. The opposite has happened. AppJet folded and got out of the cloud hosting business. I guess they were simply ahead of their time.

I’m sure we will see this type of project again at some point. The whole idea of an in-browser IDE, one-button application cloning and rapid deployment is just too neat to abandon. Perhaps in a few years someone else will pick up this thread and hit it big. Perhaps it will be one of the big boys (Google, Yahoo, Microsoft). I’m pretty sure people would actually be willing to pay for this type of user friendly web interface. That said, AppJet can probably squeeze more profits from an Enterprise version of EtherPad than they would from paid hosting. That of course does not mean that such a service would not be profitable.

SQL Emulation Tool in Javascript Part 2
Tue, 19 May 2009 – http://www.terminally-incoherent.com/blog/2009/05/19/sql-emulation-tool-in-javascript-part-2/

As promised, I’m posting my semi-working SQL parser below. You should keep in mind that the code is still very immature and full of bugs. One of my reasons for posting it here is that people will start playing around with it and break it in numerous ways, helping me discover ways I can improve the code.

I think I covered most of the architecture in the previous post. One bit of code I wanted to share here is my metaprogramming function generator. This bit of code takes in a list of conditions, and will generate a javascript function that will test for these conditions and return a boolean value.

When my SQL parser evaluates the WHERE condition it constructs an array that looks a little bit like this:

[Screenshot: Conditions Array]

Each member object is composed of five elements. The first one is the name of the column, the second one is the value that is used in comparison, and the third one is the comparison symbol. The fourth value is just a flag which indicates whether or not the parser was able to successfully generate this object. It is needed there since malformed SQL could cause an only partially generated object to be added into this array. All new objects are generated with this flag set to false, and it is changed only after they are populated without errors. This lets us spot and ignore malformed and incomplete entries.

The fifth field is the logical operator which is used to join the comparison described in the current object to the previous one. The first element of the array will always have its logic field uninitialized. The following elements will have it set to a legal logical operator – this is enforced by the parser.
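For illustration, an array matching that description (using the same values as the example output further down) might look like this:

var conditions = [
    { first: "foo", second: "2",       action: ">",  logic: null,  full: true },
    { first: "bar", second: "\"poo\"", action: "<>", logic: "and", full: true }
];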

Once I have this array I pass it into this function:

function generateCondFunction (conditions) {
		
	var tmp = "cond = function(row) { if(";

	for(i in conditions)
	{
		current = conditions[i];

		if(!current.full)
			throw "Incomplete WHERE statement";

		if(current.action == "<>")
			action = "!=";
		else if(current.action == "=")
			action = "==";
		else
			action = current.action;

		if(current.logic != null)
		{
			if(current.logic == "and")
				tmp += " && ";
			else 
				tmp += " || ";
		}

		tmp += " row[\"" + current.first + "\"] " + 
			action + " " + current.second;
	}

	tmp += ") return true; else return false; };";
	
	eval(tmp);

	return cond;
}

The beauty of Javascript is that you can build a string, and then execute it as code. This is precisely what I’m doing here. I use the elements from my array to build a comparison function, then evaluate it and return a function pointer. For example, the array from the picture above will yield the following function:

function(row)
{ 
    if( row["foo"] > 2 and row["bar"] != "poo" ) 
        return true; 
    else 
       return false; 
}

I take this function pointer and pass it into the table rendering function. Then as I iterate over the elements stored in my mock db-objects I pass them through this function first to see if they ought to be displayed or not. This is of course not the most efficient way of doing things, but it works.
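In other words, the rendering loop does something along these lines (a sketch – renderRow stands in for whatever actually draws the HTML row, and the table structure is the one described in the previous post):

var rows = tables["person"];
for (var i = 0; i < rows.length; i++) {
    // only rows that satisfy the generated WHERE predicate get displayed
    if (cond(rows[i]))
        renderRow(rows[i]);
}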

Anyway, go check out the working demo here and let me know of weird bugs that you encounter. Please keep in mind that a lot of stuff still doesn’t work the way you would expect it. Here is the stuff that I know is still broken:

  • The canonical SELECT * FROM FOO doesn’t work – I have not implemented the wildcard selectors yet
  • If you don’t put spaces between listed columns, the thing will break
  • If you don’t put spaces in the WHERE condition, things will break. Typing in “foo < 2” works fine but “foo<2” does not
  • No common procedures (like NOW() and etc..) are implemented

Don’t report these things. Anything else though, will help me debugging the code. If you want to read through the whole thing, and nitpick or criticize my questionable coding practices you can find all the code here.

Also, Chris told me that someone already created an SQL parser in Javascript. As far as I can tell TrimQuery is far superior to my hackish code here, so if you want to use something like this in your project, you are probably much better off stealing code from there rather than from here.

SQL Emulation Tool in Javascript
Tue, 12 May 2009 – http://www.terminally-incoherent.com/blog/2009/05/12/sql-emulation-tool-in-javascript/

As you may know, I teach an introductory computer course in the evenings. One of the topics we cover is databases. Of course this is not a programming class, so there is a limited amount of things we can actually have them do. The students get to design and create databases in MS Access and learn a little bit of theory. Among other things we briefly cover SQL and show them how simple select or update statements work. Why? So that they get a better idea of the strengths and limitations of a database.

What I like to do is to put little tools that students can play around with on my website. For example, when we talked about the binary system I made a little conversion tool they could use to check their work. They would type in a number in decimal, hit a button and have it converted to binary and etc…
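That sort of tool is trivial to throw together in javascript – something like this (the element ids are made up for the example):

$("#convert").click(function() {
    var n = parseInt($("#decimal").val(), 10);
    // toString(2) gives us the binary representation
    $("#result").text(isNaN(n) ? "Not a number" : n.toString(2));
});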

For the networking section I made a tiny script that validates IP addresses. When we talked about encryption, I implemented a Caesar Cipher tool they could mess around with and encrypt/decrypt short passages. I also use it in class to show them special cases such as ROT13.

Mind you that I’m actually doing all of this with Javascript because I don’t actually have any server side scripting available. They took away my unix account after I graduated for some reason and they won’t give it back to me now. So I’m stuck with the flaky Novell NetDrive system which won’t allow me server side scripting. Of course I could host these scripts on one of my own servers, but I figured I might as well use the MSU resources that I have.

I didn’t really have any tools for the database lecture. A while ago I got this crazy idea of making a nice little database application for the students to play with. I mean, we already have Access, but I wanted to have something on my website. Something really simple – like one table populated with some random data and a text box to type in SQL statements.

The problem is that since I’m working on the client side only, there is really no way for me to connect to a real database. Now I already have some experience in getting by without server side scripting. This however is less about massaging the browser into doing what I want it to do, and more about emulating a database on the client. What does it mean?

Well, first I need a “database” of sorts. Since this is a demo tool anyway, I actually don’t care about persistence. So my db will simply be a javascript object that is initialized when the page loads. When the student refreshes the page, it will be reset back to default. This way I can allow people to run insert and update statements on it, without worrying about spam, obscenity and SQL injection. That’s easy enough. I could implement it like this:

 var tables = {
     "person" : new Array(
          { 
              "id"    : 1,
              "fname" : "John",
              "lname" : "Smith"
          },
          { 
              "id"    : 2,
              "fname" : "Jane",
              "lname" : "Smith"
          }
     )
 };

This way I can have multiple tables that can be easily accessed by name and I can easily pull stuff out of it like this:

// to get a specific table
var mytable = tables["table_name"];

// to get the first row in the table
var myrow = tables["table_name"][0];

// to get the "id" attribute of the first row
var myid = tables["table_name"][0]["id"];

It is also trivial to dynamically add and remove rows or even new tables to this model. Once I figure out what the query is supposed to do, I can easily modify it or extract the data and display it on the screen in a table form.
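For example (made up data, of course):

// add a new row to an existing table
tables["person"].push({ "id": 3, "fname": "Bob", "lname": "Jones" });

// add a brand new table, then remove it again
tables["pet"] = new Array({ "id": 1, "name": "Rex", "owner": 2 });
delete tables["pet"];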

The hard and interesting part of the project is of course parsing SQL. Unfortunately I don’t have much code to show yet – I’m still trying to find an ideal way to do it. I hacked up a very simple parser that grabs the user input, sanitizes it (removing HTML tags, newlines and etc) and then splits it on white space. I iterate over the array and try to classify the tokens. I first check whether or not the token is on the list of legal SQL keywords and symbols. If yes, I classify it as such. If not, I try to guess what it is, based on its position in relation to other tokens.

For example, if I find a token that is not a keyword, and I already found the word SELECT but haven’t found FROM yet, I can probably assume this is the name of a column that is to be used in the statement. If I found FROM but haven’t found a WHERE statement yet, I can assume that it is the name of a table instead. If I found WHERE, then I look for triplets – two words separated by a logical comparison symbol. I have an array of objects and I fill them with stuff as I find them.
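To make the positional guessing concrete, here is a rough sketch of that classification idea (illustrative only – the real parser does more than this, and this is not its actual code):

function classifyTokens(input) {
    // sanitize: strip anything that looks like an HTML tag, then split on whitespace
    var tokens = $.trim(input.replace(/<[^>]*>/g, " ")).split(/\s+/);
    var columns = [], table = null, seenSelect = false, seenFrom = false;

    for (var i = 0; i < tokens.length; i++) {
        var t = tokens[i].toLowerCase();
        if (t === "select")      seenSelect = true;
        else if (t === "from")   seenFrom = true;
        else if (t === "where")  break;                      // WHERE triplets are handled separately
        else if (seenSelect && !seenFrom) columns.push(t);   // between SELECT and FROM: a column name
        else if (seenFrom && !table)      table = t;         // after FROM: the table name
    }
    return { columns: columns, table: table };
}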

My parser is still very buggy and actually too permissive. For example, it doesn’t care if you separate your column names by commas or if you write the logical comparison in postfix or prefix notation (ie. “foo = bar” parses the same way as “= foo bar” or “foo bar =”). I’m basically flying through the tokens and grabbing whatever matches, ignoring everything that does not. In other words, my SQL parser acts more like a modern HTML parser – it tries to make the best out of mangled code.

This is not necessarily what I want. I want this tool to allow kids to learn a thing or two about SQL. So my next step is to add more rigid error checking to emulate the way a database would act. As I’m stepping through the SQL statement I will start throwing exceptions as soon as something is amiss.

I also need to figure out how to efficiently translate the WHERE statement into some code that could actually – you know, do something. Fortunately, Javascript has a nifty eval function which means I can probably dynamically build code based on the user input. This is something I’m going with right now, but there might be a better way to do it.

The code I have right now is very flaky so I’m not going to show it yet. I will post an update within a day or two though. I’m pretty sure I can make this code functional using the methods I mentioned above. In the meantime I’m really considering hitting the books and actually reading up on compilers and formal parsing methods. A real parse tree would probably work much better than my haphazard collection of objects and binary flags that denote their state. I was too lazy to actually implement trees though – so I went for something much simpler.

Any suggestions? How would you go about doing this? Any resources and reading you would recommend? I know there might be a better method of setting up this sort of a demo – but I sort of like this project. I actually want to write this parser. So help me out if you have done this sort of thing before.

For the record, I’m just using Javascript, jQuery and the $.string plugin which ports all the useful string functions from the Prototype framework into jQuery.

Display Loading Screen while Rendering Large HTML Tables
Mon, 06 Apr 2009 – http://www.terminally-incoherent.com/blog/2009/04/06/display-loading-screen-while-rendering-large-html-tables/

Here is a problem I’ve been having for a while now: large HTML tables take way too long to render, especially under IE. Why do you use tables, you ask? To display large amounts of data on the screen – usually in a sortable format using the Table Sorter plugin. It works out pretty nicely – once the table renders completely, and the sorting script kicks in, you have a very nice table on the screen that can be sorted in a fraction of a second and broken into multiple pages as needed. Since all the processing is done locally it is usually very fast.

The problem is that it usually takes the browser a few seconds to load these large tables. Firefox usually starts rendering the table as it loads, so users can at least see the first few dozen rows right away. Unfortunately, the sorting script runs after the whole thing loads so clicking on the column names does nothing. IE is even worse – it stalls until all of the HTML is loaded. I’ve been observing users dealing with these huge report pages that take a long time to load and they always get frustrated. They start clicking things, see no response and get annoyed. I’ve even seen people close the window and try again repeatedly, rather than wait the 5-10 seconds for it to load.

On the other hand, somewhere else there is a script that loads content asynchronously via AJAX call. Inside of the div where the dynamic content is loaded, I placed one of those spinning gif images accompanied by “Loading… Please Wait” message. Surprisingly, that script sometimes wouldn’t fire properly due to an internal bug – but people would still sit patiently watching the spinner for up to a minute or longer. Some would just minimize the window and continue doing other things and would check it every few minutes to see if it was done.

The lesson here is that loading screens are magical. I definitely needed one for all the stupid pages with the large tables. The only problem was: how do I display a loading screen that will disappear as soon as the page is loaded? I tested several ideas and found out that it is usually easiest to put some sort of overlay on the page. Something like this:

Loading... Please wait!
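In other words, a full-page div with a spinner image and that message. A rough sketch (the id just has to match whatever you hide() later, and spinner.gif is a stand-in for your favorite loading animation):

<!-- sketch: "loadpage" must match the id you hide() below; spinner.gif is a placeholder -->
<div id="loadpage" style="position: fixed; top: 0; left: 0; width: 100%; height: 100%;
    background: #fff; text-align: center; padding-top: 200px; z-index: 9999;">
    <img src="spinner.gif" alt="" />
    <p>Loading... Please wait!</p>
</div>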

This one actually covers the whole page. You would usually want to tailor yours so that it leaves the navigational elements (sidebar, menus, etc) intact and only covers the loading table. You place this somewhere on your page, and it covers the partially rendered content with a user-pacifying spinning animation. Then just go to where you load your table sorter and other jQuery crap and add this as the last line:

$(document).ready(function() {
    // hide the overlay once the whole page (table included) has finished rendering
    $("#loadpage").hide();
});

That’s it. Just hide the div – no additional logic is needed here. Because of the way most browsers render content and run Javascript, this command will not be triggered until your HTML table is loaded and good to go. Since I’ve put this simple trick into place, users have actually commented that the large reports seem to load faster. They are not – they take the exact same amount of time to load, but it seems faster because now they see a nice little message and a spinner instead of dealing with partially rendered content. This works pretty well in IE, which tends to stall and become unresponsive when working on especially large tables – when users see this message they usually back off and let the damn thing load.

I wish I had known this trick earlier. It would have saved my users much frustration.

IE, JQuery, Hovering and Option Elements http://www.terminally-incoherent.com/blog/2009/01/12/ie-jquery-hovering-and-option-elements/ http://www.terminally-incoherent.com/blog/2009/01/12/ie-jquery-hovering-and-option-elements/#comments Mon, 12 Jan 2009 16:04:13 +0000 http://www.terminally-incoherent.com/blog/2009/01/12/ie-jquery-hovering-and-option-elements/ Continue reading ]]> I hate Internet Exploder with a passion. Why does it need to be such a piece of non-compliant, non-standard crap? It’s not like I was trying to do something outrageous. It was a simple and logical script, but it wouldn’t work in IE.

I wanted to have a <select> box that would look something like this, only with more entries (over 100 of them, dynamically generated from the database):
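A cut-down stand-in for what I mean (the important bit, for the code further down, is that each option’s id holds the URL of the short description snippet for that item; the names and paths here are made up):

<!-- stand-in markup: the item names and description URLs are made up -->
<select id="items">
    <option id="descriptions/widget.html">Widget</option>
    <option id="descriptions/gadget.html">Gadget</option>
    <option id="descriptions/gizmo.html">Gizmo</option>
</select>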

Next I wanted to add a jQuery script which would load a short description of each item into a div when you hover your mouse over one of the <option> items inside the <select>. Sort of like a tooltip, but it would show up inside of the page. Initially I tried to do it the logical way, by invoking the hover method on the <option> element.

$(document).ready(function(){
    $("option").hover(function(){
        // grab the URL stored in the option's id and load it into the div
        $("#someplace").load($(this).attr("id"));
    });
});

It worked perfectly in Firefox, but IE did not even throw a Javascript error. It simply ignored the script without so much as a warning. Apparently <option> elements do not fire hover events in IE. The only way around it was to do it backwards: catch the change event on the <select> element and then find out which <option> was selected when the event fired.

$("select").change(function () {
    $("#someplace").load($(this).children("[@selected]").attr("id"));
});

This is not the worst solution – I have seen much uglier hacks. Still, the first script seems a bit more readable, at least to me. But what are you going to do. In retrospect, this turned out to be a blessing in disguise, because when I tested the original script in Chrome it did not work either. Apparently Webkit doesn’t think that <option> elements are to be treated like regular DOM elements either.

If you ever stumble upon this problem, this is probably the best way to get around it.

Serializing Javascript Objects into Cookies http://www.terminally-incoherent.com/blog/2008/11/25/serializing-javascript-objects-into-cookies/ http://www.terminally-incoherent.com/blog/2008/11/25/serializing-javascript-objects-into-cookies/#comments Tue, 25 Nov 2008 16:11:26 +0000 http://www.terminally-incoherent.com/blog/2008/11/25/serializing-javascript-objects-into-cookies/ Continue reading ]]> A while ago I mentioned that my school gives students and faculty Novell NetDrive accounts. This means we can all publish simple websites on their service, but get no server side scripting. This makes that space great for teaching students HTML, but relatively useless for everything else.

I previously described how to get a semi-presentable website constructed without server side includes. This is a sort of follow-up to that article, showing you a few more tricks. The big issue you run into working in a client-side only environment is persistence. The only way to accomplish it is to use cookies. So I set out to see what exactly I can store in a cookie.

It turns out I can store just about anything in there. For example, the code below generates a Javascript object with a few fields and functions, and then stores it in a cookie:
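Something along these lines (a sketch of the idea; I’m assuming the $.cookie plugin mentioned below is already loaded, and keep in mind that toSource is a non-standard method that only Firefox implements):

// create an object with a few fields and a function
var user = {
    name: "Luke",
    visits: 3,
    greet: function () { alert("Hello " + this.name); }
};

// serialize it, functions and all, and stuff it into a cookie
// (toSource is non-standard and Firefox-only; $.cookie comes from the plugin below)
$.cookie("user", user.toSource());

// later on: read it back and bring it to life again
var restored = eval($.cookie("user"));
restored.greet(); // alerts "Hello Luke"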



The toSource function does most of the work here. It serializes an object into a string which can then be deserialized using the eval function. Note that this will work on standalone functions too, since in Javascript they are first-class objects. Neat, eh?

I’m using Klaus Hartl’s jQuery cookie plugin because the native Javascript handling of cookies is retarded and error prone. It basically returns all the cookies you set as a single string containing a semi-colon separated list. As you can imagine, retrieving serialized objects which are bound to contain lots of semi-colons can be problematic. Klaus’s code was not designed for this sort of thing, but it uses the encodeURIComponent method to escape special characters. This works out great because it escapes the ‘;’ character inside of the serialized objects, making them easy to retrieve.
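To see why the escaping matters, compare a raw value with an escaped one:

// the native API hands everything back as one ';' separated string,
// so an unescaped ';' inside a value would cut that value short
document.cookie;            // e.g. "theme=dark; user=..."

encodeURIComponent("a;b");  // "a%3Bb": the ';' is escaped and safe to store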

I probably don’t need to tell you that the example above is a very, very bad coding practice. You really don’t want to eval any code that might have been tampered with. Since we are storing our object in a cookie which can then be modified on the client machine, we are really opening ourselves up to abuse. So while storing functions inside cookies is possible, I would not recommend it.

What you want to do is serialize your objects into JSON, and then safely parse them back while making sure you are actually getting back a JSON object rather than random code. There is a really good plugin that does this for you. So your code will look something like this:
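Something along these lines (again a sketch, using the plugin’s $.toJSON and $.secureEvalJSON together with $.cookie):

// store plain data only - no functions this time
var settings = { theme: "dark", fontSize: 14 };

// serialize to a JSON string and drop it into a cookie
$.cookie("settings", $.toJSON(settings));

// read it back; secureEvalJSON will throw if the string is not actually valid JSON
var restored = $.secureEvalJSON($.cookie("settings"));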




The secureEvalJSON method is much safer than just running eval on arbitrary code. The plugin also has an “unsafe” eval version, but I would not recommend using it unless you can guarantee the cookies haven’t been tampered with (and in most cases you can’t).

There is a small caveat you need to keep in mind: the space you have to work with is very limited. For example, IE only allows you to store around 4KB of data per domain. This is not per cookie, but the total space you have for all your name-value cookie pairs. This means that sticking a huge JSON object (or many smaller ones) into a cookie just won’t work. IE will silently drop cookies that exceed this limit. So use this technique sparingly and, if you can, compress the data as tightly as possible.
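A crude way to guard against that (a sketch; the exact limit varies by browser, and the encoded cookie will be a bit bigger than the raw string):

var payload = $.toJSON(settings);
if (payload.length < 4000) {
    $.cookie("settings", payload);
} else {
    // too big: trim the object down, or skip persistence altogether
}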
