hax – Terminally Incoherent http://www.terminally-incoherent.com/blog I will not fix your computer. Wed, 05 Jan 2022 03:54:09 +0000 en-US hourly 1 https://wordpress.org/?v=4.7.26 Ubuntu Hardy on Compaq Presario 1240 (Living Without X) http://www.terminally-incoherent.com/blog/2008/06/10/ubuntu-hardy-on-compaq-presario-1240-living-without-x/ http://www.terminally-incoherent.com/blog/2008/06/10/ubuntu-hardy-on-compaq-presario-1240-living-without-x/#comments Tue, 10 Jun 2008 15:05:34 +0000 http://www.terminally-incoherent.com/blog/2008/06/10/ubuntu-hardy-on-compaq-presario-1240-living-without-x/ Continue reading ]]> Well, the Nethack Server is gone. I know that a couple of people played on it, and Anthony even ascended, but it is now gone. It had actually been offline for months and I didn’t even notice. It got knocked off the network when I switched from WEP to WPA a few months ago. Then it crashed hard and sat there for another month or two without a reboot. Now I have some nice dead pixels on the LCD, which I guess might be a sort of screen-burn side effect or something. It was acting slow and erratic, so I nuked it. I should have saved the high scores and the bones files, but hindsight is always 20/20. So big apologies to anyone who played on the server. I might resurrect it at some point, but the old Presario with the flaky wifi is just not a very reliable platform for something like that.

It is a good platform for experimenting with bare bones installations. I always wanted to set up an X-less machine and see how usable it would be. I already have a library of very useful CLI apps so really this should be a great experiment. I briefly messed around with a net install of Debian Etch but in the end I went right back to what I know and decided to give Hardy a whirl. It would be a nice test to see how the brand new system stacks up on very old hardware.

So I grabbed the mini ISO and ran it with the cli boot parameter. Why the mini ISO? To make a long story short, I wasted four CDs burning corrupted copies of the alternate install CD. What I wanted was a bare bones CLI system, but every time I started the installation some file turned out to be corrupted. I guess this will teach me to check md5 checksums before burning next time. Either way, the minimal install ISO seemed like a good solution – weighing in at 8MB, it was unlikely to get corrupted during transfer.
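For the record, the verification I skipped is trivial. Here is the pattern, with a throwaway stand-in file since the real ISO and MD5SUMS file names will vary:

```shell
cd "$(mktemp -d)"
echo 'pretend this is the iso' > mini.iso   # stand-in for the real image
md5sum mini.iso > MD5SUMS                   # normally you download MD5SUMS
md5sum -c MD5SUMS                           # prints "mini.iso: OK" on a good copy
```

If the file got mangled during the download, md5sum -c reports FAILED instead, and you just saved yourself a coaster.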

Installation was uneventful, but a tad long since a lot of packages needed to be downloaded. I ended up with a lean CLI system. Since I initially did not want to install X on it, the first thing I did was to bump up my tty resolution. This is done by editing the grub menu:

sudo vim /boot/grub/menu.lst

You will see something like this:

title      Ubuntu, kernel 2.6.15-20-386
root      (hd0,4)
kernel    /boot/vmlinuz-2.6.15-20-386 root=/dev/hda5 ro
initrd    /boot/initrd.img-2.6.15-20-386
savedefault
boot

The Compaq can only handle 800×600 with 24 bit color at most, so I add the following to the kernel line:

vga=789

The line should look like this once you are done with it:

kernel    /boot/vmlinuz-2.6.15-20-386 root=/dev/hda5 ro vga=789

Now how does 789 relate to my resolution? Good question! You can find the other magical values you can add to that line listed on the Ubuntu FrameBuffer wiki.
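If memory serves, the commonly quoted vga= values (the VESA mode number plus 0x200, written in decimal) line up like this:

```
             8 bit   16 bit   24 bit
 640x480      769     785      786
 800x600      771     788      789
 1024x768     773     791      792
 1280x1024    775     794      795
```

So 789 gets you 800×600 at 24 bit color, which is exactly what this panel maxes out at.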

Next thing was setting up the wifi connection, which was not that difficult. I ended up with a fully functional CLI system: I could browse the web with w3m, use Midnight Commander as my file browser, and so on. Btw, this takes me way back:

Compaq Presario 1240 Running Hardy

I sat there staring at this screen, reminiscing about all the good times I had working with Norton Commander back in the day. Oh man, those were the days. I vividly remember the excitement of unpacking a pirated copy of Wolfenstein 3D or Alone in the Dark and running it from the Norton interface. I was young, naive, and just discovering the wonders of technology. Unfortunately, I seem to be the only person with fond memories of that time. When my brother saw the screen, he made a face and exclaimed: “man, I used to hate that thing”. But how can you hate the commander? I mean, it’s a fucking commander! It commands respect! Here is a better shot of the app itself running in 800×600 mode on the tty:

Midnight Commander on Hardy

My cousin had a similar reaction. She pretty much said “You know, it’s 2008 – you can use Windows Explorer now”. :cry: No, thank you – I will stick with linux and my ultra light applications that hardly take up any memory and/or CPU power to run.

But I wanted a little bit more flexibility. Swapping between TTYs all the time is not always optimal. Sometimes I want to split my screen into 2 or 4 panels with different content in each. Screen does that, but it does not do vertical splits out of the box. You can get that functionality by applying a patch and recompiling, but I decided to try something purpose-built instead. So I turned to DVTM, which is billed as a tiling window manager for the console. It’s not in the repositories, so you have to compile it yourself:

aptitude install make build-essential libncurses5-dev
wget http://www.brain-dump.org/projects/dvtm/dvtm-0.4.1.tar.gz
tar -xzf dvtm-0.4.1.tar.gz
cd dvtm-0.4.1/
make
sudo make install

The result is pretty neat. You can see me running a split panel session with htop (upper left), aptitude doing updates (upper right), and w3m (bottom):

DVTM Running in a TTY

DVTM has two major disadvantages. First, you lose all the neat framebuffer features – for example, images being displayed in w3m. No drawing or image viewing utilities will work in it, just like they don’t work in screen. Secondly, DVTM is not very configurable. There is no dot file (like .dvtmrc or something) that you could use to customize settings, set a default layout and so on. The author suggests changing the configuration in one of the .h files, and then recompiling the whole thing every time you want to change something. :( Not really what I want to do.

Finally I broke down and decided that running a minimal X configuration with Ratpoison is not much more taxing on the hardware. I still intended to use mostly CLI apps (since that’s really what Ratpoison is all about) but the possibility of actually being able to run some GUI apps (such as Dillo or Kazehakase) was a nice touch. Installing Ratpoison was easy:

aptitude install x-window-system-core ratpoison
aptitude install xserver-xorg-video-vesa xserver-xorg-input-mouse

I got a lot of scrolling text on my barebones install (over 30 MB of X-related packages), but installation was swift and simple. It was not perfect, though. If you follow these instructions you will notice that the display is a bit “gritty” and pixelated under this setup. This bothered me. Running lshw told me that this system actually has a NeoMagic NM2160 MagicGraph 128XD card on board, and a quick aptitude search showed there is a package which caters to this brand of graphics device. To get full use out of the card you will need to do:

aptitude install xserver-xorg-video-neomagic

After that, the image was crystal clear and sharp as ever.

Now my rig boots into CLI mode and I can keep working there. If I need X for some reason I can start it manually; it actually takes about 2 seconds to come up. Ratpoison is rather nifty, and not much different from dvtm, except that it uses X and is completely configurable. Here is a screenshot of me running htop (upper right), Midnight Commander (upper left) and Opera (bottom):

Ratpoison!

Here is something that I did not expect or anticipate: Opera actually outperforms Kazehakase on this machine. It’s bizarre but true. I guess the Gecko engine was choking on the limited memory or something.

I might try another window manager like awesome at some point in the future, because of its floating window feature. Ratpoison only knows how to tile windows, which is fine but not really practical in the long run. As you can see in the screenshot above, the 3-way split is already kind of cluttered. Add 2-3 more panels and you won’t be able to see anything in any of them. At a mere 800×600, the screen estate on that thing is priceless.

Anyway, for those interested my .ratpoisonrc looks like this:

bind c exec rxvt -fn "Lucida Console-8"
bind b exec links2 -g
wrap off
escape Pause

exec xsetroot -solid black -cursor_name left_ptr
exec xli -onroot -fullscreen /home/luke/eva.jpg

Note that I remapped the escape key from Ctrl+T to Pause, mainly because of the keyboard layout. On the laptop keyboard, Ctrl+T becomes a two hand salute due to the weird positioning of the keys. Pause, on the other hand, is strangely within reach of my right hand. Go figure.

One thing that did not work well under Ratpoison was Midnight Commander. Something about rxvt and mc not agreeing on the locale settings. It was easily resolved by adding the following line to my .bashrc:

export LANG=C

I also bound a key for links2, since it is really the faster and more responsive browser on this machine. If needed I can always launch Opera, but for quick lookups links2 is more than enough.

So there you have it – a minimalistic system which runs mainly CLI apps. It does have X, but X does not need to be running. And when it is, it uses an ultra light window manager that is also stripped down for simplicity and performance. What do you think?

[tags]hardy, ubuntu hardy, midnight commander, dvtm, screen, ratpoison, opera, x, linux[/tags]

]]>
http://www.terminally-incoherent.com/blog/2008/06/10/ubuntu-hardy-on-compaq-presario-1240-living-without-x/feed/ 9
Adding Comments to Tumblr http://www.terminally-incoherent.com/blog/2007/09/02/adding-comments-to-tumblr/ http://www.terminally-incoherent.com/blog/2007/09/02/adding-comments-to-tumblr/#comments Sun, 02 Sep 2007 18:53:01 +0000 http://www.terminally-incoherent.com/blog/2007/09/02/adding-comments-to-tumblr/ Continue reading ]]> Someone asked me about this so here it is. I realized that adding comments to your Tumblr is not as straightforward as it may seem. Here is how you do it:

  1. Sign up for a Haloscan account
  2. Go to the Install tab and choose Other/Manual to get the javascript code to paste on your site
  3. Switch your Tumblr to a custom layout so that you can edit the code
  4. Paste the first Haloscan snippet somewhere in the head
  5. Edit and paste the second snippet in your post blocks

The last point is the one that confuses people, so let me explain. Tumblr uses special tags to define blocks of markup that are rendered differently. For example, take the following block:

{block:Regular}
	
{block:Title}

{Title}

{/block:Title} {Body}
{/block:Regular}

It tells Tumblr how to display a single post (here we are looking at a regular text post). Note that all the stuff in braces is treated as a special Tumblr tag. For example, {Permalink} will expand to the full permalink URL of the given post, and {Title} will expand to its title.

Haloscan requires that you somehow create a unique identifier for each of the pages for which you want comments and pass it into their javascript calls. There are different ways to do it, but we can exploit Tumblr’s dynamic tags to automatically get a unique post ID for Haloscan. I simply chose to use the {Permalink} tag for this purpose.

My edited regular block looks like this:

{block:Regular}
	
{block:Title}

{Title}

{/block:Title} {Body}
{/block:Regular}

Note how I’m plugging the {Permalink} into all the places where Haloscan requires you to put the unique post ID. At runtime it will expand to the full URL, which is unique for each post. You can actually just copy and paste this snippet to your Tumblr – it does not include any account specific info. Repeat this for every block that you want to enable comments for. Tumblr defines different blocks for regular posts, links, conversations, videos, images and so on.
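For reference, the Haloscan bits that get spliced into the block look roughly like this. I am reconstructing this from memory of their Install tab, so treat the exact function names as an approximation and copy the real code from your Haloscan account:

```html
<!-- snippet #1, somewhere in the <head>; USERNAME is your account name -->
<script type="text/javascript" src="http://www.haloscan.com/load/USERNAME"></script>

<!-- snippet #2, inside each {block:...}, with {Permalink} as the unique ID -->
<a href="javascript:HaloScan('{Permalink}');" target="_self">
  <script type="text/javascript">postCount('{Permalink}');</script>
</a>
```

The important part is simply that wherever their instructions say "put your unique post ID here", you put {Permalink} instead.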

Anyway, this is all there is to it. I hope you find this helpful. :)

]]>
http://www.terminally-incoherent.com/blog/2007/09/02/adding-comments-to-tumblr/feed/ 25
Run PNGOUT on all PNG Files in a Folder http://www.terminally-incoherent.com/blog/2007/07/17/run-pngout-on-all-png-files-in-a-folder/ http://www.terminally-incoherent.com/blog/2007/07/17/run-pngout-on-all-png-files-in-a-folder/#comments Tue, 17 Jul 2007 15:33:54 +0000 http://www.terminally-incoherent.com/blog/2007/07/17/run-pngout-on-all-png-files-in-a-folder/ Continue reading ]]> Here is a quick registry hack to add a new entry “Run PNGOUT on Folder” to your context menu. It will iterate through all the files in the directory, and run PNGOUT on every PNG file it finds. It will leave all the other files alone, so you don’t have to worry about errors, or data corruption. Simply copy the code below:

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Folder\shell\PNGOUT]
@="Run PNGOUT on Folder"
[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Folder\shell\PNGOUT\command]
@="cmd.exe /c \"TITLE Running PNGOUT on %1 && FOR /r \"%1\" %%f IN (*.png) DO pngout \"%%f\" \""

Paste it into notepad, save it as a .reg file, and run it by double clicking it. It should install the extension for you. This comes in handy if you need to optimize a whole folder of files at once. The script will recurse into subfolders, so you can run it on the root of your website, then go to sleep and wake up to a fully PNG-optimized web page.
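Incidentally, the Unix equivalent of that FOR loop is a find one-liner (pngout ships Linux builds too). The command below just prints the matches; swap -print for -exec pngout {} \; to actually run the optimizer, assuming pngout is somewhere in your PATH:

```shell
# Recurse from the current directory and list every PNG, leaving all
# other files alone. Replace -print with: -exec pngout {} \;
find . -type f -iname '*.png' -print
```

The -iname match is case insensitive, so FOO.PNG gets picked up as well.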

If you want to remove this context menu item at some point, repeat the whole procedure with:

Windows Registry Editor Version 5.00
[-HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Folder\shell\PNGOUT]
[-HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Folder\shell\PNGOUT\command]

Please note that this script assumes that PNGOUT.exe is somewhere in your PATH. If it’s not, move it into a folder that is before running the script.

Update 07/26/2007 09:58:20 PM

Just for the sake of completeness, Scott Hanselman created a very similar registry hack that lets you run PNGOUT on a single file.

[tags]pngout, png, png compression, png size reduction, registry, hax[/tags]

]]>
http://www.terminally-incoherent.com/blog/2007/07/17/run-pngout-on-all-png-files-in-a-folder/feed/ 1
Get a md5 of a string on a command line http://www.terminally-incoherent.com/blog/2007/01/10/get-a-md5-of-a-string-on-a-command-line/ http://www.terminally-incoherent.com/blog/2007/01/10/get-a-md5-of-a-string-on-a-command-line/#comments Wed, 10 Jan 2007 20:06:43 +0000 http://www.terminally-incoherent.com/blog/2007/01/10/get-a-md5-of-a-string-on-a-command-line/ Continue reading ]]> One of the web applications I maintain stores md5 hashes in the database instead of the actual passwords. This is a good practice – passwords are not stored in plain text, and knowing the hash still does not give you direct access to the system.

I whipped up this tiny Perl script to easily get md5 hashes of strings on the command line (for example, in case I need to go in and change some password directly in the database).

#!/usr/bin/perl
use Digest::MD5 qw(md5 md5_hex md5_base64);
print(md5_hex($ARGV[0]) . "\n")

This script takes a string as a parameter and prints the md5 hash of that string to standard output.
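If Perl is not handy, coreutils can do the same trick. The only gotcha is the trailing newline: hash the bare string with printf '%s' (or echo -n), otherwise you get a different digest than the one sitting in the database:

```shell
# md5 of the literal string "password", with the newline excluded
printf '%s' 'password' | md5sum | cut -d ' ' -f 1
```

That prints 5f4dcc3b5aa765d61d8327deb882cf99, the well-known md5 of “password”.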

[tags]md5, perl, hash, passwords, hashing, scripting[/tags]

]]>
http://www.terminally-incoherent.com/blog/2007/01/10/get-a-md5-of-a-string-on-a-command-line/feed/ 5
Tweaking Firefox User-Agent Value http://www.terminally-incoherent.com/blog/2006/11/18/tweaking-firefox-user-agent-value/ http://www.terminally-incoherent.com/blog/2006/11/18/tweaking-firefox-user-agent-value/#comments Sat, 18 Nov 2006 21:29:08 +0000 http://www.terminally-incoherent.com/blog/2006/11/18/tweaking-firefox-user-agent-value/ Continue reading ]]> Recently I noticed that on most of my comments, the browser detection plugin identifies my OS as Linux. When I went back and looked at some of my past comments, they were correctly identified as Ubuntu Linux. This confused me a little bit, so I decided to look at my User-Agent string.

In Firefox you can see the whole string by typing in about: in the address bar. In my case this was:

Mozilla/5.0 (X11;U;Linux i686;en-US;rv:1.8.1) Gecko/2006101022 Firefox/2.0

In the past I used Firefox 1.5 which came straight from the Hoary repository. Naturally, all official Ubuntu builds have an appropriate User-Agent variable set. Unfortunately the only Ubuntu version that gets an official FF 2.0 package is Edgy. Since I’m a huge Firefox fanboy I ended up downloading and using the generic Mozilla.org version which works fine, but it does not reflect my OS in the User-Agent string.

This bothered me a little because I kinda want to support my OS of choice by advertising it via the User-Agent. This way it ends up in people’s system logs, gets included in statistical calculations, and so on. But mostly I wanted to have the Ubuntu icon underneath my comments here on Terminally Incoherent.

How do you tweak your User-Agent string? The same way you change other Firefox config settings: about:config.

I personally simply changed the value of general.useragent.extra.firefox from Firefox/2.0 to Firefox/2.0 Ubuntu Linux. This was enough for my plugin to include the correct icon under my comments.

Now my User-Agent looks like this:

Mozilla/5.0 (X11;U;Linux i686;en-US;rv:1.8.1) Gecko/2006101022 Firefox/2.0 Ubuntu Linux

Of course this only changes a tiny part of the User-Agent string. You can completely override it by creating a new string value called general.useragent.override. You can essentially set your User-Agent to anything you want. Is that a good idea though? Nope! It’s horrible.

You see, a lot of websites use the User-Agent to serve you an appropriate stylesheet. Good web designers with complex layouts are likely to have one stylesheet for IE, one for Gecko based browsers and Opera, and one for “other”. The “other” stylesheet will likely be stripped down and formatted for legacy browsers – in other words, it will likely look ugly. Not everyone does this, but if you start tweaking the User-Agent string too much, your browser may not be able to properly render some pages.

But if you don’t care about that, go ahead and knock yourself out. Now you know how. :mrgreen:
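By the way, if you would rather not click through about:config, the same prefs can live in a user.js file in your Firefox profile directory, which gets applied at every startup. A sketch, using the pref names discussed above:

```javascript
// user.js in the Firefox profile directory; read on every startup.
// Append the distro name to the extra.firefox token:
user_pref("general.useragent.extra.firefox", "Firefox/2.0 Ubuntu Linux");

// Or (not recommended, for the reasons above) override the whole string:
// user_pref("general.useragent.override", "Mozilla/5.0 (X11; ...)");
```

Handy if you rebuild your profile often and don’t want to redo the tweak by hand.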

[tags]firefox, firefox 2.0, user agent, browser[/tags]

]]>
http://www.terminally-incoherent.com/blog/2006/11/18/tweaking-firefox-user-agent-value/feed/ 7
Hard Links and Junctions in Windows http://www.terminally-incoherent.com/blog/2006/09/06/hard-links-and-junctions-in-windows/ http://www.terminally-incoherent.com/blog/2006/09/06/hard-links-and-junctions-in-windows/#comments Thu, 07 Sep 2006 03:15:34 +0000 http://www.terminally-incoherent.com/blog/2006/09/06/hard-links-and-junctions-in-windows/ Continue reading ]]> Did you know that [tag]NTFS[/tag] supports [tag]hard links[/tag]? This is an interesting tidbit about Windows that not many people know about. But first, let me quickly explain what a hard link is to the clueless Windows people.

Imagine the following file: C:\temp\a.txt. Where is that file really located? If you said the temp folder, you are only partially right. Yes, the logical location of that file is in that folder, but its real physical location is best expressed in terms of the tracks and sectors it occupies on the hard drive. Your file system maps the physical location to the logical location for your convenience. This is usually accomplished via some sort of lookup table (the File Allocation Table in FAT, or the Master File Table in NTFS).

So what happens when two or more entries in that table point to the same physical file? Nothing spectacular really – you simply get several logical pointers (or hard links) to the file that behave exactly the same. If you change permissions on one of them, all other pointers will be affected. When you delete a hard link, you simply remove one entry from the table. The physical file is only deleted when all the links are gone.
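This bookkeeping is easy to watch in action on the Unix side with ln and stat. A quick sketch (GNU stat shown; the %h format prints the link count):

```shell
cd "$(mktemp -d)"
echo hello > a.txt
ln a.txt b.txt       # a second table entry pointing at the same data
stat -c %h a.txt     # link count is now 2
rm a.txt             # removes one entry only...
cat b.txt            # ...the data is still there: prints "hello"
```

Only after b.txt is deleted too does the physical file actually go away.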

Unix users have been utilizing this nifty functionality for ages now, but the Windows crowd only got it recently, in the NTFS file system. So how do you make a hard link? You use the [tag]fsutil[/tag] command:

fsutil hardlink create LINK TARGET

In the above LINK is the name of the created hard link, and TARGET is the file you are linking to.

There is one disadvantage to this method: you can’t create hard links to directories. That functionality is present in the [tag]Windows API[/tag] though – it is called a [tag]junction[/tag]. Unfortunately, Microsoft does not provide any out of the box support for creating junctions. The official Microsoft tool is [tag]linkd.exe[/tag], which is part of the Windows Server 2003 Resource Kit. As with many other [tag]Microsoft[/tag] utilities, this one works just fine on an XP box – so don’t be scared by the server part. The usage is very simple:

linkd LINK TARGET

If you don’t want to be bothered downloading the whole resource kit, you can just grab the 16Kb [tag]sysinternals[/tag] Junction app. It has the same functionality as linkd, but it is free and comes with complete source code. The syntax is almost identical:

junction LINK TARGET

Unlike hard links, junctions can be easily identified using the dir command. They will show up on the list marked as <JUNCTION> rather than <DIR>:

09/06/2006 10:50 PM <DIR> TEMP
09/06/2006 10:49 PM <JUNCTION> test
09/06/2006 10:53 PM <JUNCTION> test2
09/22/2005 10:31 PM <DIR> Themes

This leaves out soft links, which for now still seem to be exclusive to the unix/linux world. Windows does implement similar functionality with its shortcuts. Unfortunately, shortcuts can’t be used transparently on the command line, which makes them only marginally useful. Perhaps in a few years Microsoft will figure that one out too…

]]>
http://www.terminally-incoherent.com/blog/2006/09/06/hard-links-and-junctions-in-windows/feed/ 7
Time Logging Script http://www.terminally-incoherent.com/blog/2006/08/11/time-logging-script/ http://www.terminally-incoherent.com/blog/2006/08/11/time-logging-script/#comments Fri, 11 Aug 2006 17:06:47 +0000 http://www.terminally-incoherent.com/blog/2006/08/11/time-logging-script/ Continue reading ]]> I think I found this tip on Lifehacker at some point and decided to implement it. The idea is simple – you set up a script that asks you what you are currently doing every hour or so, and collects the answers in a text file. Then you can parse the file later to see how much time you spend on a given task, or how many things you have accomplished that day.

If the boss asks you what you did all day yesterday, you can immediately produce a detailed hour by hour activity log. This also comes in handy when the company wants you to log your time in a very detailed way.

I vaguely remember that someone posted a VB script to do that on Lifehacker. I didn’t feel like digging out the post so I just decided to implement it myself. You can go as simple or as complicated as you want with this. I opted for simplicity. I hacked up this nice little shell script:

#!/bin/bash
echo "What are you doing right now?"
read -e what
echo "$(date) - $what" >> timelog.txt

I really don’t think this can get any simpler than this. For a while I was toying with the idea of using XDialog. But then again I just wanted something quick, easy and robust. So I opted for pure bash.

Now I just needed to create a cron job. Unfortunately, by default cron will run shell scripts in the background. I actually wanted my script to pop up on the screen, get in my face and prompt me for input. So I used kstart to pop up a konsole window on all the desktops:

0,30 * * * * /usr/bin/kstart --windowclass "Konsole" --alldesktops --activate --ontop /usr/bin/konsole -e /home/lmaciak/track

I set my script to annoy me every half an hour. It gives me a better idea of how I am spending my time during the day. But if that’s too much for you, just delete “,30” from the line and it will bother you once every hour.

One thing you have to remember is that the cron daemon does not really know or care about the X environment, so you need to explicitly state which display should be used for the job. Add this somewhere in your cron file:

DISPLAY=:0

I added it above my cron jobs, but I don’t see why you couldn’t place it below them.

If you look in timelog.txt you will see nice grep-able output like this:

Thu Aug 10 15:00:15 EDT 2006 – responding to Bob’s email
Thu Aug 10 15:30:10 EDT 2006 – php class
Thu Aug 10 16:00:27 EDT 2006 – looking into setting up another demo
Thu Aug 10 16:30:15 EDT 2006 – php class coding
Thu Aug 10 17:00:17 EDT 2006 – coding eval.class.php
Thu Aug 10 17:30:19 EDT 2006 – replying to an email from ACE project (timesheet)

Most of these are very brief statements, but they are enough. For example, I can always go back and see what I wrote to Bob on August 10 around 3pm.
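The digging itself is a one-liner. For example, to pull every August 10 entry that mentions email (sample data inlined here so the snippet stands alone):

```shell
cd "$(mktemp -d)"
# a few lines in the same format the track script appends
cat > timelog.txt <<'EOF'
Thu Aug 10 15:00:15 EDT 2006 - responding to Bob's email
Thu Aug 10 15:30:10 EDT 2006 - php class
Fri Aug 11 09:00:12 EDT 2006 - email triage
EOF

# all Aug 10 entries mentioning email, case-insensitively
grep 'Aug 10' timelog.txt | grep -i 'email'
```

Chain in wc -l instead of reading the output and you get a rough count of half-hour slots spent on a task.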

As an added benefit, the nag-window usually jolts me back to work. If I was idling or wasting time, it forces me to concentrate on what I was supposed to be doing and reminds me to get back to work. :mrgreen:

[tags]time, time tracking, time logging, bash, cron[/tags]

]]>
http://www.terminally-incoherent.com/blog/2006/08/11/time-logging-script/feed/ 4
How do you lock down XP Home? http://www.terminally-incoherent.com/blog/2006/08/01/how-do-you-lock-down-xp-home/ http://www.terminally-incoherent.com/blog/2006/08/01/how-do-you-lock-down-xp-home/#respond Tue, 01 Aug 2006 18:11:44 +0000 http://www.terminally-incoherent.com/blog/2006/08/01/how-do-you-lock-down-xp-home/ Continue reading ]]> In light of the privilege escalation hax I started to wonder what exactly you need to do to lock down an XP Home machine. In XP Pro you can use group policies to limit what a user can do on the local machine. Unfortunately, the Home edition is missing gpedit.msc so we can only rely on registry hacks.

In the examples below I use HCU to denote HKEY_CURRENT_USER.

First order of business is to lock the user out of the command prompt so that he can’t issue the at command. This is controlled by the DisableCMD dword. To disable the command prompt:

KEY: HCU\Software\Policies\Microsoft\Windows\System\
DWORD: DisableCMD = 1 (use 0 to enable it back)

Next is the Task Manager. We don’t want the user being able to kill the explorer process:

KEY: HCU\Software\Microsoft\Windows\
      CurrentVersion\Policies\System
DWORD: DisableTaskMgr = 1 (use 0 to enable it back)

If you feel especially nasty (or security conscious) you can also disable access to regedit:

KEY: HCU\Software\Microsoft\Windows\
      CurrentVersion\Policies\System
DWORD: DisableRegistryTools = 1

This of course will make it a little difficult to change any keys for this user in the future, so this is probably not the best idea. Chances are that the would-be h4x0r will get discouraged after seeing that neither Task Manager nor CMD are working.

This method is not perfect, but it is a step in the right direction.
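For convenience, the three tweaks above can be rolled into a single .reg file, in the same format as the PNGOUT hack (HCU expanded to its full name; double-check before deploying, since DisableRegistryTools will also lock you out of undoing this via regedit):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Policies\Microsoft\Windows\System]
"DisableCMD"=dword:00000001

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\System]
"DisableTaskMgr"=dword:00000001
"DisableRegistryTools"=dword:00000001
```

Save it as lockdown.reg and double click it while logged in as the user you want to restrict, since these are all HKEY_CURRENT_USER keys.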

reg-hax © j79zlr

[tags]xp, xp home, windows xp, windows, microsoft, registry, hax, group policies, security, windows security, lock down xp home[/tags]

]]>
http://www.terminally-incoherent.com/blog/2006/08/01/how-do-you-lock-down-xp-home/feed/ 0
Local Privileges Escalation in WinXP http://www.terminally-incoherent.com/blog/2006/07/30/local-privileges-escalation-in-winxp/ http://www.terminally-incoherent.com/blog/2006/07/30/local-privileges-escalation-in-winxp/#comments Mon, 31 Jul 2006 03:53:57 +0000 http://www.terminally-incoherent.com/blog/2006/07/30/local-privileges-escalation-in-winxp/ Continue reading ]]> Did you know that you can escalate your privileges and become the SYSTEM user on a WinXP box simply by using the at command? Try this at home:

at 11:45pm /interactive cmd.exe

You just scheduled a job that will pop up a new cmd window at exactly 11:45pm. Who is the parent of this window? Why, SYSTEM of course. But we are not done yet.

Have the new cmd window up? Good. Now kill explorer.exe using the Task Manager. Yes, just kill it! Keep the new cmd window open though. Use it to run explorer again by typing in explorer.exe. Done!

You are now logged in as SYSTEM. You can now go ahead and do all the nifty admin things that you always wanted to do but your IT department wouldn’t let you. ;) You might get in trouble when they find out though. So, don’t go crazy with your newfound power.

If you still don’t believe me, here is a video that shows you how it’s done.

[tags]privilege escalation, windows xp, hax, system user, administrative privileges[/tags]

]]>
http://www.terminally-incoherent.com/blog/2006/07/30/local-privileges-escalation-in-winxp/feed/ 1
Kubuntu WPC54G v1.2 + ndiswrapper – Final Solution http://www.terminally-incoherent.com/blog/2006/06/19/kubuntu-wpc54g-v12-ndiswrapper-final-solution/ http://www.terminally-incoherent.com/blog/2006/06/19/kubuntu-wpc54g-v12-ndiswrapper-final-solution/#respond Mon, 19 Jun 2006 17:30:57 +0000 http://www.terminally-incoherent.com/blog/2006/06/19/kubuntu-wpc54g-v12-ndiswrapper-final-solution/ Continue reading ]]> I finally solved my ndiswrapper issue. If you remember my previous rants, I could never get WEP to work with my Linksys WPC54G v1.2 card. It simply wouldn’t work for me. I think the problem was not with me but with my ndiswrapper module version. I was using 1.13rc1, while the current stable version is 1.17.

The stable release sucks, because they removed the whole debian folder, which makes building a deb package out of it much more difficult. Yes, I could simply install from source, but I don’t like doing that.

Most modern linux systems use packages, and there is a very good reason for that: unless you keep track of all the stuff you install, you will soon find yourself in dependency hell. Each time you type make install you introduce a new set of untracked files that may conflict with some package you will be installing six months from now. Unless you tell your package manager that you installed something from source, it is not going to know about it, and it will fail to prevent the conflict.

1.16rc2 had the debian rules included but gave me funky error messages, so I settled for 1.15rc2. If you had similar issues, here are step by step instructions:

1. Download the 1.15rc2 source or another version. Note that this method only works for versions 1.16 or lower; for 1.17 and above you need to do something fancier.

2. Make sure you have all the dependencies (you will need kernel headers, and some other stuff):

apt-get install linux-headers-$(uname -r)
apt-get install dh-make
apt-get install fakeroot
apt-get install gcc-3.4
apt-get install build-essential

3. Untar the source, and cd into the directory

tar xvfz ndiswrapper-[current version].tar.gz
cd ndiswrapper-[current version]

4. Build deb packages using fakeroot:

fakeroot debian/rules binary-modules
fakeroot debian/rules binary-utils

5. The deb files will be built in the parent directory. So go one up, and install the packages using dpkg:

cd ..
dpkg -i ndiswrapper-modules-[version]-1_i386.deb
dpkg -i ndiswrapper-utils_[version]-1_i386.deb

After that you can do the normal ndiswrapper magic. Since I had a previous version already installed, this was all I needed to do. WEP is working like a dream :)

The step by step howto instructions were shamelessly stolen from the super helpful Ubuntu forums.

Update Mon, June 19 2006, 08:39 PM

Fixed the last code segment.

]]>
http://www.terminally-incoherent.com/blog/2006/06/19/kubuntu-wpc54g-v12-ndiswrapper-final-solution/feed/ 0