Archive for May, 2007

C# IPC: Access Denied in Windows Vista

Thursday, May 31st, 2007

Microsoft introduced a new form of Remoting in .NET 2.0 under System.Runtime.Remoting.Channels.Ipc, which uses named pipes for remoting instead of HTTP or TCP connections. Named pipes are supposed to be much faster (which seems to be true), and they work extremely well on a local machine. They also work flawlessly by default in Windows XP. The remoting setup is practically the same as for the other channel types (refer to a remoting tutorial for more information on how remoting works).

However, Windows Vista changed the way named pipes work, and the change is not clearly documented: Vista creates a stricter ACL for named pipes by default. Here's the code that lets your service communicate with your code running in userspace:

// the "authorizedGroup" entry is what loosens Vista's stricter
// default named-pipe ACL so ordinary users can connect
IDictionary properties = new Hashtable();
properties.Add("authorizedGroup", "Users");
properties.Add("portName", "NCChannel");
properties.Add("rejectRemoteRequests", true);

// create the channel and register it with the remoting infrastructure
IpcServerChannel channel = new IpcServerChannel(properties, null);
ChannelServices.RegisterChannel(channel, false);

Pretty simple, eh? They should probably mention it a bit more explicitly in their docs…

Ultimate Rejection Letter

Wednesday, May 30th, 2007

My friend Jon sent this to me, and it definitely is the best rejection letter EVER.

How to (not) announce your pregnancy

Tuesday, May 29th, 2007

I was browsing WikiHow a few minutes ago trying to find out how to disable the Windows XP login screensaver, when I came across this ‘featured’ article: How to Announce Your Pregnancy

Some of the advice in the article makes sense, but a lot of it is just ridiculous. For example:

“Take a shower together. Write on your belly in washable marker, “Baby On Board” and allow your partner to get in the shower first.”

Yeah… I think the poor guy would fall and break his head.

“Wait for a holiday and literally “give” your partner the news. Buy a t-shirt with DADDY written on it, a baby-related keychain, a baby book, as well as a few baby items, and put them in a gift bag. Cover with tissue paper and then have him open it.”

Umm… yeah. No comment on that one.

Zombie Risk

Monday, May 28th, 2007

My roommate and some friends came up with the idea of “Zombie Risk” a couple of years ago, but I think it's still worth mentioning. He had it posted as a PDF for the longest time, but it's now on his blog. Funny thing is, I still haven't gotten around to playing it…

Check it out:

Awstats and logfiles

Monday, May 21st, 2007

I use 1and1 for my web hosting, and they provide raw dumps of the Apache logs for the website. This is great, because you can run whatever analysis you like on them… except for two things:

  • They don’t use a standard LogFormat directive
  • After about 6 weeks, the oldest log files get deleted

So, after much trial and error, I figured out a decent way to use Awstats to do my log analysis. I wrote a cron job that runs every Monday morning (I had to modify my crontab from the default), which downloads the latest logfile from my 1and1 account and then runs awstats on it. It sounds pretty trivial, but 1and1 names their logfiles in the format access.log.[week].tar.gz, which works until the year rolls over. So I added some logic to intelligently rename the files like access.log.[year].[week].tar.gz.
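The rename logic can be sketched in Python. The filename patterns below match the ones from the post, but the exact rule the cron script uses is an assumption, not a transcription of it:

```python
import re

def yearly_name(filename, year):
    """Rewrite access.log.[week].tar.gz as access.log.[year].[week].tar.gz.

    Week-only names collide as soon as the year rolls over (week 1 of
    2008 would overwrite week 1 of 2007); prefixing the year keeps
    every week's archive distinct.
    """
    m = re.match(r"access\.log\.(\d+)\.tar\.gz$", filename)
    if m is None:
        raise ValueError("unexpected logfile name: %s" % filename)
    week = int(m.group(1))
    return "access.log.%d.%d.tar.gz" % (year, week)
```

So a file downloaded as access.log.23.tar.gz during 2007 would be stored as access.log.2007.23.tar.gz, and the LogFile directive below can then glob all of them in chronological order.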

You can download the awstats 1and1 cron job here.

To use it, you need to be running Linux and have awstats installed. You will need to modify some of the parameters of the cron script, but it's decently commented. Additionally, your awstats configuration file needs to have the following two directives in it:

LogFormat="%host %other %logname %time1 %methodurl %code %bytesd %virtualname %refererquot %uaquot %otherquot"

LogFile="gunzip -c $(for i in `ls -rt /home/awstats/roadside/`; do echo /home/awstats/roadside/$i; done) |"

Just make sure you replace /home/awstats/roadside with the directory where you keep your logfiles. Refer to the script for more documentation.

Note: The biggest problem I have with it is that it only updates once a week. I tried running it every day, but then awstats tended to drop/ignore records that were out of order, and it seemed better to have complete stats than daily ones. If you have a good solution for this, let me know!

Javascript/CSS enhanced Resume

Sunday, May 20th, 2007

When I was looking for research projects to participate in a few years ago, I decided the best way to find them would be to email various professors and share my interests with them. I figured a resume would be useful to share as well, but I only had it in MS Word format. So I put it into HTML.

However, the biggest problem with my resume is that I tend to put a lot of stuff in my ‘master copy’ and then trim it according to my targeted audience. By putting everything on the web, printing it out results in around 3 pages as of this writing, and common resume advice always says to keep it under a page.

My solution: use javascript and CSS to create a resume that can serve multiple purposes! At the top of the resume is a listbox that switches between a ‘full’ and a ‘condensed’ version of my resume. The condensed version is designed to print on only one page.

Here's how it works: (more…)

Standalone GPS SQL logging software for Linux

Thursday, May 10th, 2007

I was not able to find any decent standalone GPS data logging software for Linux (for use in my car computer), so I decided to write my own, and it works pretty well for what I want to do. Of course, I'm not quite sure what the practical implications are, but I guess I'll figure that out later (and post it here, of course!). Anyways…

geoHist is a (relatively) simple program I wrote to log GPS data retrieved from GPSD into a MySQL database. It is designed to meet the following requirements:

  • Standalone
  • Run from system startup to shutdown
  • Small/fast, non-intrusive
  • Robust
  • Log data to an SQL database

geoHist polls GPSD every 10 seconds and sends that data to a preconfigured MySQL database. It attempts to detect whether you are standing still, and if you are, it will NOT log the data. It decides this using a simple drift factor. It will always log your position to the database at least once if there is a satellite fix.
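A minimal sketch of that kind of drift check, in Python. The 15-meter threshold and the distance approximation are my assumptions for illustration; geoHist's actual drift factor may be computed differently:

```python
import math

EARTH_RADIUS_M = 6371000.0

def moved_enough(last, current, drift_m=15.0):
    """Return True if the new fix is more than drift_m meters from the last.

    last and current are (latitude, longitude) tuples in decimal degrees.
    An equirectangular approximation is plenty accurate for the short
    distances covered in a 10-second polling interval.
    """
    lat1, lon1 = last
    lat2, lon2 = current
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * EARTH_RADIUS_M
    dy = math.radians(lat2 - lat1) * EARTH_RADIUS_M
    return math.hypot(dx, dy) > drift_m
```

In the logging loop, the first fix is always written, and each later fix is compared against the last *logged* position, so slow creep still gets recorded once it accumulates past the threshold.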

All this program does is log your data into MySQL. You will need to figure out what exactly you want to do with that data. At the moment, I'm not quite sure what I'm going to do with it either, but if I figure something out I will surely include it in the next release.

I haven’t put it through a ton of use (yet), but it seems to be doing its job quite well. Now all I need to do is figure out a good practical application, and write some scripts to export the data from the database nicely.


Website Synchronization and Management Issues

Wednesday, May 2nd, 2007

I maintain the Michigan Boys State website and other websites, and something that I keep running into is the problem of website management. Last year, I started developing Onnac, which helped me to some extent in managing the menus, banners, and templates that I needed to run that site.

My problem: I keep local copies of the websites on my desktop and run apache/php/mysql to do debugging and development with. This is a great arrangement, since it lets me test things out before I deploy them. However, there is the problem of synchronization. As long as I remember which copy happens to have the most recent content (which may be either one, especially if we need to post something on the production site right now), it works out fine. But a number of times last year during the Boys State program, when we were doing a lot of updates each day for a week, I got confused and overwrote data by accident.

I'm curious though: how does one manage a static website and keep it synced up correctly, without shell access? There are probably tools out there for that sort of thing, I suppose. My ideal solution would be subversion, but that's not available unless you have shell access, which I do *not* have on the websites I admin (including this one, because I'm too cheap).

Next time: some of the ideas I implemented in Onnac to address these problems, plus future ideas I haven't implemented yet.