Sorry Jekyll, I’m done with you.

I’ve been working with Jekyll on the US Drone Racing Association site. It seemed like a nice idea: check all your content into GitHub, then, when you’re ready to work on it, check it out, make your edits, run a local test site (that part is really nice), and when you’re finished, check it back in. One update on the master site, and you’re done. Woo.

Yeah, see, that’s where they getcha.

Jekyll is great for very fast setups of static sites. If you never want to really change the site (no theme swaps, no need to add blog posts quickly and efficiently), you’re probably good.

But I found the blogging process enormously painful.

  • Check the site out of GitHub
  • Go into the _posts directory, pick an old entry, copy it to a new filename. The new filename must be yyyy-mm-dd-uniquename.markdown. This date is important because it’s used as a sort order.
  • Edit the newly created file with whatever editor you like, but the YAML front matter must be correct (see the example after this list). Using YAML for structured data is already problematic, but this is supposed to be a Markdown document. Instead, it’s sort of a hybrid of YAML, Markdown, and HTML.
  • If you get the YAML front matter right, you get to write your post. Markdown is nice, but it has its limitations.
  • Save the file, make sure you go back to your site root (god knows how many times I’ve failed at this one), and run ‘jekyll serve’. Test your site locally. Swear and curse as it doesn’t work right. Repeat the previous steps until it does. (Credit where due: the live preview is really nice, and it updates automatically when a file change is noticed. I can’t fault that.)
  • git add -A
  • git commit
  • git push origin master
  • Log into your blog host
  • cd to your working directory
  • git pull origin master
  • cd sitename
  • jekyll build --destination=/var/www/yoursitename
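
For reference, the YAML front matter block at the top of each post looks something like this (layout/title/date/categories are the stock Jekyll fields; themes often add their own):

 ---
 layout: post
 title: "A Unique Name"
 date: 2014-06-01 10:00:00 -0500
 categories: blog
 ---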

Now, this really isn’t that horrific. Irritating, sure, but you can automate pieces of this and add some nice wrappers around it.
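
For instance, here’s a minimal sketch of such a wrapper (entirely hypothetical; SITE, DEST, and the blog host name are placeholders to adjust for your own setup):

 #!/bin/bash
 # publish.sh - roll the whole commit/push/pull/build dance into one step
 set -e
 SITE=~/work/sitename           # local checkout (placeholder)
 DEST=/var/www/yoursitename     # web root on the blog host (placeholder)
 cd "$SITE"
 git add -A
 git commit -m "${1:-New post}"
 git push origin master
 # Now do the pull-and-build half on the blog host:
 ssh bloghost "cd ~/work/sitename && git pull origin master && jekyll build --destination=$DEST"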

I wanted to theme my site. Here’s where things go sideways.  In short, you can theme a Jekyll site.

But you can only do it once.

Why? Because you don’t apply a theme to a site. You apply a site to a theme.

Sound crazy? Lemme splain. To theme a site, you download the theme and build it. (In Ruby land, this can be a nightmare experience. Ruby dependencies are horrific. Don’t believe me? Check out the conversation I had with a theme developer; we couldn’t get it running at all.) Even if you do get it built, you then copy your existing content into the new theme directory and commit the whole thing up to git. That’s your new site. Want to change themes? HAHHAHAHAH. You get to do this process all over again, extricating your content from your old themed site and copying it into the new theme’s directory structure.

Screw that.

Sorry, Jekyll. I’m done with you.

CONGO is going GPL.

(This announcement is also being posted on stonekeep.com)

Over the last 12 years I’ve been working hard to develop CONGO into the best convention registration system I can manage. Since 2002, CONGO has been used for many events of all sizes, registering and printing badges for tens of thousands of attendees. There have been many successes and a few bumps, but all in all it’s been a great adventure.

Several events now rely heavily on CONGO for year-to-year attendee tracking: online registration, up-to-date history, managing thousands of attendees, and the integration CONGO has with Zambia, the scheduling system.

Continue reading “CONGO is going GPL.”

Arduino Nano “Programmer Not Responding” on a Mac

Arduino Nano v3

For the Staff project, I’m going to be replacing the existing Arduino Uno R3 with a smaller, more easily embedded Arduino Nano.  The Nano is a heck of a lot smaller than the Uno (makes sense – it’s meant to be permanently installed, while the Uno is a prototyping platform).  I received my Nano a few weeks ago, but immediately ran into a frustrating problem… code would compile, begin to upload, and I’d get the error “stk500_recv(): programmer not responding”

The intarwebz are full of people reporting this problem; unfortunately, most aren’t finding answers.

I went through the usual debugging steps – changing out the USB cable I was using, checking to make sure the USB drivers were correct. I could still upload and run code on my Uno, but the Nano flat out refused to accept new code (and yes, I did check the very common mistake of not selecting the correct board in the IDE).

Finally, I came across a general discussion about bootloaders, where a comment mentioned that sometimes these boards do not reset properly.  After some more research, I found folks using various ‘reset button’ hacks to nudge the board into accepting code.  With a lot of trial and error, I have a procedure that seems to work pretty consistently.  There are occasional twitches, but with persistence it always loads.

Continue reading “Arduino Nano “Programmer Not Responding” on a Mac”

23andMe – A Scientific Look into Myself

European.  Whoddathunkit?
My genetic background

A couple months ago, a friend pointed me to the website 23andMe.com.   Their mission statement is pretty straightforward.  “23andMe’s mission is to be the world’s trusted source of personal genetic information.”

Here’s how it works.

After I signed up online and coughed up my $100, 23andMe sent me a small kit.  Inside the kit is a little plastic tube.  All you need to do is fill part of the tube with saliva, seal it up, and mail it back to them.  It’s all postage paid, so it’s just a matter of dropping the box in the mail.

About 4 weeks later, you’ll get a piece of email saying your results are ready to be viewed.  And then things get interesting.

Continue reading “23andMe – A Scientific Look into Myself”

Notifications on all Logins on a Linux Host

Putting this one out there because I spent some time surfing various Well Known Sites and couldn’t find a complete answer.

We had a need to log whenever users logged into a production host – just a notification sent to the admins saying someone was on one of the production boxes.  The other requirement was that it be low impact; we didn’t want a ton of monitoring packages installed, etc.

The result is a pair of scripts.

The first is ‘checklogin.sh’:

 #!/bin/bash
 # Mail the admins if anyone logged in (or ran sudo) within the last minute.
 # Sample auth.log line this matches:
 # Nov  6 13:35:25 inf-1 sudo: dshevett : TTY=pts/0 ; PWD=/etc/munin ; USER=root ; COMMAND=/etc/init.d/munin-node restart
 TMPFILE=checklogin-$$
 # Build a timestamp for 'one minute ago' in auth.log's format (e.g. "Nov  6 13:35")
 AGO=`date "+%b %e %R" -d "1 min ago"`
 grep "$AGO" /var/log/auth.log | grep 'session opened for user' | grep -v CRON > /tmp/$TMPFILE
 grep "$AGO" /var/log/auth.log | grep 'sudo:' | grep -v pam >> /tmp/$TMPFILE
 cat /tmp/$TMPFILE | /tools/sysconf/scripts/mail_if_not_empty ops-notice-internal@REDACTED.com "[inf-1:checklogin.sh]"
 rm /tmp/$TMPFILE

This simply looks for some patterns in the auth.log file. The only real trick here is building a date-formatted string for ‘one minute ago’. If this script is run once a minute via a cron job, it’ll send mail within a minute of someone logging into the host.
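
For example, a crontab entry along these lines (assuming checklogin.sh lives alongside mail_if_not_empty; adjust the path to wherever you installed it) runs the check every minute:

 * * * * * /tools/sysconf/scripts/checklogin.sh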

The other script is a simple utility tool I use for most of my cron jobs called ‘mail_if_not_empty’:

 #!/bin/bash
 # mail_if_not_empty <recipient> <subject>
 # Reads stdin; if anything came through, mail it along.
 TMPFILE=/tmp/joboutput.$$
 TARGET=$1
 SUBJECT=$2
 cat > $TMPFILE
 # -s tests that the file exists and is non-empty
 if [ -s $TMPFILE ]
 then
   mail -s "$SUBJECT" "$TARGET" < $TMPFILE
 fi
 rm $TMPFILE

Super-duper simple: it just sends mail if there’s any output on stdin.  This makes sure that mail is only generated when something interesting happens.

Time Lapse Video at an SF Convention using Linux and a webcam

For quite a while I’ve been interested in using commodity hardware (a webcam, a small Linux machine) to take time lapse videos. It didn’t seem like that complex a problem, but there were a lot of logistical and mildly technical obstacles to overcome. After a couple of tests and short videos, it was time to set things up to record a four-day-long video at [Arisia](http://arisia.org/), in particular a shot of the registration area.
Here’s how I did it.
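The shape of it is simple: grab a frame on a schedule, then stitch the frames into a video afterwards. Roughly something like this (fswebcam and ffmpeg here are stand-ins for illustration; the full post covers the actual setup):

 # Capture one frame a minute from cron (% must be escaped in crontabs):
 * * * * * fswebcam -r 1280x720 /data/timelapse/frame-$(date +\%s).jpg
 # Afterwards, assemble the frames into a video:
 ffmpeg -framerate 30 -pattern_type glob -i '/data/timelapse/*.jpg' -c:v libx264 -pix_fmt yuv420p timelapse.mp4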

Continue reading “Time Lapse Video at an SF Convention using Linux and a webcam”

Hosting a Terraria Server – The yakshaving commences

Four kids, four laptops, one Minecraft world
So the latest craze around here is Terraria. Think of it as Minecraft in 2D. Naturally, since the kids here are all Minecraft addicts, Terraria was a natural next step. Minecraft, the gateway drug for MMORPGs.
Of course, “DAD! Can you host a Terraria server for us?” was inevitable. “Sure”, the foolish Dad says, “Where’s the Linux client?”
“Yeah, so, there’s a problem with the Linux server version of Terraria. There isn’t one.”
Awesome.
So began my descent into Windows hosting hell. I share my experiences here with you in the hope of lessening your pain.
A server
Windows XP laptop in the server rack
In order to make this work, you naturally need a server. I had a spare Windows XP Dell 620 laptop lying around that looked like it was ready for abuse, so that was put up as my offering to the network gods. Getting said laptop into the server closet proved to be a bit of a challenge, since I faced some awesome obstacles:
* The NIC on the laptop (or its drivers) is unstable. Occasionally it drops the network connection, requiring a physical cable pull and reconnect. Wonderful.
* Terraria is a DirectX application. Ergo, it cannot be started via RDP (which reduces the video driver capability). I must start Terraria on the console of the laptop in the server closet before connecting to it.
* The screen on the laptop is twitchy – occasionally it blanks out, and only a hard reset will restore it.
Installation
Setting up and running the Terraria server was pretty straightforward. Install Steam, download/install Terraria, start up the game, click ‘start server’. Easy, huh? Note that because it uses Steam, you need to use a unique login. My experience has been that the Steam credentials are only checked during startup – once the server is running, you can log out of steam on the server and run up a client machine on the same login.
Networking
Anyone who is familiar with firewalled hosted services should be able to set up their network appropriately. In our network environment, we host servers behind a NAT-enabled firewall and set up port-forwards to internal services. This keeps the server relatively isolated from the internet at large, but still lets it be reached from the outside world (there’s a port-forward sketch after the guidelines below).
Some basic guidelines when setting up your server:
* Do not host your Windows box on the internet without a firewall. Really, just don’t do it. Windows boxes are the most often attacked, have the most vulnerabilities, and are the most commonly compromised.
* Running a Windows host with a ‘self hosted’ firewall is marginally better, but is still easy to run up in an ‘unsafe’ configuration without you even knowing it’s happened.
* Terraria uses port 31337 for the server. Note that this port is ALSO used by the (mostly old-school now) ‘Back Orifice’ application – a tool generally used to hack servers. Many firewall tools and applications may flag Terraria servers as Back Orifice servers and disallow them.
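
As an illustration, on a Linux NAT firewall the port-forward might look something like this (eth0 and the internal address are placeholders for your own network):

# Forward inbound TCP 31337 from the internet to the Terraria box inside
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 31337 -j DNAT --to-destination 192.168.1.50:31337
iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 31337 -j ACCEPT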
Testing the server’s availability is pretty easy. Log into your Linux box out on the net (you do have one, don’t you?) and test connectivity to the server:

dbs@calypso:~$ telnet your.firewall.ip 31337
Trying 1.2.3.4...
Connected to your.firewall.ip.
Escape character is '^]'.

Hooray! Your server is ready to access! Run up Terraria on your computer, and connect to the IP address of your server (note that Terraria doesn’t support hostnames [idiotic in my opinion] – you must connect by IP). You’re in the game!
Conclusion
In so many ways, Terraria is NOT ready for prime time. The lack of a decent server mode, the requirement for DirectX for even basic operation (even in server mode) – these make hosting a server more painful than necessary. It can be done, but I don’t know how long this house of cards will last.
Oh, the game itself? Don’t know, haven’t played it, there’s no Mac version.

CONGO Update – The road to 2.1.

I’ve set a goal for myself: have CONGO v2.1 released by June 1st. It’s an ambitious goal to be sure, and recent career shifts have either made it more likely (more time to work on it) or less likely (new job) that I’ll have the time to dedicate to coding.
But goshdarn it, I’m going to try.
While coding away last night at a particularly recalcitrant chunk of the new ‘Links’ system (I’ve been… instructed… by my pesky users that ‘Friends’ is really too ‘social buzzy funtime networking’ for an event management system), I was curious how big CONGO had gotten. So a couple of greps got me some quick stats:

Total lines of Java : 13,412
Total lines of XML : 5,492
Total lines of JSP : 5,543
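
(For the curious, each count came from something along these lines; a quick and dirty line count, not a proper SLOC tool.)

find . -name '*.java' -print0 | xargs -0 cat | wc -l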

This makes CONGO the largest application I’ve ever written completely on my own. Oh sure, I’ve worked on larger systems, but that was as part of a team with other coders. This one (with some small help from 1-2 folks, accounting for around 2% of the code) is all mine.
I’m always looking for alpha and beta testers. Interested? Lemme know. Continuous build / QA testing is working, so there are always new builds and bugs that need to be tracked.

My bosses are audiophiles.

It’s interesting working for a music distribution company – our upper management tends toward the audiophile / retro-geek crowd.
Witness our CEO’s office:
CEO Rig
And the CTO’s office:
CTO Rig
I do wonder at the massive old-skool speaker stacks and tube amps… in a 15×15 standard drywall office, but it does look sorta neat.

My Chumbys and Me

It’s no secret I’m a big fan of Woot and the excitement that can accompany a Woot-Off, that festival of consumerism and feeding frenzy for those susceptible to impulse buys.
A few frenzies ago, one of the offerings was a Chumby One at the attractive price of $49.
Chumby one!
I bought two.
I’d been trying to figure out various ways of gaining ‘shelf-top’ access to online music resources. Back in the day, I’d picked up a Roku Soundbridge or two, but I’ve never been completely satisfied with the results. Even modern versions of these devices are, in my opinion, too expensive and too limited. They play music, that’s it. Even though Roku has moved on, other manufacturers are offering similar devices for $250.
Screw that.
The Chumby One is a small 450 MHz Linux computer with WiFi, 64 MB of RAM, and a 3.5″ color screen. It has everything I was looking for in a ‘bedside’ or ‘shelfside’ device. It can play music, it has a touchscreen that can show a wide variety of content, and it’s controllable from a centralized server. It has line-level audio out via a headphone connector, as well as internal speakers. The design allows for easy ‘bedside’ use, along with unattended modes.
The final button for me was the inclusion of a powered USB port on the back. This means I now had an easy charging station nearby for my iPhone, without taking up another power outlet and the accompanying cable mess.
I love the variety of apps, both the whimsical (David Letterman’s Top 10) and the useful (A constantly updated weather / traffic / time / date page that shows ‘local status’ in real time) – all while happily playing Radioparadise for me.
And. Heck. They’re cute.

The Blog is Resurrected… for now.

Well that was no fun.
For a while, I was in a funk because the Planet-Geek.com site was not posting ANY of my articles. And when I logged into the maintenance pages, I couldn’t see any of my articles for the last year.
Now, the site has something like 1600 articles on it. I was pretty cranky at the possibility of losing all my content. But the database itself seemed okay, and I could see entries in it; it was just that new content wasn’t showing up.
Tonight I decided to sit down and figure out WTF was wrong with it. It took about half an hour to determine the root of the problem…
I was logging into the wrong site.
We migrated the blogs off msb to msb2 a year or so ago, but I a) never removed the old bookmark from my shortcuts, and b) never updated the maintenance page to point at the correct toolset.
So I was editing the old site.
Boy do I feel like a dork.

Performance Tuning with Trac

I’ve been using Trac to manage all the bugs and enhancements in CONGO for the last 3 years or so. For the most part it’s been pretty useful, though I haven’t been thrilled with some performance problems I was having.
Most notably, a simple page load would take 4-5 seconds to come back.
I thought the initial problem was due to the older (v0.11) version I was running. But after a painful SVN crash and rebuild, and taking that opportunity to upgrade to 0.12 and move to a faster host, the performance problems were still there.
When reading Trac performance blogs, the first thing everyone says is “For gods sake, make sure you’re running mod_python!!!” Well, I was. So that wasn’t it.
I found the answer in an older blog post that mentioned the Chrome elements in Trac were rendered on the fly via Python. This didn’t make sense, as they were primarily static elements.
So why not cache them?
A quick tweak to the vhost configuration:

<LocationMatch /[^/]+/chrome>
  Order allow,deny
  Allow from all
  # ExpiresDefault is a no-op unless mod_expires is actually switched on
  ExpiresActive On
  ExpiresDefault "now plus 12 hours"
</LocationMatch>
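
This, by the way, necessitated adding mod_expires to Apache. On a Debian-style install that’s one command (assumption: your distro uses a2enmod-style module management):

a2enmod expires && /etc/init.d/apache2 restart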

With that in place and a restart, my load times went from 6.6 seconds:

172.16.1.1 - - [13/Feb/2011:22:58:13 -0500] "GET
/chrome/site/stonekeep-ball-logo.gif HTTP/1.1" 200 6660
"http://trac.stonekeep.com/" "Opera/9.80 (Windows NT 5.1; U; en) Presto/2.7.62
Version/11.01"

down to zilch due to caching:

172.16.1.1 - - [14/Feb/2011:08:15:51 -0500] "GET
/chrome/site/stonekeep-ball-logo.gif HTTP/1.1" 304 -
"http://trac.stonekeep.com/wiki/WikiStart" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X
10_6_6; en-us) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4"

Win!!!

How to make sudo use your login name

This is being tossed out there as a handy reference to sysadmins around the world.
Sudo is a magnificent tool for Unix / Linux based systems that allows a single command to be executed as the root / privileged user. The advantage is that the command is logged to syslog, and access to sudo-managed tools can be tightly controlled via /etc/sudoers.
One problem that comes up a lot is that logged activities on a host will show up as ‘root’ when sudo is used to invoke them, when what you really want is to know who initiated the command.
The sudoers file can include an option that tells sudo not to reset the user’s login name when escalating privileges. The option is:

Defaults        !set_logname 

Putting this option in sudoers means RCS checkins and other tasks will log as the user who invoked sudo, not root.
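
As always when touching sudoers, make the edit through visudo, which syntax-checks the file before installing it (a broken sudoers can lock you out of sudo entirely):

sudo visudo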

Fun with Server Uptimes

At ${dayjob}, we were doing a system audit when an alarm came up on a pair of servers we rarely had any interaction with. One of our new monitoring tools showed these servers were not answering correctly and should be investigated.
Investigate I did, and found… four machines in a full-sized rack that were doing absolutely nothing.
It turns out these were used for 3 customers we no longer supported. The applications were still there, the appservers were running, just… no one had connected to them in almost a year and a half.
What’s more entertaining is the uptime on these boxes:

09:23:08 up 992 days, 19:15,  1 user,  load average: 0.00, 0.00, 0.00

The current plan is to let them roll over to 1000 days, throw a little party for them, and shut ’em down.
(For the true geeks, these are dual-Opteron Rackable servers with 8 GB of RAM running CentOS 4.4.)
Update – Just found the database servers these machines had been using. Also idle, but the uptime is even more impressive:

 09:43:08 up 1304 days, 19:49,  1 user,  load average: 0.22, 0.09, 0.02

Vox is Dead. Long live Vox!

As recently as 5 years ago, Six Apart was the undisputed leader in blogging platforms. Movable Type was the largest and best-known blogging platform, and corporate entities were making moves to acquire competing services.
During this time, Six Apart launched Vox. The idea was to blend blogging with social networking: shared questions and trends, bringing the whole blogging community together into one big happy family.
It never worked.
Bloggers are individuals. They want their own sandbox, their own domains, their own content. Not only from an individualistic stance, but also when it comes to money: it’s hard to make a buck when your blog is buried in with a thousand other bloggers.
Vox lurched along for a few years but never got any traction, perhaps due to its muddled target audience. Were they targeting bloggers? Facebook folks? The then-dominant MySpace crowd? It wasn’t clear.
I had a Vox account; I posted perhaps 3-4 things on it and lost interest. There was no draw or anchor. I never went back.