Mmm, GL Rendering

Gosh, what a difference. I’ve been wanting to do more game playing on my Linux laptop, but I’ve been stymied trying to get its video system up to par. Today I got past the last roadblocks.

For the truly geeky, I have DRM working on the Radeon Mobility 9000 chipset in my IBM T40. The only tidbits I can offer folks who are trying to do this are:

  1. Have the Radeon kernel module installed
  2. Make sure you’re NOT running at 1600×1200. The card doesn’t have enough memory to run GL at that resolution. Running at the native 1400×1050 is fine.
  3. Make sure you have a Mode 0666 line in your XF86Config file for the DRI module:
    Section "DRI"
      Mode  0666
    EndSection
    
  4. Watch your /var/log/XFree86.0.log file for any warnings.

Once that was all set, and ‘glxinfo | grep -i direct’ showed me ‘direct rendering: Yes’, I ran glxgears. WHOAH! 865 or so frames per second (before I got this running, I was getting about 180; a slight performance increase, to say the least).

One of the motivations for this was to take a look at “Jake2” – a full Java port of the Quake2 engine. As advertised, I just needed the binary download plus the Quake2 demo data files (see the web page for how to get them), and voila! It just worked!

Go go gadget Java!

Anyway, now that it’s all working, I’m looking for more GL games to play. I’ve played Tux Racer, BZFlag, CannonSmash, and Neverball (MAN that’s a weird game). I just downloaded FlightGear; I’ll let you know how it goes.

Any other recommendations?

Turnkey Home PBX System

This is so cool. We have a PBX here at Homeport, but it’s a very old hybrid digital/analog system that works ‘okay’, but the handsets are rapidly falling apart, and it really just needs to be trashed.
I’ve been watching the Asterisk project for quite a while (it’s an open-source, Linux-based PBX system), but I haven’t jumped into trying it because the ramp-up and hardware investment was more than I was ready to deal with.
Now someone’s put together Asterisk@Home, a pre-packaged bootable CD that will install Asterisk for you, and do all the basic configuration, including setting up the web interface and everything.
Tempting… tempting…
(This was pointed out in an article on Slashdot.)

Moving into the next age of geekery.

For quite a while I’ve been wanting to move into some of the more widely used methods for writing and deploying large-scale apps, particularly in Java. Sun developed a system called J2EE a while back that provides an environment where Java apps can scale to incredibly large installations. Up until now, I haven’t had the opportunity to really explore it.

I recently started a 4-month project with a company in NJ to explore the feasibility of porting their applications from a Visual FoxPro base to J2EE. This is really a fantastic opportunity. I’m not only helping a great project move into an exciting new environment, but I’m also getting the chance to learn something I’ve been interested in for ages.

One drawback, though, is that the J2EE environment is huge and fairly complex, and therefore there’s no ‘one way’ to do things. J2EE provides an object-based application server that’s designed to let you design and implement virtually any system, and do it big. The steps I’m taking now are determining what aspects of this system are appropriate for us to use, and how to use them.
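
To give a flavor of what I mean: even the simplest EJB session bean, one of J2EE’s core building blocks, involves three separate pieces. Here’s a rough sketch of my own (the names are invented, and each class would live in its own file), not anything from the actual project:

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// 1. The remote interface: what clients actually call.
public interface Hello extends EJBObject {
    String greet(String name) throws RemoteException;
}

// 2. The home interface: how clients obtain a bean instance.
public interface HelloHome extends EJBHome {
    Hello create() throws CreateException, RemoteException;
}

// 3. The bean class itself, managed by the application server.
public class HelloBean implements SessionBean {
    public String greet(String name) {
        return "Hello, " + name;
    }

    // Required lifecycle callbacks (empty for a simple stateless bean).
    public void ejbCreate() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext ctx) {}
}

And that’s before the XML deployment descriptor that wires it all together. Figuring out when that machinery actually pays for itself is exactly the kind of question I’m trying to answer.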

This process is not helped by the fact that I don’t KNOW J2EE at all. I’ve never used these technologies for my own application development, and I’ve only brushed up against some of them at a previous job. My work on CONGO used a hyper-simplified version of this concept, so there’s a heck of a learning curve here.

I am making progress though. Part of this project really requires the environment to be workable for someone who has traditionally been using Microsoft Visual Studio applications. That means a clean IDE, object editor and browser, etc. Tonight I successfully configured Eclipse with a plugin to manage the JBoss application server I’m running on my laptop. Following some tutorials, I built and deployed a servlet to the server and, via an Apache module used to connect Java servlet containers to web servers, successfully ran the servlet and got those wonderful words… “Hello World”.
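
For the record, the servlet itself is almost embarrassingly small. Here’s a minimal sketch of the sort of thing the tutorials have you write (the class name is my own invention):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A minimal servlet: the container calls doGet() for each HTTP GET request.
public class HelloServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>Hello World</body></html>");
    }
}

Add a web.xml entry mapping a URL to the class, zip the whole thing into a .war file, and the app server does the rest.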

Seems like a lot of work to get 2 words on the screen, eh? But that’s the joy of learning a whole new environment. It doesn’t look like much, but it represents a big step down the road to understanding how I (and my client in NJ) may use this system to write and deploy applications. Personally, I’m okay as long as I don’t get stuck, and continue moving forward.

This coming week I hope to have enough in place to get a full JSP->Servlet->database process working, so that I’m familiar enough with the environment to start designing how things will REALLY work inside the app server.
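
The servlet half of that chain will probably look something like this rough sketch: plain JDBC with made-up table, driver, and JSP names, and none of the connection pooling or DataSource lookup a real deployment would use:

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of the servlet step in a JSP->Servlet->database round trip:
// query a table, stash the results in the request, forward to a JSP.
public class CustomerListServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        List names = new ArrayList();
        try {
            // Hypothetical driver and connection details.
            Class.forName("org.postgresql.Driver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "user", "password");
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT name FROM customers");
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
            rs.close();
            stmt.close();
            conn.close();
        } catch (Exception e) {
            throw new ServletException(e);
        }
        // Hand the data to a (hypothetical) list.jsp for rendering.
        request.setAttribute("names", names);
        getServletContext().getRequestDispatcher("/list.jsp")
                .forward(request, response);
    }
}

Once that round trip works end to end, swapping in the fancier J2EE pieces (a pooled DataSource here, an entity bean there) should be much easier to reason about.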

LTSP Case Study – Orwell High School

I’m sure everyone knows that I’m a big booster of LTSP – the Linux Terminal Server Project. One of the areas where they’ve had a lot of success is in schools, where budget issues severely limit the capacity to bring decent computing facilities to students.

I found a great case study of a deployment at Orwell High School in England, where they needed approximately 120 workstations to serve 4 labs for their 1,000 students. They needed distributed print services, shared server resources, and high-end office software within a very restrictive budget.

The natural first place to look was Microsoft, but high hardware requirements for client machines, prohibitive software licenses, and very complex system management procedures made them look elsewhere.

LTSP was the solution they settled on. They are currently driving all 120 workstations from 5 central servers, distributing application load across 4 IBM Blade servers sharing a central RAID drive array. There was an inevitable need to access Microsoft-only packages for legacy school applications, so a Windows Terminal Server was installed, and students and faculty can access it via a Linux RDP client from any of the workstations.

John Osborne said:
“I can’t believe how easy it has been to move to Linux. The systems were installed and working within a week and it has been a revelation how simple and painless the process has been. I have saved thousands of pounds per year and got a brand-new ICT infrastructure at the same time”.
He added:
“Without switching to Linux, I would have been forced to cut back on our ICT hardware and software provision. There simply wasn’t the budget to upgrade to the latest versions of the software nor to keep replacing suites of PCs on a three or four year cycle. Now I have no licensing costs to worry about for the Open Source parts of the solution. We shall be moving to a complete Open Source basis as quickly as is practical and hope to start working with other schools interested in this type of development to share ideas and best practise”.

The entire case study is available. Highly recommended reading and sharing for any business or school considering deploying workstations in their environment.

Make your Cargo Ship More Efficient: Put a sail on it!

I could spend all day showing links to GizMag, an emerging technology site that just has rapid-fire Cool Stuff, but this particular article is worthy of note.

Some bright lights figured out that if you put a computer-controlled sail on a big ship, and combine it with a tracking/routing system that sets courses where there are prevailing winds, you can cut the ship’s fuel usage by 50%. Instead of just plowing along the ocean on diesel alone, no matter what the conditions, you can be a little smarter and let the sail help pull the ship along.

“As the sail is spatially separated from the body of the ship the reduction of the ship’s effective area by the system is economically insignificant. In its packed state the towing kite is easy to stow and takes up very little space. The existing crew is sufficient for the operation of the ship and the sail. Thus no additional staffing expenses are necessary.”

It seems like such a simple idea that I’m boggled no one has thought of it before. They do mention that there were some tricks to overcome, like the fact that a tethered sail can cause the ship to heel over, but an active control system can limit that.

Slashdot is dead. Long live BoingBoing!

I’ve long been a fan of Slashdot, with its hordes of geeks just waiting to descend on unsuspecting websites and subject them to the Slashdot Effect.
The problem is that Slashdot set the bar for what a news / geek site should be. The style hasn’t changed in 5 years, and 99% of the traffic on the site now is in the overly chatty forums. I rarely look into the forums – they tend to be swamped by trolls and endless reiteration of the “Microsoft sucks! Everything GPL!” argument.
Lately, though, the one benefit of Slashdot, timely and interesting news, has been usurped. For a while now, Slashdot’s content has been deteriorating. Where it used to be chock full of fun articles with a couple of high-point details, it has degenerated into a low-volume, badly targeted site.
Blogging has moved into the space where Slashdot used to reign supreme. In particular, BoingBoing, to me, has had better-edited, more detailed, and more relevant postings lately. There have been a number of articles that BoingBoing posted 1-2 days -before- Slashdot did, making Slashdot appear to be just another follower, perpetuating links and articles that have already been published elsewhere.
I now have BoingBoing in my Sage RSS feeds, and I rarely look at Slashdot anymore.

IBM Thinkpad T40 Debian Install – A Breeze!

I had been putting off installing Debian Linux on my ‘new’ IBM T40 for quite a while, mostly because I was nervous about all the hassles involved in repartitioning, boot loaders, etc. Since the convention I’d been working on is now over, and I have some time until my next event, I decided to finally take the plunge. Really, my aging T23 was starting to knuckle under with everything I was running on it; it was time to step up.


Nice when things just plain work

I’ve been lamenting the loss of my camera for quite a while. Lisa took pity on me and long-term-loaned me her Sony Cybershot DSC-P30. It’s a simple 1.3-megapixel camera with an optical zoom, a 128-meg Memory Stick, and 1280×960 resolution. Fine enough for most of what I want to do, and it will hold me over until I can really get my true dream camera.

Anyway, one of the things I was worried about was that the Sony uses a “Memory Stick” – a proprietary format that only Sony manufactures. They’re not any different from any other small storage medium, but making anything that reads or writes them requires some sort of unholy legal contract with Sony, so the number of readers available for these devices is somewhat limited.

But lo, on the side of the camera is a normal mini-USB port. “I have USB”, sez I. So I plug a spare cable into the camera, jack it into my little mini-hub, and watch to see what happens.

I had tried this once before with an older camera, and wasn’t pleased with the results. This time, however, things Just Plain Worked. Linux happily recognized the camera as a ‘mass storage device’, and brought up the active device:

USB Mass Storage device found at 4
usb 1-1: USB disconnect, address 4
usb 1-1: new full speed USB device using address 5
scsi3 : SCSI emulation for USB Mass Storage devices
Vendor: Sony      Model: Sony DSC          Rev: 3.28
Type:   Direct-Access                      ANSI SCSI revision: 02
SCSI device sda: 253696 512-byte hdwr sectors (130 MB)
sda: assuming Write Enabled
sda: assuming drive cache: write through
/dev/scsi/host3/bus0/target0/lun0: p1
Attached scsi removable disk sda at scsi3, channel 0, id 0, lun 0
WARNING: USB Mass Storage data integrity not assured

This is the message that normally comes up when a USB mass storage device is added to the system (the USB stack uses the SCSI driver for block device access).

A quick filesystem mount, and I was able to read and write files to the memory stick (which, btw, was still inside the camera) just like it was another drive on the system. This is identical to the way my Sandisk pen drive is used, so this was all familiar territory.

I copied off a few sets of pictures, and finally, after almost a year away from it, I’m populating my picture archive again. Of course, one of the first pictures had to be a pic of the snowstorm we had last weekend. This particular one is after our second snowfall yesterday, which dropped another 5″ of snow or so. The unfortunate vehicle here is my mother’s Subaru station wagon, which lives here over the winter while she’s away in Florida. Hi mom!

Arisia post-mortem, with pictures!

With special thanks to Lisa for the loan of her Sony Cybershot, I have some pictures from running registration at Arisia.

Kiosks!
We learned a lot this weekend. It was the first time we deployed the CONGO kiosk terminals for at-con registration of attendees. We needed to make some functional changes in the code at the event, but for the most part, it worked pretty well. The kiosks allowed people who were not pre-registered to enter their contact / registration information themselves and get a printed receipt. This meant the operators didn’t need to re-key all the data into CONGO and slow down processing. For a first time out, I’m really pleased with the results. There were no disastrous failures (in fact, I can’t think of a failure beyond ‘we’re out of paper’), and folks seemed to like the kiosks, modulo the normal kvetching that’s pretty much unavoidable.

Gateway Operator terminals
This was also the first time we used the Gateway terminals for cashier / operator use. This was a HUGE win for the operators, as the terminals are MUCH faster than the iOpeners for general data access and work. Not to mention the fact that when things slowed down, the operators could play games on them 🙂

Badges!
We ran all badges ‘on the fly’, meaning that even pre-registered attendees had their badges printed as they showed up. This allowed us to make minor changes to information before we wasted a badge (such as the spelling of a badge name).
We had no delays and no problems with the printers. One other thing we did was use blank white badge stock, so we were printing the -entire- badge image on the fly. It was a black ‘stippled’ image (not greyscale, but ‘screened’ to look like it was grey), using an image from our artist guest of honor. They came out great! We had to hand-punch the stock before running it through the printer to get the slots in it, but with a pair of new slot punches, that was really no big problem.

Summary
All in all, a very successful event, despite the snowstorm. All the work that went into CONGO in the last 2 months since our previous large event was well worth it, and made the product even better.

Credit where Credit is due
I would like to thank all the folks who made this possible. Without this team volunteering many many many hours of work, we never would have had such a smooth-running registration:

Sarah Twichell – Killer answerer-of-email and registrar-on-the-ball. Sarah answered registration requests and PayPal registrations within minutes of receiving them. The database was always up to date whenever I needed to find someone.

Lisa Wilson – Database geek extraordinaire. Lisa kept the database sane and was also on the front lines of requests and registrations. We had tons of comp lists, updates, and changes going on, and Lisa helped plow through them all, even though she was 2000 miles away in Colorado!

Jonah Safar – Jonah was one of the people who first took a gamble on CONGO with an event he was helping run. Since then he’s been there when I need him for coding, hacks, and general help.

Yonah Schmeidler – Yonah showed up to help with Arisia last year and ended up staying ’til 4am helping with some database and PostScript issues. This year he again helped all through Thursday, doing database updates and maintenance just for the heck of it. He also plays a mean FreeCiv 🙂

Katy (Pancua) – Katy was the badge goddess all through the mayhem on Friday, and helped out all weekend with things that Just Needed To Be Done. She brought a lot of energy to the whole situation, something we all need after spending days in a coat room 🙂

Ben Cordes – Ben is sort of the unofficial roadie for Stonekeep Consulting. He’s worked with me at a ton of events, and not only knows the systems and the processes well, he also knows my quirky management style. Even in spite of that, he keeps coming back for more. This weekend Ben was a great help with setup and maintenance of all the system components, not to mention being a great crowd wrangler.

Catya – I can’t leave Cat out of this thanks. She puts up with an awful lot to let me do these events – and I love her dearly for it. Thanks!

Microsoft Bad Design Perpetuation. Again.

It boggles me how, in this modern day and age, corporations can still consider Microsoft a true source of quality software.

I am an online VAR for a credit card processing service. Part of their service is being able to go online and view residual reports. Sounds fine. When I view the page, though, the font is incredibly tiny. Fortunately, Firefox has ye olde magical Control-+ function, which zooms the fonts up one notch.

I decided to do a little investigation. Using the web developer plugin, I viewed the site’s stylesheet (yes, they actually did use a stylesheet, though they also used in-stream styles, and table layouts out the wazoo). I found these gems:

.Text7
{
font-size: 7pt;
font-family: Verdana, Arial, Helvetica, sans-serif;
}

Searching for the source of this offense, I found the answer in the header of the HTML page:

<meta name="GENERATOR" Content="Microsoft Visual Studio 7.0">
<meta name="CODE_LANGUAGE" Content="C#">

Tell me again why people would use a Microsoft product when the result of said product is CRAP like this? Oh yes. “As long as it looks fine on my IE 6.patched.against.todays.security.holes, it must be okay!” Never mind that it has 3x as much code as necessary, and that huge chunks of that code are CRAP.

Tinky Flingy Revisited

2 years ago I wrote up an article on a Tinkertoy trebuchet I had built. Recently, I got mail from Peter Holley with a story of his own:

I was goofing around with my kids’ Tinkertoys last night, and ended up with a trebuchet. I used a pull string instead of a weight. I also used Tinkertoys themselves for the ammo.
At first it was just popping them to the ground, but after a minor adjustment I had one fly into the wall at an amazing velocity.
We took it to the hall. At one point my 5 year old son ended up on the receiving end and got whacked dead center of the forehead (from 25 feet!) with a wheel we used for the ammo. After we stopped laughing (it really was an accident, but the way it happened was quite humorous) he had a lump on his head.
They ought to put a warning on the container. This isn’t the first lethal toy I’ve made with Tinkertoys. (The other was a kinetic yo-yo; the pieces would often fly off in a random direction at a serious velocity.)
It’s funny to see I’m not the first to do this.
Some notes on the picture: the kit we have is the “Jumbo Builder Set”, and only pieces from the kit were used. The spool piece I used for the hub of the arm kept breaking, so I substituted the connector piece. The best ammo is the spool piece; that’s what bullseyed my son in the forehead. The connector piece works, but doesn’t have much mass. The ammo slides on the yellow stick.

Thanks Pete!

Seti@Home revisited.

Recently I realized I had 2-3 machines here in Chez Geek that were basically idle 95% of the time. I did regular work on them (one is the server for my CONGO cluster, the other is my Windows XP box that I use for, er, very important projects and, er, other network, er… monitoring… stuff).
Ahem. Anyway, the Seti@Home project lets your home computer act as one of the computation engines for the SETI project by downloading small chunks of the data the big radio telescopes pick up and analyzing them, looking for potential signals from remote civilizations.
The Seti@Home project, when it originally started back in 1997-ish, ended up spawning a geek computation contest, as folks banded into teams to see who had the most computing power. Companies like Sun Microsystems ran the Seti@Home client on many of their internal machines (the program runs when the machine is not busy doing other things, which for many computers in the world is about 90% of the time), and racked up huge quantities of ‘work units’ (the measure of how much work a SETI client has done).
Last year the Seti@Home project switched over to BOINC, a more versatile system that allows arbitrary computation to be run on all those idle computers. BOINC has been used for numerous projects, not just SETI.
Unfortunately, today it appears that Seti@Home and BOINC are offline, apparently due to a power outage in Berkeley. Today my poor computers are truly idle, and have nothing to do.
I wonder if they’re bored?

Comments Pro/Con about Mambo?

I’ve been hunting around for a high end content management system (CMS) for a couple sites I’m working with. I personally use MovableType for my CMS, but that’s really geared toward blog operations, rather than full site management, though it certainly can be used that way (witness my Business site, which is totally MT driven).
I came across Mambo while doing my research, and I have to say I’m mighty impressed with what I’m seeing. What I’d like to hear is feedback on the system, pro or con. Has anyone used it, talked with folks who have, or had any other experiences with it?
Thanks in advance 🙂