I suppose it’s okay to mention every two years or so that I have a BOINC team that pools its members’ idle computing power and puts those spare cycles toward scientific research and analysis. In the two years since I last mentioned it, the number of projects that use the BOINC software has grown and diversified widely, though the core projects (SETI@Home, Folding@home, etc.) are still extremely active.
If you have a machine that’s frequently idle, why not consider installing the BOINC client on it and signing up to join our team? I’ve recently restarted many of my idle clients in an effort to shore up some of our sliding numbers (many of our long-time computing partners have stopped submitting updates, due to job changes or just disinterest).
At our best, about a year ago, we were ranked #662. Currently we’re ranked #935 in the world (out of 71,588 teams), according to our page on Boincstats.com.
If you’d like to join, our SETI@Home project page has signup information, though we also have teams at Rosetta and Predictor, both great projects.
Ok, I’ll fire up a bunch of the unused machines at work. The failure rate on this model has been very high, so stress testing the systems is a good idea.
Is there any easy way to manage a couple of dozen systems at once? Having to fire up a GUI interface for each box is really tedious.
If I get a good job offer, I may have a couple of hundred systems to use spare cycles on, so management is a big issue.
I wonder if we could set this up on our giant >400-machine server farm at work. Of course, they spend most of their time running automated tests, so they have fewer cycles free than one might think.
@matt – There are a couple of conversations about cluster management floating around. Once a single installation is done, you can pretty much just duplicate it around.
What I did here was install BOINC via Debian packages and enable the remote interface. Then I ran the BOINC Manager on my workstation, connected it to the boinc instance on each machine, and subscribed it to the project. With the remote interface, I can ‘peek in’ any time and see how things are going.
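Roughly, the node-side setup looks something like this (a sketch based on the Debian boinc-client package’s config paths; the workstation IP and password here are just placeholder examples):

```shell
# On each compute node (Debian), as root:
apt-get install boinc-client

# Enable the remote interface: allow the workstation's IP to connect,
# and set the GUI RPC password (both values below are examples).
echo "192.168.1.10" >> /etc/boinc-client/remote_hosts.cfg
echo "mypassword"    > /etc/boinc-client/gui_rpc_auth.cfg

# Restart so the client picks up the new settings.
/etc/init.d/boinc-client restart
```

After that, the BOINC Manager on the workstation can connect to each node with that host/password pair and attach it to the project.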
@ceo – for really large clusters, I’d probably do the package configuration on one machine, then rdist/rsync a set of project config files to each machine, then restart each boinc service. There’s no reason the machines need unique configurations; they just need to know what project to subscribe to under what username (and therefore, what team they’ll be on).
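The fan-out step could be as simple as the loop below (hostnames are hypothetical, and it’s written as a dry run that only echoes the commands – drop the echos to actually execute):

```shell
# Hypothetical node names -- substitute your own.
HOSTS="node01 node02 node03"

for h in $HOSTS; do
    # Push the shared project/account config to each node...
    echo rsync -a /etc/boinc-client/ "root@$h:/etc/boinc-client/"
    # ...then bounce the client so it picks up the subscription.
    echo ssh "root@$h" /etc/init.d/boinc-client restart
done
```

Since every node gets an identical config, there’s nothing machine-specific to template out.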
Oh, one other thing – the BOINC process runs at a super-niced level, meaning that it doesn’t interfere with other tasks at all and only uses idle time. So even if you have busy machines, it’ll just stay out of the way until they quiet down. You’d be surprised how much idle time is available 🙂
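You can check this on any node – the compute processes should show a nice value of 19, the lowest scheduling priority (the process name “boinc” is an assumption based on the Debian package):

```shell
# List any BOINC processes with their nice level (the NI column);
# the [b] bracket trick keeps grep from matching its own command line.
ps ax -o pid,ni,comm | grep '[b]oinc' || echo "no boinc process running"
```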