A Homegrown Off Grid Solar Installation

When we were designing our cohousing community, we put a lot of effort into making sure all the buildings had decent southern exposure so we could add solar panels later on. And over time, many of us did. My house in particular has 4.9kW of solar on the roof, grid-tied to offset our consumption.


But I’m finding the grid-tie system unsatisfying. While it’s helping lower our electric bills, the 15-20kWh the panels generate (on a good day!) isn’t really enough to offset the 80kWh per day the house can hit during the winter (minisplits are great air conditioners, but only mediocre heaters).

And then of course there’s the fact that grid-tie systems do not actually provide ‘power backup’. They generate power only when the sun is shining, and that power is ‘mixed’ into the power draw for the house. If the grid goes down, the solar panels are automatically disconnected. Everything goes dark, as it were. There’s no storage mechanism, and no ‘power backup’. While I understand why not (battery / storage / backup systems are complex and expensive), I still feel like it’s something I should have, or at least understand.

The Beginnings of a Project

A few years ago my Makerspace picked up a stack of discarded solar panels. We tested them: they were 36-volt, 305-watt panels, and best of all, they were free for the taking. I decided to see if I could build my own ‘off grid’ solar installation. I’d start small, with pretty limited capacity – the goal was to be able to power my shed, which has all my tool batteries, lighting, sound when I’m working, and my power tools when doing anything requiring 110v power (like the grinder or lathe). Most of my tools are battery powered, so having a charging station that’s driven from solar panels was appealing.

The first step was learning all the terminology and all the components in a battery-backed solar power station.

The basics are easy: volts, amps, watts, kWh. Each of these terms is important to understand, because you have to take them into account when designing a system. The biggest challenge for me was understanding that ‘kWh’ is basically the measure used to describe how much battery storage is available, and how much energy you’d need to provide over time.

My panels operate at 36v and are rated to produce 305 watts of power. W = V*A, so 305 = 36 * A, or roughly 8.5A of current. That’s a pretty good start. In reality, these panels won’t produce anything close to that. After installation, they appear to top out around 200w per panel in perfect conditions.

Now, wattage can be used to describe how much work is happening. For instance, a 100 watt bulb is using 100 watts of power (it doesn’t matter what the voltage is, 100w is 100w). Why is this useful? Because battery capacity in a solar power system and the energy consumed in a day are both measured in kilowatt-hours – one kilowatt of power used for one hour. That 100 watt bulb would use 0.1kWh if left on for an hour. If my batteries had a 5kWh capacity, that bulb could stay running for about 50 hours from a set of fully charged batteries.
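
To make that concrete, here’s the same arithmetic as a quick Python sketch (just the example numbers from above – nothing here is a measurement):

    # Back-of-the-envelope power and energy math, using the examples above.
    panel_volts = 36                  # panel operating voltage
    panel_watts = 305                 # rated panel output
    panel_amps = panel_watts / panel_volts
    print(f"Rated panel current: {panel_amps:.1f} A")               # ~8.5 A

    bulb_watts = 100                  # a 100 watt bulb
    energy_per_hour_kwh = bulb_watts / 1000                         # 0.1 kWh per hour
    battery_capacity_kwh = 5.0                                      # the hypothetical 5 kWh bank
    runtime_hours = battery_capacity_kwh / energy_per_hour_kwh
    print(f"Runtime on a full charge: {runtime_hours:.0f} hours")   # ~50 hours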

Putting the Pieces Together

I started out very modestly on my shed project. I picked up one of the solar panels from the makerspace, mounted it on the roof of the shed, and ran lines into the shed to the controller. I purchased a relatively lightweight MPPT charge controller and wired it in. These are relatively low cost devices that act as a bridge between the solar panels and your battery bank, and prevent the batteries from over-charging. Here’s a good definition:

An MPPT, or maximum power point tracker is an electronic DC to DC converter that optimizes the match between the solar array (PV panels), and the battery bank or utility grid. To put it simply, they convert a higher voltage DC output from solar panels (and a few wind generators) down to the lower voltage needed to charge batteries.

https://www.solar-electric.com/learning-center/mppt-solar-charge-controllers.html/

Batteries being wired up

Now I had a power feed, but I needed batteries. Through a ridiculous series of events, I managed to purchase six 24v 20Ah Lithium Iron Phosphate (LiFePO4) batteries for an extremely good price. That’s 0.48kWh per battery, or about 2.9kWh of battery storage. That’s good for a starting point. These batteries are pretty much the standard chemistry for solar installations, though you wouldn’t usually use a bank like this – you’d get one or two extremely large ones (these are about the size of a lunchbox each).
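
The bank capacity is just volts times amp-hours; a quick sketch of the arithmetic:

    # Capacity of one 24v, 20Ah LiFePO4 battery, and of the six-battery bank.
    volts = 24
    amp_hours = 20
    num_batteries = 6

    per_battery_kwh = volts * amp_hours / 1000        # 0.48 kWh each
    bank_kwh = num_batteries * per_battery_kwh        # ~2.9 kWh total
    print(f"{per_battery_kwh:.2f} kWh per battery, {bank_kwh:.2f} kWh total")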

The next step was making that 2.9kWh of 24v power available for use. In the past, using inverters to convert DC battery power to AC was an extremely wasteful process, but modern inverters are pretty good. I got an extremely cheap 1kW inverter off Amazon that was designed to take 24v input and wired it to the battery bank directly. This gave me 120v power running off a set of batteries that are charged exclusively from solar. I was in business.

The complete installation in the shed. RPi not shown.

At this point, it’s important to note that while this all sounds pretty clean and straightforward, there’s a lot of wiggle room to consider. For instance, solar panels are not perfect – not even close. They only produce power when there is sunlight on them, and here in the northeast, that’s not a big part of the day. On a good winter day I could get 4 or so hours of direct sunlight. My little single 305w panel was absolutely not going to be able to keep up, so I added a second panel and wired them in parallel (which doubles the amperage but keeps the voltage the same). Now, on a good day, I’m generating about 1.7kWh. If my batteries get completely drained, it’ll take about a day and a half to refill them. I’ll definitely need more capacity, but I’m not using the batteries for a lot right now, so this is fine.
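
Roughly, the arithmetic behind that estimate looks like this (a sketch – the ~200w real-world output and 4 sun-hours are the loose observations above, not careful measurements):

    # Rough daily harvest and recharge time for the two-panel array.
    real_watts_per_panel = 200        # observed real-world peak, not the 305w rating
    panels = 2
    sun_hours = 4                     # direct sun on a good winter day

    daily_kwh = real_watts_per_panel * panels * sun_hours / 1000    # ~1.6 kWh/day
    bank_kwh = 2.88                                                 # battery bank from above
    days_to_refill = bank_kwh / daily_kwh
    print(f"{daily_kwh:.1f} kWh/day, ~{days_to_refill:.1f} days to refill from empty")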

Two panels installed, the view from my office.

The solar controller I’m using has a bluetooth module that lets it be monitored remotely, but given that it’s out in the tool shed, and data is only available when a phone is connected to it, I needed something a little more realtime. I hooked up a Raspberry Pi to the controller and set up a small python script to constantly poll the controller for telemetry and post it to MQTT. I then added an MQTT polling function to my Magic Mirror – giving me a realtime display of what was happening in the power shed, including maxes for the last 2 days.
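
I won’t reproduce the controller-specific protocol here since it varies by vendor, but the general shape of the polling script looks something like this sketch. read_telemetry() is a hypothetical stand-in for whatever your particular controller’s Bluetooth or serial interface needs, and the broker hostname and topic are made-up examples; the publishing side uses the common paho-mqtt package.

    #!/usr/bin/env python3
    # Sketch of the Raspberry Pi polling loop: read controller telemetry, publish to MQTT.
    # read_telemetry() is a hypothetical placeholder for your controller's own interface;
    # the broker hostname and topic below are examples, not values from this install.
    import json
    import time

    import paho.mqtt.publish as publish   # pip install paho-mqtt

    MQTT_HOST = "mqtt-broker.local"
    MQTT_TOPIC = "powershed/telemetry"
    POLL_SECONDS = 30

    def read_telemetry():
        # Replace with the real query for your charge controller
        # (serial, Modbus, or Bluetooth, depending on the model).
        return {"pv_watts": 0, "battery_volts": 0.0, "load_watts": 0}

    while True:
        publish.single(MQTT_TOPIC, json.dumps(read_telemetry()), hostname=MQTT_HOST)
        time.sleep(POLL_SECONDS)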

Up And Running – And Next Steps

MQTT Display on a sunny day

So far this has been a stable, successful project. Of course, once it was up and running, winter set in, so I spent less time out in my workshed, but I’m looking forward to having it ready for more use. One of the challenges is keeping things cool in the summer. The smallest AC I can find draws about 4A at 110v, or 440 watts. On a sunny day I can just barely keep up with that, but there’d be something awfully nice about having an AC-cooled workshop that’s 100% solar powered.
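
For a rough sense of what that load means (again, just a sketch from the numbers above):

    # How long a ~440 watt window AC could run from the shed's battery bank alone.
    ac_watts = 4 * 110               # 4A at 110v, about 440 watts
    bank_kwh = 2.88                  # current battery capacity
    hours_on_battery = bank_kwh * 1000 / ac_watts
    print(f"~{hours_on_battery:.1f} hours of AC on batteries alone")   # ~6.5 hours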

I’m planning on upgrading the installation this summer to add 2-4 more panels, bringing my total production closer to 1kW. This will unfortunately push past the capacity of my MPPT controller, so that’ll need an upgrade. And of course, I’d love to figure out how to add more batteries for capacity.

Ultimately I’d love to find a way to wire this into the house. Right now there’s no simple way to do this other than running an extension cord through a window and running some internal gear on it. Short of adding a new panel to house mains and putting a smart switch in place, I may simply need to live with having a secondary power source powering SOME things, separate from the house.

Parts List

I Built an Evil Genius Sign for Halloween

It’s no secret I’m a huge fan of Warner Brothers cartoons. My sister and I were basically raised on this stuff, and so much of our cultural reference points (and humor) comes from watching Bugs Bunny when we were growing up.

So, as Halloween approached, I thought it might be cool to recreate an iconic image from the 1952 cartoon “Water Water Every Hare”, where Bugs is taken to a big scary castle on the edge of a waterfall. The castle is inhabited by, naturally, an Evil Scientist, who advertises the fact with a blinking sign on the towers of his castle.

Okay, I’m not really a scientist, I’m an engineer, but I figured I could apply a little artistic license and make a sign like that for my house for Halloween.

I wanted it big enough so I could put it in an upstairs window and have it visible from the pathway. We get a LOT of kids through our community over Halloween, and tons of parents as well (since mostly the parents would get the reference), so it needed to be visible. In order to constrain the glare, I decided to put it in basically a shadowbox configuration: an enclosed box with LED lighting inside and a cutout pattern on the front that would show the text.

First step was to use the laser cutter at the Makerspace to cut out the lettering. As anyone who does stencils will recognize, the second line (“BOO”) would have floating elements in it, and would have to be glued down after the box was made.

I found some old acrylic sheeting that still had one strip of white backing on it, which made a dandy diffuser, as well as a place to mount the center parts of the lettering.

Next, based on the size of the lettering, I whipped up a box out of some scrap wood, and painted it black. I also painted the letter stencils so the shadowmask wouldn’t show up at night, but the lettering shining through would.

The colored lighting was done with some LED strips and an Arduino. The sketch was painfully simple: first row on, wait a second, off, wait a half second, second row on, wait a half second, off, then wait a half second and repeat. The most challenging part was soldering up the strips (I needed 3 rows), and mounting the Arduino.

The only thing I had to go ‘buy’ was the backing board. A quick trip to Michaels got me a sheet of the plastic corrugated ‘cardboard’ for $4. This stuff is awesome, and I think I’m going to use it more in future projects. I mounted the LED strips and the arduino to it initially using hot glue, but while that’s the default ‘go to’ for DIY projects, I ended up ziptying the strips to the backing board, and doing the same for the arduino. Since the board is flexible, hot glue just didn’t make sense.

Once everything was screwed together, it was just a matter of putting it in the window and plugging it in. Yay! It worked!

I slightly misjudged the width of the window, so it doesn’t quite have the margins I had hoped, but when it got dark, it looked great. Very happy with the end result!

Best Amazon Echo Dot Wall Mount Ever

I have 4 Echo Dots, plus a full size Echo in the living room. Love the durned little buggers, and calling out ‘alexa!’ has become a normal part of everyday life. I use it for music, news, shopping lists, and home control of lights and dimmers. I can see the days of carrying on conversations with your house getting closer and closer.

The Dots are cute, but they need to sit on a surface somewhere, which takes up space and adds clutter. I’d been digging around to try and find a 3d printed mount or something similar so I could mount the Dots on the wall.

This morning while cleaning up my workbench, I realized there was a very simple way of hanging the dot on the wall. A pair of 3″ framing nails later, and voila. The dot is up and off my workbench, it’s stable, the speaker is cleared enough to be heard and sounds fine, victory.

I know many people don’t have 2×6 studs exposed everywhere, but goshdarnit, this was a quickie fix that works great.

Creating Timelapse Videos from a Synology NAS

About a year and a half ago, I bought a Synology 216+ NAS. The primary purpose was to do photography archiving locally (before syncing them up to Amazon S3 Glacier for long term storage). The box has been a rock solid tool, and I’ve been slowly finding other uses for it. It’s basically a fully functional Linux box with an outstanding GUI front end on it.

One of the tools included with the NAS is called ‘Surveillance Station’, and even though it has a fairly sinister name, it’s a good tool that allows control and viewing of IP connected cameras, including recording video for later review, motion detection, and other tidbits.  The system by default allows 2 cameras free, but you can add ‘packs’ that allow more cameras (these packs are not inexpensive – to go up to 4 cameras cost $200, but given this is a pretty decent commercial video system, and the rest of the software came free with the NAS, I opted to go ahead and buy into it to get my 4 cameras online).

It just so happens that in September 2017, we had a contractor come on site and install solar panels on several houses within our community. What I really wanted to do was use the Synology and its attached cameras to not only record the installation, but do a timelapse of the panel installs. Sounds cool, right?

Here’s how I did it.

The Cameras

The first thing needed, obviously, was cameras. They needed to be wireless, and relatively easy to configure. A year or two ago, I picked up a D-Link DCS-920L IP camera. While the camera is okay (small, compact, pretty bulletproof), I was less than thrilled with the D-Link website and other tools. They were clunky and poorly written. A little googling around told me “hey, these cameras are running an embedded OS that you can configure apart from the D-Link tools”. Sure enough, they were right. The cameras have an ethernet port on them, so plugging that into my router and powering up let me see a new MAC address on my network. http://192.168.11.xxx/ and I got an HTTP authentication page. Logging in with the ‘admin’ user, and the default password of… nothing (!), I had a wonderful screen showing me all the configuration options for the camera. I’m in!

First thing, natch, I changed the admin password (and stored it in 1Password), then I set them up to connect to my wireless network. A quick reboot later, and I had a wireless device I could plug into any power outlet, and I’d have a remote camera. Win!

Next, these cameras needed to be added to the Synology Surveillance Station. There’s a nice simple wizard in Surveillance Station that makes adding an IP camera pretty straightforward. There’s a pulldown that lets you select what camera type you’re using, and then other fields appear as needed. I added all of my cameras, and they came up in the grid display no problem. This is a very well designed interface that made selecting, configuring, testing, and adding the camera(s) pretty much a zero-hassle process.

If you’re planning on doing time lapses over any particular length of time, it’s a good idea to go into ‘Edit Camera’ and set the retention period to something long (I have mine set to 30 days). This’ll give you enough room to record the video necessary for the timelapse, but you won’t fill your drive with video recordings. They’ll expire out automatically.

At this point you just need to let the cameras record whatever you’ll be animating later. The Synology will make 30 minute long video files, storing them in /volume1/surveillance/(cameraname).

For the next steps, you’ll need to make sure you have ssh access to your NAS. This is configured via Control Panel -> Terminal / SNMP -> Enable ssh. DO NOT use telnet. Once that’s enabled, you should be able to ssh into the NAS from any other device on the local network, using the port number you specify (I’m using 1022).

ssh -p 1022 shevett@192.168.11.100

(If you’re using Windows, I recommend ‘putty’ – a freely downloadable ssh client application.)

Using ‘ssh’ requires some basic comfort with command line tools under linux.  I’ll try and give a basic rundown of the process here, but there are many tutorials out on the net that can help with basic shell operations.

Putting It All Together

Let’s assume you’ve had camera DCS-930LB running for a week now, and you’d like to make a timelapse of the videos produced there.

  1. ssh into the NAS as above
  2. Locate the directory of the recordings.  For a camera named ‘DCS-930LB’, the directory will be /volume1/surveillance/DCS-930LB
  3. Within this directory, you’ll see subdirectories with the AM and PM recordings, formatted with a datestamp.  For the morning recordings for August 28th, 2017, the full directory path will be /volume1/surveillance/DCS-930LB/20170828AM/.  The files within that directory will be datestamped with the date, the camera name, and what time they were opened for saving.
  4. Next we’ll need to create a file that lists all the recordings from this camera that we want in the timelapse.  A simple command to do this would be:
    find /volume1/surveillance/DCS-930LB/ -type f -name '*201708*' > /tmp/files.txt

    This gives us a file in the tmp directory called ‘files.txt’ which is a list of all the mp4 files from the camera that we want to timelapse together.

  5. It’s a good idea to look at this file and make sure you have the list you want. Type
    pico /tmp/files.txt

    to open the file in an editor and check it out.  This is a great way to review the range of times and dates that will be used to generate the timelapse.  Feel free to modify the filename list to cover the range of dates and times you want to use for the source of your video.

  6. Create a working directory.  This will hold your ‘interim’ video files, as well as the scripts and files we’ll be using
    cd 
    mkdir timelapse
    cd timelapse
  7. Create a script file, say, ‘process.sh’ using pico, and put the following lines into it.  This script will do the timelapse processing itself, taking the input files from the list created above and shortening them down to individual ‘timelapsed’ mp4 files. The ‘setpts’ value defines how many frames will be dropped when the video is compressed. A factor of .25 will take every 4th frame. A factor of .001 will take every thousandth frame, compressing 8 hours of video down to about 30 seconds.
    #!/bin/bash

    # Shrink each recording listed in /tmp/files.txt into a numbered,
    # sped-up clip (0.mp4, 1.mp4, ...) in the current directory.
    counter=0
    for i in `cat /tmp/files.txt`
    do
        ffmpeg -i "$i" -r 16 -filter:v "setpts=0.001*PTS" ${counter}.mp4
        counter=$((counter + 1))
    done
  8. Okay, now it’s time to compress the video down into timelapsed short clips.  Run the above script via the command ‘. ./process.sh’.  This will take a while.  Each half hour video file is xxx meg, and we need to process that down.  Expect about a minute per file; if you have a day’s worth of files, that’s 24 minutes of processing.
  9. When done, you’ll have a directory full of numbered files:
    $ ls
    1.mp4
    2.mp4
    3.mp4
  10. These files are the shortened half hour videos.  The next thing we need to do is ‘stitch’ these together into a single video.  ffmpeg can do this, but it needs a file describing what to load in.  To create that file, run the following command:

    ls *.mp4|sort -n| sed -e "s/^\(.*\)$/file '\1'/" > final.txt
  11. Now it’s time to assemble the final mp4 file.  The ‘final.txt’ file contains a list of all the components, all we have to do is connect them up into one big mp4.
    ffmpeg -f concat -safe 0 -i final.txt -c copy output.mp4
  12. The resulting ‘output.mp4’ is your finalized video.   If you’re working in a directory you can see from the Synology desktop, you can now play the video right from the web interface.  Just right click on it, and select ‘play’.

Here are two of the three timelapses I did, using a remote camera in my neighbor’s house.  Considering the low quality of the camera, it came out okay…

This entire tutorial is the result of a lot of experimentation and tinkering.  There are holes, though.  For instance, I’d like to be able to set text labels on the videos showing the datestamp, but the ffmpeg that’s compiled on the NAS doesn’t have the text extension built into it.

Let me know if you have any suggestions / improvements / success stories!

Using Amazon Kindle Fire HD’s as Registration Terminals

Even though I’m not working on CONGO as much anymore, I’m still helping out with registration at a couple of events, and I’m always looking for better tools and gear to use. I originally designed registration to use cheap, network-bootable PCs, but that was so 15 years ago. The new hotness is small, inexpensive tablets. So could you put together a registration environment using some cheap tablets? Sure.

I’m helping an event that’s using Eventbrite for registration services. I’d helped out at a different event about a year ago, and was impressed with the tools Eventbrite offered. The best part was Eventbrite Organizer, a mobile app for iOS and Android that basically gave you a live dashboard, but also allowed super-fast checkins using a QR code scan. Think of scanning a boarding pass when boarding an airplane. The process is very similar.

The only drawback was that I needed a series of tablets that were roughly the same (bringing batches of workstations that are all different is a sure way to get headaches). I didn’t think buying a stack of iPads was going to make sense, and el cheapo tablets from ebay and amazon are sketchy.

3 Kindle Fires being configured as registration terminals
I saw a deal come across Woot for Amazon Fire HD 7″ Tablets for… $33. Each. After digging around on the net, it looked like it was possible to load non-Amazon software on these – it just took a little bit of jiggling. I’ve rooted Android tablets before, but it’s not a pleasant experience. I was seeing documentation that allowed for the Play store to be activated without a lot of yak shaving, so I decided to go all in.

I ordered 3 of the tablets, and they arrived a few days later.

First impressions – these are really nice. The design and polish is excellent, they fit well in the hand, and have exceptional screens. They have excellent battery life, and front and rear facing cameras. For $33, there’s not much to go wrong with here.

Here’s the steps I went through to get them up to ‘useable’ status.

  • First, charge them up, natch. They have great batteries, and the entire upgrade process and installation can happen on battery, but really, just get ’em charged.
  • Next, power up and log into your Amazon account. All the Fires have to be tied to an amazon login. Using the same one on each is fine (Amazon supports many Kindles per account).
  • Repeatedly go into the System settings (swipe down from the top) and select Device Information -> System Update. There are a good 6 full OS updates that have to happen to bring your device up to FireOS 5.3.x or later. This can take upwards of an hour and a lot of reboots, but at the end, you’ll have a fully upgraded device.
  • Next, we’re going to need to install APK’s that are not ‘blessed’, so you have to tell the Fire to accept them. Go into settings -> Security settings and check the switch that says “Allow third party apps”.
  • Download and install a file manager. I used ES-File Explorer, which is very popular, but I’ve seen others say “don’t use this it doesn’t work”. I suspect the ‘not working’ has since been fixed. It’s worked fine on 3 devices so far.
  • Next, pull down the APK’s via the Fire’s Silk Browser. Go to this thread on the XDA Developers forum and click on each of the APK links, and download the files, in order, from top to bottom.
  • Once they’re downloaded, start up ES File Explorer and navigate to the Downloads folder. You’ll see 4 APK’s there. Click on them from RIGHT TO LEFT (which will install the ‘oldest’ one first, and the Play store last).
  • After each of the APK’s is installed, launch the Play store, log in with your Google account, and you are all set.

Now that the Fire can install third party apps via the Play store, all we needed to do was install Eventbrite Manager and log into it with an access-limited login we created just for this event. We’re going to let general joe schmoes check people in, and having access to refunds, people’s personal information, etc. didn’t seem like a good idea – so a generic Eventbrite login that ONLY allows for checkins was created, and that’s what we logged the tablets into.

I also picked up a handful of desk mounts with really strong gooseneck stalks. Because we’re going to be scanning receipts via the rear camera, the tablet needs to be held off the desk easily.

And we’re done! The Eventbrite Manager app syncs the attendee list whenever it’s connected to the internet. So we can go ahead and check in people super-fast (with a very satisfying BADEEP whenever a successful scan happens), and not have to rely on hotel internet connectivity (which can be notoriously sketchy). At the end of the day, we have a full record of everyone who has checked in and when.

OwnStar – Vulnerability in OnStar Application for GM vehicles

Hack of OnStar Remotelink lets attacker unlock, remote-start, and track cars.

The OwnStar device can detect nearby users of the OnStar RemoteLink application on a mobile phone and can then inject packets into the communication stream to the phone, getting it to give up additional information about the user’s credentials. Those credentials can then be used to gain access to the vehicle’s OnStar account and the full functionality of the OnStar RemoteLink app.

Kamkar says the vulnerability is in the app itself and not the OnStar hardware in GM vehicles. He added that GM and OnStar are working to correct the flaw in the vulnerable mobile application. GM customers who use OnStar can protect themselves for the time being by not using the RemoteLink app.

Good thing I don’t have a GM vehicle that heavily uses OnStar remote services.

Source: ArsTechnica