The year was 2005. The dotcom days were over, and even though the fear of Windows NT taking over the world was fading, Linux was still considered a “hacker” OS – something not to be taken seriously. Of course, the cool kids all knew that Linux was going to take over the world. Right? Right?
In May 2005, Nokia announced the N770 tablet: a full-on tablet computer with Bluetooth, Wi-Fi, and audio tools, all running Linux in a handheld form factor. While not the first portable handheld Linux device (my Sharp Zaurus SL5500 is an earlier example), the N770 grabbed my attention as something truly exciting. I wanted one in the worst way.
Alas, the reviews of the N770 were not kind. It was slow. It had very limited memory and storage. The battery life wasn’t so hot. I still wanted one, but couldn’t bring myself to fork over the couple hundred bucks to make it happen. Not 2 years later Apple released the iPhone, and the world of handheld computing was forever changed.
On the inside, the specs were interesting, but not particularly staggering.
I’ve always wanted to tinker with the N770, and at the last VCF-East (where I picked up my copy of Wizardry), a nice fellow gave me one that he wasn’t using. I was ecstatic. The device is much as I had read – small, lightweight, with a neat little aluminum shell it slides out of. However, there was no power supply, so I couldn’t turn it on. Nokia devices of this generation (including the phones) used a very, very small barrel connector for power, and I didn’t have one. A quick eBay search turned up a power supply, and I ordered it.
Two weeks later I had my power supply, and plugged in the N770… and… nothing really happened. After a few minutes, the Nokia logo would flash, then flash again, then flash again. My N770 was busted.
FINE, sez me. I had the fever, and nothing was going to stop me. eBay again! This time I waited a few days and ended up purchasing another N770 for $40 delivered. Take that!
THIS one arrived with a power supply, and in fairly decent shape (no stylus, though – the first one I got did have one). I plugged it in, powered it up, and yay! It worked!
Okay, yeah. It’s slow. Connecting to wifi can take 2-3 minutes (!), and if you get the password wrong, you have to go through the process again.
The interface is… confusing. I understand it’s Maemo, a GUI on top of the Linux core that has been updated and modified a lot since then, but there’s a lot of guesswork involved between the navigation buttons, the touchscreen controls, and the buttons on the top of the unit. It really feels like they didn’t quite know what to do with a tablet. Is it all touchscreen? Or is it buttons and light keys, with the touchscreen stuff tacked on as a ‘cool’ factor?
Nonetheless, it’s a cute little toy to play with, and I love having a working one as part of my collection.
The “Magic Mirror” craze got pretty big in the hacker community a few years ago. For those who may not be familiar with them, a Magic Mirror is set up using a small display behind a two-way mirror to add text and information to your bathroom (or wherever) mirror. It’s pretty cool, and can be done at very low cost and with only a little bit of tech know-how.
I’ve always loved having ‘displays’ around my workspace – showing information that doesn’t need to sit on my ‘work’ monitors, but is handy to glance at: dashboards showing system status, or even just the date, time, and the weather outside.
A few months ago I decided to take a spare monitor mounted on the wall over my desk at home and turn it into a permanent display. It would show my current calendar, weather, stock prices, stuff like that. I got to work.
This part is probably the easiest. I used a spare 24″ LCD monitor I had originally mounted to be a sort of TV display. It wasn’t showing anything yet, so I just co-opted it for the Mirror display. It had an HDMI port on it, so it was perfect.
The second component is a Raspberry Pi 3 I had lying around from some other project. This particular Pi is pretty old, so using it to just drive a mostly static display seemed great. This one has a case and power supply, so I was able to just stick it to the back of the monitor, coil up an HDMI cable next to it, and I was all set.
A small note here. A second display I built for our Makerspace actually uses the monitor itself to power the Pi, since the monitor had a USB port on it. A USB-to-micro-USB cable meant that as soon as the monitor was powered up, the Pi would boot and start displaying information. Pretty handy.
When building up these systems, I highly recommend having a keyboard and mouse to plug into the Pi. You can use an ‘all in one’ wireless keyboard/mouse from Amazon – these are great because you don’t have to deal with the cables (particularly when the monitor is mounted on a wall), and you can just unplug the USB adapter and use the keyboard on another project at any time.
The needed packages are pretty straightforward:
Raspbian – the default Linux installation for the Raspberry Pi. Get this installed and up to date (Run the package manager updater after the install to make sure you have the latest and greatest of everything)
Using a command line or the package manager, make sure you have the following secondary tools installed (these are not installed by default):
Chromium (apt-get install chromium-browser)
Magic Mirror 2 – This is the core software that will run your display. Follow the directions on installation carefully: clone the repository and get it ready for use. I use the manual installation procedure; it works best for how I build systems. YMMV.
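For reference, here’s roughly what those steps boil down to on a fresh Raspbian install. This is a sketch of what I ran – the ‘pi’ user, the MichMich/MagicMirror GitHub repository, and the sample-config filename are assumptions from my setup; check the project’s install docs for the current URL and Node requirements.

```shell
# Bring Raspbian up to date and grab the extras (run as the 'pi' user).
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y chromium-browser git

# Manual MagicMirror install: clone the repo and pull in its dependencies.
MM_DIR="$HOME/MagicMirror"
git clone https://github.com/MichMich/MagicMirror.git "$MM_DIR"
cd "$MM_DIR" && npm install

# Start from the sample config and edit from there.
cp config/config.js.sample config/config.js
```

Once npm install finishes cleanly, you’re ready to move on to configuring the host.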
Configuring the Host
At this point, I’m assuming the manual configuration of the software above has gone correctly, and you’re able to either use the Raspbian browser or Chromium to connect to http://localhost:8080/ on the Pi and view something approaching the display you want.
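If you’d rather sanity-check the server from a shell first (handy over ssh, before fiddling with the browser), any HTTP 200 back from the port means MagicMirror is serving:

```shell
# 8080 is MagicMirror's default port; adjust if you've changed it in config.js.
MM_URL="http://localhost:8080/"
curl -s -o /dev/null -w '%{http_code}\n' "$MM_URL"
```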
Now, this is where I’ve seen a lot of tutorials and other reference material fall down. How do you go from a desktop showing your display to something that survives reboots, configures itself automatically, and so on? Well, here’s what I did to make my display boards stable and rebootable without user intervention.
Some of these things are convenience items, some are mandatory.
For the love of all that is holy, set your password. The default ‘pi’ password is well known, please reset it. This device will be running unattended for days/weeks/months. Please change the password.
Rename the host – this is super handy so you can ssh to it easily. Edit the /etc/hostname file and give it a nice name (mine is ‘mirror’). Once you do this, from your local network, you’ll be able to ssh into the Pi via ‘ssh pi@mirror’ – neat trick, huh?
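The rename itself is just two small file edits and a reboot. A quick sketch, assuming the stock ‘raspberrypi’ hostname and the default ‘pi’ user:

```shell
NEW_NAME="mirror"
# /etc/hostname holds just the name; /etc/hosts must agree or sudo gets whiny.
echo "$NEW_NAME" | sudo tee /etc/hostname
sudo sed -i "s/raspberrypi/$NEW_NAME/g" /etc/hosts
# Reboot to pick up the new name:
#   sudo reboot
```

After that, ssh pi@mirror from anywhere on the LAN drops you into a shell on the display.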
Create an autostartup script for the Pi that starts the browser in full screen mode just after the desktop loads. Best way to do this is to edit /etc/xdg/lxsession/LXDE-pi/autostart and put the following code in that file:
@xset s noblank
@xset s off
@lxpanel --profile LXDE-pi
@pcmanfm --desktop --profile LXDE-pi
@chromium-browser --app=http://localhost:8080 --start-fullscreen
Create a cron job entry that will cause the Magic Mirror server software to restart on reboot. The easiest way to do this is to use the ‘crontab -e’ command to make a new entry. Add the following line to the bottom of the file (note: this assumes the Magic Mirror software is installed in /home/pi/MagicMirror – adjust if that’s not the case)
@reboot cd /home/pi/MagicMirror && node serveronly > /home/pi/nodeserver.log 2>&1
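If you’re scripting the setup rather than editing interactively, the same entry can be appended non-interactively. A sketch, again assuming the /home/pi/MagicMirror install path:

```shell
# The @reboot entry to install; && means node only runs if the cd succeeded.
ENTRY='@reboot cd /home/pi/MagicMirror && node serveronly > /home/pi/nodeserver.log 2>&1'

# crontab -l exits nonzero if no crontab exists yet, hence the 2>/dev/null.
( crontab -l 2>/dev/null; echo "$ENTRY" ) | crontab -
```

Run crontab -l afterwards to confirm the entry landed.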
On reboot, your mirror software should come up cleanly. Here’s a small trick, though, that makes remote maintenance easy. If you make a change to the server config – add a new module, change sources, etc. – and, like me, you’ve long since detached the keyboard and mouse from the unit, this little command will force the Chromium browser to do a reload, bringing in the changes you made to your config file. No need to reboot!
DISPLAY=:0 xdotool key F5
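Combined with the ssh access set up earlier, the whole remote-update dance looks something like this. The ‘pi@mirror’ host and the MagicMirror config path are assumptions from my setup – adjust to taste:

```shell
MIRROR_HOST="pi@mirror"                 # user@host from the earlier rename step
RELOAD_CMD='DISPLAY=:0 xdotool key F5'  # target the Pi's local X display

# Push the edited config over, then poke Chromium to reload it.
scp config/config.js "$MIRROR_HOST:MagicMirror/config/config.js"
ssh "$MIRROR_HOST" "$RELOAD_CMD"
```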
Conclusions / Sum-up
I’ve been running my display at home, and the second display up at the lab for a few months now. I’ll write some more on a few of the modules I’ve used (hooking up to my home automation stuff has been interesting), but that’ll be in a future article. I love having the date, time, calendar, stock prices, and weather always visible. The news ticker at the bottom has been sort of ‘cute’, but I really don’t watch it that much.
There are literally hundreds of third party modules available for the mirror software. You can configure the layout of the screen to do just about anything – from showing phases of the moon to displaying the next time a bus will stop in front of your office. Enjoy!
In May, we adopted a beautiful little 8 year old beagle named Daisy.
We’d been haunting adoption / rescue organizations for several months, trying to find a dog that fit our needs. We were close to adopting a greyhound, but something at the last minute pushed us away. A lot had to do with our history with our last adoption… so we were very nervous about getting our next dog.
Daisy was the last unclaimed dog at an open adoption event at Forever Home Rescue, down in Medfield, MA. The folks there were surprised – it’s rare for a dog not to be adopted during their open houses. Daisy was extremely shy, best described as timid. Anytime she meets someone new she immediately goes down on her belly and sort of ‘shoulder rolls’ over, putting her head on the ground, whining faintly. She wasn’t the bouncy, happy dog that most places present for adoption, so when we arrived, we were concerned. “Is… she okay?” Adopting an older dog is something we had considered, but we weren’t sure what it would be like bringing in a new member of the house who had a long and busy life before meeting us… would she be able to adapt to being with us?
But we took the plunge, and we’re so happy we did. Daisy has turned into a wonderful member of the family. Every day she gets more comfortable, more expressive, and more dog-like. She’s been with us two months, and in the last day or two, she’s started hopping up on the couch next to us without us needing to coax her or lift her up. She’s snuggling in next to us when she wants companionship, and is just being a great dog.
That’s not to say there haven’t been challenges. She came to us with some serious medical issues that required immediate surgery, several rounds of medication, and a lot of care just teaching her how to be a real dog. She’s definitely had several litters of pups, and her body shows she’s had some other rough times. But every day, she gets a little happier, a little more engaged… it’s been a great journey.
The migration of the blog off my friend’s shared environment (Thanks Allison!) is complete. Welcome to the new service!
NameCheap has been advertising their EasyWP hosting service for a while, and given its relatively inexpensive model ($3/mo-ish) and the fact that I was having a hard time figuring out how to get full SSL service on the blog, I figured the time was right.
This week is shutdown at work, so I’m technically on vacation. My wife and I have had several conversations about me shaving off my beard. It’s literally been 22 years since I had a clean-shaven face; I honestly wasn’t sure what I looked like. Maybe vacation is a time to explore something new?
Welp, two days ago I took the plunge and shaved. It feels weird, and actually being able to feel air on my chin is pretty bizarre. I don’t think I’ll stick with this in the long run. Partly I’m just too lazy to keep shaving, but the bigger reason is I like my fuzzy face, and the clean-shaven look just doesn’t feel like me.
Many folks have said I look a lot like my dad… not so sure about that, but either way, it’s been a nice experience seeing my real face for once.
In 1982, I was a freshman at Rochester Institute of Technology. I had already been geeking out with Apple and TRS-80 computers through high school, and had enjoyed my share of games, but RIT was a whole new social crew, new computers, and new connections.
I wandered into one of the labs and met up with a group of gamers that would end up being my Crew for my time at RIT. One of the games they were most passionate about was Wizardry, from Sir-Tech Software, a sword-and-sorcery game that is in many ways the root of the “squad-based” RPGs that became so popular. Instead of playing just one character, you controlled a group of 6 at a time, each with different skills and equipment.
The game was fantastic, and I became a huge fan, even writing a lame knock-off of my own called Explorer. (Interestingly, I got mail from a fellow named Rich Katz who apparently did some artwork on Explorer – I vaguely remember him from 1987. He has a great page up about it and the work he did. Thanks Rich!)
Anyway, a few weeks ago I was at the Vintage Computer Festival East down in Wall, New Jersey. I have lots of good stories from it, but one particular exchange stands out.
Old sk00l software racks at #vcfeast – need a boxed Choplifter for your Apple ][+ ? C'mon down! We got it!
The VCF has mounds of software, still in boxes, they were trying to sell / get out of the warehouse. They set up an awesome ‘computer store’ with boxed copies of old software right there on the shelves. It was pretty awesome going through all the old still-boxed software. I noticed a set of boxes on a high shelf, and… yes! They were original copies of Wizardry! But it was for a later version. I wanted the first one, the one I played the most in high school. I spoke with one of the organizers for a while, and he said he’d check in the warehouse to see if there were any of the original boxes. I said I’d be happy to pay for them the next day.
Sunday rolled around, and I stopped over at the store. Sure enough, they had found a boxed copy of Wizardry 1, Proving Grounds of the Mad Overlord, and had put it aside for me. I was a proud owner of an original, still in box, copy of a game I played over 35 years ago.
No, I’m not going to try and use this disk, there are plenty of copies / versions on the internet. But having this box, with all the original documentation, and of course the master disk, and the cover artwork – it’s a great addition to my retro computing museum.
It’s been a long time coming. I’ve been having some serious problems with bandwidth from home, and since I work remotely, this has become a serious issue. Regular daily checks against Speedtest would show abysmal numbers (we’re talking between 8 and 15 Mbps). I knew my cable modem could do better, and after a bunch of debugging, I realized the culprit was most likely the TP-Link Archer C7 router I was using. It was originally supposed to be a decent performer, but in the end it turned out to be absolute crap. So I went shopping.
The fix turned out to be replacing the router with a Nighthawk AC2300 Dual Band Router. The installation was super-duper easy, and setting it up with my reserved IP addresses, guest network, customized DHCP range, etc. was a breeze. The initial config was done via an app on my phone, which was pretty helpful, as it allowed configuration while hopping around on the new Wi-Fi network I was creating.
So how fast is it? Well, here’s what Speedtest is showing me now. To say this is an improvement would be a gross understatement. This is epic.
Thanks Netgear for providing an excellent product with excellent performance results. I’m a fan.
The whole Home Automation craze has been around for years. From the first X10 devices in the 70s and 80s, all the way through wifi enabled refrigerators, the technology to link devices and services in the home has marched onward. I certainly am not immune to the draw of a ‘smart home’, where all my devices are interconnected and can communicate with each other (and I can communicate with them!), but up until recently, the tech for this has been clunky and unimaginative. Sure you could have a big multibutton wired box on your coffee table that could turn on the kitchen lights, but that’s not particularly convenient.
No, the big revolution came when always-on, integrated voice controlled devices like Amazon’s Echo Dot and the Google Home successfully bridged the human / computer interface with easy to use voice commands that didn’t require you to speak like Robbie the Robot. With natural language interfaces available 24/7, without requiring physical button pushing or training, home automation could start to move into the “this makes things easier” territory.
I’ve naturally been attracted to this sort of integration. Having a whole-house ‘personality’ that I could talk to anytime, anywhere, without it being intrusive or burdensome was a big attraction.
How I Did It
The first step in this process was getting Amazon Echo devices into all the rooms. This turned out to be less of a challenge than I expected. Echo Dots are going for $40 and are a decent starting point. I was setting up the 4 rooms in my house, so this was easy (with a full Echo in the living room for good ambient music and general use).
Even before I started setting up the next stage of automation, we found having a House Bot incredibly convenient. With an Echo in every room, you get very comfortable having an answer to basically any question available just by asking. “Alexa, what’s the capital of Wisconsin?”, etc. etc.
But more than that, we use the always-available service for a lot of other things:
Shopping lists – being in the kitchen and realizing we’re almost out of sugar “Alexa, add Sugar to the shopping list” (“I’ve added sugar to your shopping list.”) – when one of us is at the supermarket, we can look at the current list on our phone and see what’s needed, marking things off as we get them.
Timers – This one was a little surprising. “Alexa, set a timer for 10 minutes.” “I’ve set a timer for 10 minutes, starting now.” – this is a great reminder service for anything from something in the oven to remembering to leave to go pick up your kid.
Intercom – because we have Echos in every room, including the kids’ room, it’s nice to be able to use them as an intercom. “Alexa, drop in on the kids’ room” (bdoink) “Hey, what do you want for dinner?”
Music – I have our accounts linked to Spotify, which means I basically have access to all the music in the world, as well as many curated playlists. A lot of times I’ll come down in the morning for coffee, and put on some music with “Alexa, play quiet classical music” – and a nice mix of quiet music will start playing.
Background sounds – We have an active house and neighborhood. Sometimes a nap is needed, and perhaps the general churn of kids playing and doors closing can make that difficult. Asking Alexa to play quiet sounds helps make napping easier. “Alexa, play ocean sounds” is a great way to set some soothing sounds to take a nap to.
Okay, all this is great, but what about the other automation stuff? The lights! What about the lights?
Home automation is frequently associated with ‘turning the lights on and off’. I wanted to be able to do this via Alexa, as well as have some things happen automatically (for instance, the stair lights turning on when you get up to go to the bathroom in the middle of the night). To do this, you need light switches and sensors that can be linked together and controlled.
There are a lot of technologies to do this. With LED lightbulbs replacing CFL bulbs (for good reason), zillions of companies started making Wi-Fi enabled lightbulbs. I’ll be honest, these things seem sketchy AF. Each one is a fully enabled Wi-Fi computer in a lightbulb socket in your house, on your local network. Most people don’t know what those devices are doing, or what external services they’re communicating with. There’s a school of thought that says “Who cares? It’s just a lightbulb!” – but that’s not the point. It’s not a lightbulb, it’s a computer. It’s on your local wireless network in your house, which means it has local access to all the devices on your home network. That nice firewall / router you have? It’s just been bypassed.
Now, many could argue that this is already happening with smart devices like the Echos and other things in the house, which are in regular communication with servers on the internet. And they’d be right – there’s communication happening there that I’m not in 100% control of. But, with a hefty dose of salt, I honestly trust Amazon and Google a lot more than a Chinese company making a $19 Wi-Fi enabled lightbulb that asks me to install an Android app to control the light. Do I blindly trust Amazon and Google? Heck no! But I know a lot of very smart people are analyzing what the Echos and the Google devices are doing. There’s far more transparency there than with these fly-by-night “Smart Device” manufacturers on the net.
Building out the Hub and Devices
Right. Enough of that. Let’s get down to how I built out my network.
First of all, if you’re not going to use Wi-Fi, you need to pick another wireless protocol. There are several to choose from; I ended up choosing Z-Wave. It’s a very common protocol, with many devices and hubs supporting it. When I started this project 2+ years ago, Z-Wave devices tended to be on the pricey side, but the costs have been steadily dropping.
Once you’ve selected a protocol, you’ll need a hub. A hub does all the communication with the devices, and presents that communication to whatever interface you’d like to use. In my case, I wanted a dozen or two devices, and I wanted to talk to them via Alexa as well as web and mobile apps. This is a pretty normal ask, nothing too fancy. I ended up buying a Vera Plus hub. It was relatively inexpensive (at the time, compared to others), had a decent developer community, and I had several friends at MakeIt Labs who were using them, so I had a place to ask questions.
The initial setup was pretty easy. I was able to get the bulbs synced with the hub, and I was able to get the hub communicating with Alexa (though this turned out to be something of a challenge, since the integration was still in beta. I hear that the Alexa integration is much smoother now).
At this point, I had a system that would allow me to control the lights in our living room just by speaking out loud “Alexa, living room lights on please” or, if it was a movie night and we wanted subdued lighting, “Alexa, living room lights to ten percent please”.
A side note here. “Dave, why are you saying ‘please’ to a computer?” – it’s a good question. It turns out that when you’re speaking out loud in an aggressive, short tone, even to a computer, it makes the entire environment around you… less comfortable. Teaching a 10 year old that it’s okay to yell “ALEXA, LIVING ROOM LIGHTS ON” puts everyone on edge. But if you’re polite, and treat all communication with respect, it changes the tone of communication. It helps that you can even thank the bot after it does something. “Alexa, bedroom lights off.” “Okay!” (lights dim) “Thanks!” “You bet!”
This all… surprisingly… worked really well! Having the lights in each room voice controllable was a huge win. I don’t like centralized lighting in a room. I’d rather have 4 lamps around the edges of a space than have one big light. Tying all the lights together in one ‘scene’ where they can all be turned on, off, or dimmed with one command was awesome. This setup ran for almost 2 years.
After it was well established and the family had gotten very comfortable with having a true ‘home automation’ setup, I started to have some problems.
The Vera Plus hub controller is, well, slow. It could take 5-10 seconds for a device to respond to commands, and occasionally the hub would disconnect from Alexa. The UI on the device was PAINFULLY outdated. It had the look and feel of something written by an intern 10 years ago, and they’ve just been maintaining it – adding screens and updating forms – since then, with no one willing to tackle replacing the UI with something more modern and less clunky. It all “worked”, but it was no fun to fiddle with. I was also interested in doing more integration. I wanted a ‘smart lightswitch’ setup where I could see the status of all the lights and all the motion sensors on a tablet on the wall. This wasn’t that idle a need – our houses are very tightly insulated. When someone comes in the front door, you can feel the pressure change in the air in the house, but it’s subtle. I wanted to be able to look up and see if someone had just come in the door downstairs, particularly if I was in the attic.
It was time to look at upgrades.
In the 2+ years I had been building this network, the technology had advanced, and there were many new offerings. The Google and Alexa integrations had improved, and new devices were on the market. I started taking a good long look at the Samsung SmartThings Hub. I had heard about SmartThings, but had also heard the tools were not mature yet, and there were some serious concerns about privacy and stability. The third generation hub, however, was looking very nice, and many of the ‘mysteries’ about how these devices communicate were being cleared up. I started watching the SmartThings subreddit, and it looked like people were doing some good work, so I took the plunge and bought the hub.
I won’t bore you with all the details of setting up the new hub and migrating the devices. The short version is “it happened”. There were naturally bumps (like discovering that, to migrate a Z-Wave device that’s already been set up to a new hub, you basically have to tell the device and the hub to deregister the old connection before you add the new one; this is accomplished via something called Z-Wave Device Exclusion, which seems counterintuitive, but it let me attach the devices to the new hub once I figured that out).
The real pleasant surprise was that Samsung provides an “IDE” for working with Smartthings. It’s a very well designed UI that lets you go in and update, modify, browse, and configure every device attached to the system. This includes adding new functionality through community-written drivers and debugging connectivity issues. This IDE was a breath of fresh air compared to what I was working with on the Vera. I felt that Samsung understood that people doing Home Automation really want full control over the devices and the tools, without going nuts with hacky approaches to the system.
Once the Smartthings hub was up and running smoothly, I wanted to go to my next project, which is having a ‘smart display’ showing the light and motion detector status.
A while back, I picked up a handful of Amazon Fire HD 7″ tablets and modified them to be able to run the Google Play store. I pulled out one of the tablets, charged it, got the software on it all the way up to date, and installed ActionTiles on it. ActionTiles is sort of the ‘standard’ tablet display application for people using SmartThings devices. While not particularly elegant or fancy, it provides a clean, simple touch interface to all the devices on your network. Setting it up and configuring it was pretty easy, and after tinkering with the layout a bit, I mounted the tablet in one of the clamp brackets and set it over my desk. I at last had a live display of my device network that would notify me if the door sensors tripped while I was safely ensconced in my office. Victory!
This has been running now for a day or two, and I’m super-happy with the results. I’m sure I’ll find things that need tuning and updating, but so far, the entire project has been a win. I have several ideas about the next steps, but that’ll have to wait for another day.
I guess it’s getting closer to spring. We spent most of today moving furniture, cleaning, rearranging stuff, etc. The house has been really crowded since M moved in and between her stuff and my old furniture we were just tripping over everything.
So today was “get the stuff out we don’t use”. It was a carefully choreographed process of…
Friday get the storage unit ready to receive furniture.
Saturday morning move everything from the attic and second floor that’s leaving out to the front porch.
Go get the truck from Uhaul.
Get awesome neighbors to load up the truck, follow us to the storage place and Tetris the furniture into place.
Return the truck, tidy up, and fall into a death-like sleep for 2 hours.
Get up and go to a cohousing meeting
For evening entertainment, assemble the new kitchen table and chairs (smaller and better suited to the space).
Now we’re finally collapsing into bed after a damned busy day. But? It felt good. We worked hard, and made the house better without going crazy doing it.
Next? Tuesday we get a washer dryer. We’ve never had one in the house and while I lived by myself, it was ok taking things to the common house. But now that there’s three of us, we really need local equipment. Yay upgrades!
I’ve been using my Olympus PEN-F Micro Four Thirds for about 8 months now, and on the whole, I’ve been super-happy with a number of aspects of it. It’s small, it’s light, the picture quality is excellent, the glass available is very good, and after a relatively busy learning curve, the menus and controls are easy to work with.
That’s not to say it doesn’t have problems. There are several; let’s run them down.
No external displays
I understand this is probably a factor of the small size / mirrorless nature of the beast. But not having any external indicator showing that the camera is on, how many shots are left, or the battery level is a real problem. A very small LCD screen (even on the back) would have helped. Having to power up the camera, wait for the EVF to come alive, and glance through it to see if you have a decent battery is a pain. (BTW, there’s a noticeable delay on the battery reader. It can easily say GREEN, FULL – particularly right after putting the battery in – but 10 seconds later it’s showing almost empty. Beware!)
Slow Focus Speed
This has been noted elsewhere, but the focus time on the unit is quite slow. If you’re working a shot that has subjects at multiple depths, the camera can ‘hunt’ around trying to set AF. I tend to run my camera in AF/MF mode, which means it’ll autofocus, but then you can use the focus ring to adjust to where you want. This is a win, but if the camera is ‘hunting’ for an AF spot, you can’t stop it until it gives up and locks onto something. THEN you can use the manual focus ring. I’d like to see the camera stop autofocusing the moment I touch / move the focus ring.
The controls can be confusing
There are eight turnable dials and five pushbuttons on a device half the size of a paperback book. Many of these are unlabelled because they have a ‘variable’ purpose – they can be reprogrammed to do different things – and this doesn’t count the interface controls on the back (another ten buttons), but at least those are labelled and make sense. I like the big ‘index finger’ wheel on top, which twiddles whatever variable setting you’re currently tuned to. (For instance, I tend to shoot in A mode, which means exposure is automatically set, but my aperture is set by the finger wheel. This lets me change DOF on the fly to get the ‘feel’ I want.) I can’t imagine running in full manual mode and trying to keep track of which dial does what.
Battery life
This is relatively minor, but I wish the camera had either better battery life, or an external power connector. The 2000mAh battery will last about half a day of heavy usage, so I carry 3 of them with me. If I want to do any long exposure work or time lapses, I’m pretty much SOL.
Poor “No Card” handling
Okay, this is the big one, and the reason I decided to write this post. Now, to set the stage, I’m running the latest firmware available (v3.0), so this problem has not been fixed (though it can be with a simple software change). Here it is.
It is TRIVIALLY easy to go out for a shoot and not have a card in the camera, and not notice it.
The camera will operate normally, triggering the shutter and showing all the information in the EVF, but obviously won’t record anything. The ONLY indication that there is no card appears if you’re looking through the EVF and do not have your finger on the shutter release in ‘half press’ mode. Which, honestly, you never do. If I pick up my camera to take a shot, my finger is already on the shutter, setting focus for the shot. I don’t just stare through the EVF unless I’m trying to get a focus point and settings in place.
I’ve caught this problem several times before, and it was just annoying. This past weekend, I went out for a long walk in the city and didn’t realize I had left the card out. I took 20-30 shots, and when I got home that night… saw my card still sitting in the laptop.
“But wait, Dave, isn’t there an indicator in the EVF?” – yes, but it’s very easy to miss particularly in bright light, AND only if you’re not touching the shutter release. The left image is a view through the EVF touching no controls, with no card in it. The right image is with my finger on the shutter release, still with no card in the camera. If I trigger the shutter, it’ll act like it took a shot – blanks the EVF, makes a click-kerchunk sound, and goes back to that display if I leave my finger in place (which I do) :
So, after 8 months carrying the PEN-F full time, what are my thoughts? Would I recommend it?
On average, yes, I would recommend it, but with some caveats beyond just the ones mentioned above. But let’s start with some of the positives.
It’s a beautiful camera. Really, you can’t avoid that. The styling and setup are wonderful, and adhere to the Olympus PEN styling that goes back 50 years. I’m proud to carry it and use it.
It’s very comfortable to use. The controls, though there are a lot of them, are easily accessible, sit comfortably in my hand, and are easy to work with. I added the leather carrying case in the picture, which lets me sling it comfortably under my arm when I’m not using it, and it doesn’t get in the way.
The Micro Four Thirds lens platform is quite well supported, and glass is available at reasonable prices. I have 4 lenses now, and being able to get something like a 300mm-equivalent zoom lens for $99 makes it a great deal.
No need to recap the technical issues above. None of them comes close to a deal breaker – at most they’re irritations. Olympus has patched the camera’s firmware in the past to fix issues, and I hope they’ll fix the No Card issue soon.
It’s expensive. The PEN-F body alone is $999. That’s not cheap, and in an increasingly saturated compact mirrorless market, even a camera this good sits on the expensive side.
I would recommend the platform and camera for people who really are into the styling and are looking for a very good compact camera that is professional and competent enough for serious photography. Is it the same as carrying around a full-size DSLR like a 7D? No – I’d say mostly because of its speed, battery life, and EVF. But do you really need that much weight and bulk for most of your photography? If you want a professional camera you can carry with you full time, with interchangeable lenses and excellent features, and the price doesn’t scare you off, the PEN-F is a great camera.
So, I’m sure folks have heard the news about protests in Paris today. That did happen, and in fact I was right in the middle of it for a good part of the day. How could I miss the opportunity to take my camera into a real live protest?
The very short version is, yes, I was at the protests. Yes, there was tear gas and water cannons and lots of people moving around. There were really only a handful of instigators that were egging the crowds on to do damage, but that was enough.
I primarily stayed outside of the major crowds, but I had my camera with me the whole time. Pictures are here:
And yeah, now I know what tear gas feels like. I don’t recommend it.
I’m back in the US for a week to celebrate Thanksgiving with my family. It’s already been a bit of a culture shock but I’m enjoying recharging my New England American needs, starting with my wife, upon seeing me at the airport, handing me an XL Dunkin Donuts French Vanilla coffee. Ahhh.
Lunch is a peanut butter and jelly sandwich (the French don’t do peanut butter. I missed it.)
Hopefully I can take this time to catch up on a few things before I head back to spend most of December in Paris.
I’ve become a huge fan of the WyzeCam IP cameras. They’re small, very high quality, and have a very good mobile client to connect to them. But sometimes, the mobile client will refuse to start. It comes up with the startup screen, and never proceeds.
Searching Reddit and the WyzeCam forums, I found plenty of people who have seen this happen, but no clear reason why.
I’ve had this happen on occasion on my Samsung S9+, and I’ve finally found a pattern – it’s quite simple actually.
On the loading screen, the client is making its initial connections to WyzeCam’s cloud services. But it’s quite common for providers, corporate networks, and sometimes even hotels or wifi hotspots to block connections to certain services. If the phone cannot reach the cloud service, the app will sit stuck at that startup screen forever, without ever doing anything.
I discovered this when my phone had connected to a mobile hotspot in the office which required authentication to start operating. The phone was connected, but could not reach the internet. The WyzeCam app was sticking at the opening screen. Once I completed the registration, and reloaded the Cam app, it came up super-fast.
I was able to duplicate this experience at a hotel recently as well. The local wifi was extremely crowded and performing badly. The WyzeCam app hung at the startup screen again. As soon as I switched my phone from the WiFi to my carrier data, the screen loaded correctly!
I think Wyze could fix this very easily by giving some feedback on the loading screen – showing that it’s trying to connect, and giving a timeout message if it fails after X amount of time. But for now, this frustrating behaviour is at least easy to understand and deal with.
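To illustrate the kind of fix I mean, here’s a minimal sketch of a “can I reach the cloud?” check with a hard timeout. This is just an illustration, not Wyze’s actual code or endpoint – the hostname would be whatever service the app talks to:

```python
import socket

def cloud_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Attempt a TCP connection to the cloud endpoint, giving up
    after `timeout` seconds instead of waiting forever."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, blocked ports, captive portals
        # that drop traffic, and plain timeouts.
        return False

# The app could run this before (or alongside) its startup spinner:
# if it returns False, show "Can't reach servers - check your
# connection" rather than sitting on the loading screen indefinitely.
```

The key design point is the timeout: a captive portal or a blocking firewall often doesn’t refuse the connection outright, it just swallows the packets, so without an explicit deadline the connect attempt can hang for minutes.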