About a year and a half ago, I bought a Synology 216+ NAS. The primary purpose was to archive my photography locally (before syncing it up to Amazon S3 Glacier for long-term storage). The box has been a rock-solid tool, and I’ve been slowly finding other uses for it. It’s basically a fully functional Linux box with an outstanding GUI front end on it.
One of the tools included with the NAS is called ‘Surveillance Station’, and even though it has a fairly sinister name, it’s a good tool that allows control and viewing of IP-connected cameras, including recording video for later review, motion detection, and other tidbits. The system allows 2 cameras for free by default, but you can add ‘packs’ that allow more cameras. These packs are not inexpensive (going up to 4 cameras cost $200), but given that this is a pretty decent commercial video system, and the rest of the software came free with the NAS, I opted to buy in and get my 4 cameras online.
It just so happens that in September 2017, we had a contractor come on site and install solar panels on several houses within our community. What I really wanted to do was use the Synology and its attached cameras not only to record the installation, but to do a timelapse of the panel installs. Sounds cool, right?
Here’s how I did it.
The Cameras
The first thing needed, obviously, was cameras. They needed to be wireless and relatively easy to configure. A year or two ago, I picked up a D-Link DCS-920L IP camera. While the camera is okay (small, compact, pretty bulletproof), I was less than thrilled with the D-Link website and other tools. They were clunky and poorly written. A little googling around told me “hey, these cameras are running an embedded OS that you can configure apart from the D-Link tools”. Sure enough, they were right. The cameras have an ethernet port on them, so plugging that into my router and powering up let me see a new MAC address on my network. Browsing to http://192.168.11.xxx/ got me an HTTP authentication page. Logging in with the ‘admin’ user and the default password of… nothing (!), I had a wonderful screen showing me all the configuration options for the camera. I’m in!
First thing, natch, I changed the admin password (and stored it in 1Password), then I set the camera up to connect to my wireless network. A quick reboot later, and I had a wireless device I could plug into any power outlet, and I’d have a remote camera. Win!
Next, these cameras needed to be added to the Synology Surveillance Station. There’s a nice simple wizard in Surveillance Station that makes adding IP cameras pretty straightforward. There’s a pulldown that lets you select what camera type you’re using, and then other fields appear as needed. I added all of my cameras, and they came up in the grid display no problem. This is a very well designed interface that made selecting, configuring, testing, and adding the camera(s) pretty much a zero-hassle process.
If you’re planning on doing timelapses over any particular length of time, it’s a good idea to go into ‘Edit Camera’ and set the retention period to something fairly long (I have mine set to 30 days). This’ll give you enough room to record the video necessary for the timelapse, but you won’t fill your drive with video recordings. They’ll expire out automatically.
At this point you just need to let the cameras record whatever you’ll be animating later. The Synology will make 30-minute-long video files, storing them in /volume1/surveillance/(cameraname).
For the next steps, you’ll need to make sure you have ssh access to your NAS. This is configured via Control Panel -> Terminal / SNMP -> Enable ssh. DO NOT use telnet. Once that’s enabled, you should be able to ssh into the NAS from any other device on the local network, using the port number you specify (I’m using 1022).
ssh -p 1022 shevett@192.168.11.100
(If you’re using Windows, I recommend ‘PuTTY’, a freely downloadable ssh client application.)
Using ‘ssh’ requires some basic comfort with command line tools under linux. I’ll try and give a basic rundown of the process here, but there are many tutorials out on the net that can help with basic shell operations.
Putting It All Together
Let’s assume you’ve had camera DCS-930LB running for a week now, and you’d like to make a timelapse of the videos recorded there.
- ssh into the NAS as above
- Locate the directory of the recordings. For a camera named ‘DCS-930LB’, the directory will be /volume1/surveillance/DCS-930LB
- Within this directory, you’ll see subdirectories with the AM and PM recordings, formatted with a datestamp. For the morning recordings for August 28th, 2017, the full directory path will be /volume1/surveillance/DCS-930LB/20170828AM/. The files within that directory are stamped with the date, the camera name, and the time they were opened for saving.
- Next we’ll need to create a file listing all the recordings from this camera that we want in the timelapse. A simple command to do this would be:
find /volume1/surveillance/DCS-930LB/ -type f -name '*201708*' > /tmp/files.txt
This gives us a file in the tmp directory called ‘files.txt’ which is a list of all the mp4 files from the camera that we want to timelapse together.
- It’s a good idea to look at this file and make sure you have the list you want. Type
pico /tmp/files.txt
to open the file in an editor and check it out. This is a great way to review the range of times and dates that will be used to generate the timelapse. Feel free to trim the list down to just the range of dates and times you want as the source of your video.
- Create a working directory. This will hold your ‘interim’ video files, as well as the scripts and files we’ll be using:
cd
mkdir timelapse
cd timelapse
- Create a script file, say ‘process.sh’, using pico, and put the following lines into it. This script will do the timelapse processing itself, taking the input files from the list created above and shortening them down to individual ‘timelapsed’ mp4 files. The ‘setpts’ value defines how many frames will be dropped when the video is compressed. A factor of 0.25 will keep every 4th frame. A factor of 0.001 will keep every thousandth frame, compressing 8 hours of video down to under half a minute.
#!/bin/bash
counter=0
for i in `cat /tmp/files.txt`
do
  ffmpeg -i $i -r 16 -filter:v "setpts=0.001*PTS" ${counter}.mp4
  counter=$((counter + 1))
done
- Okay, now it’s time to compress the video down into timelapsed short clips. Run the above script via the command ‘. ./process.sh’. This will take a while, since each half-hour video file is xxx meg and we need to process it down. Expect about a minute per file; if you have a day’s worth of files, that’s 24 minutes of processing.
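As a quick sanity check on the setpts math, the length of each shortened clip is just the input length times the factor:

```shell
# Expected length of one shortened clip = input seconds x setpts factor.
# awk does the floating-point math, since bash arithmetic is integer-only.
clip=$(awk 'BEGIN { print 1800 * 0.001 }')   # one 30-minute recording
echo "each 30-minute clip shrinks to about ${clip} seconds"
```

Multiply by the number of half-hour files you’re feeding in to estimate the final video length.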
- When done, you’ll have a directory full of numbered files:
$ ls
1.mp4 2.mp4 3.mp4
- These files are the shortened half-hour videos. The next thing we need to do is ‘stitch’ them together into a single video. ffmpeg can do this, but it needs a file describing what to load in. To create that file, run the following command:
ls *.mp4|sort -n| sed -e "s/^\(.*\)$/file '\1'/" > final.txt
- Now it’s time to assemble the final mp4 file. The ‘final.txt’ file contains a list of all the components; all we have to do is concatenate them into one big mp4:
ffmpeg -f concat -safe 0 -i final.txt -c copy output.mp4
- The resulting ‘output.mp4’ is your finalized video. If you’re working in a directory you can see from the Synology desktop, you can now play the video right from the web interface. Just right-click on it and select ‘play’.
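If you want to double-check what that list-building command produces, here’s a quick demonstration in a scratch directory (note that sort -n keeps 10.mp4 after 2.mp4, which a plain alphabetic sort would not):

```shell
# Demonstrate the "file 'name'" list format the concat demuxer expects.
mkdir -p /tmp/timelapse-demo && cd /tmp/timelapse-demo
touch 0.mp4 1.mp4 2.mp4 10.mp4
list=$(ls *.mp4 | sort -n | sed -e "s/^\(.*\)$/file '\1'/")
echo "$list"
# Prints:
# file '0.mp4'
# file '1.mp4'
# file '2.mp4'
# file '10.mp4'
```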
Here are two of the three timelapses I did, using a remote camera in my neighbor’s house. Considering the low quality of the camera, they came out okay…
This entire tutorial is the result of a lot of experimentation and tinkering. There are holes, though. For instance, I’d like to be able to set text labels on the videos showing the datestamp, but the ffmpeg build that ships on the NAS doesn’t include the text-drawing (drawtext) filter.
Let me know if you have any suggestions / improvements / success stories!
There is something slightly ironic about the end of the video… both strings of panels are installed and then it starts raining 🙂
I thought that was funny too 🙂 The panels have been running great since they went online.
I can’t get this to run, due in part to my camera path names having spaces in them. I tried adding ‘ to the beginning and the end of the path in the txt file, but I still get a file-not-found message from ffmpeg.
So the issue is in the for loop around the ffmpeg. The way bash reads input fields like that, it takes any whitespace as a delimiter. Because both newlines and spaces count as whitespace, the spaces in your filenames are causing it to mess up.
There are a couple of ways to fix that. You can use a bash readarray, like this:
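Something along these lines (a rough sketch, with made-up sample paths and an echo standing in for the ffmpeg line):

```shell
#!/bin/bash
# Build a sample list file whose names contain spaces (hypothetical paths).
printf '%s\n' "cam one/file a.mp4" "cam one/file b.mp4" > /tmp/files-demo.txt
# readarray puts each full line into one array element, so the spaces survive.
readarray -t files < /tmp/files-demo.txt
counter=0
for i in "${files[@]}"
do
  # The real script would run ffmpeg here, quoting the filename:
  # ffmpeg -i "$i" -r 16 -filter:v "setpts=0.001*PTS" ${counter}.mp4
  echo "file $counter: $i"
  counter=$((counter + 1))
done
```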
You will probably have to put double quotes around the $i in the ffmpeg line to keep it one parameter.
Hope this helps!
Hi, I’m using this, but the conversion is really slow (about 0.1 fps!!!). Do you know why?
I’ve got a DS212J. Is it too old?
Thanks
The DS212J is a very low-power machine, with only 256MB of RAM and a 1.2GHz CPU. Doing video encoding on that will definitely run slowly. Sorry!
Hi,
on my DS216play, ffmpeg returns this error:
[swscaler @ 0x4834f0] deprecated pixel format used, make sure you did set range correctly
and a 0-byte mp4 file is created.
Any hints?
Thanks
Hello, I get this error on a Synology DSM 6.1.x:
"Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height"
output:
ffmpeg version 2.7.1 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.3 (crosstool-NG 1.20.0) 20150311 (prerelease)
configuration: --prefix=/usr --incdir='${prefix}/include/ffmpeg' --arch=i686 --target-os=linux --cross-prefix=/usr/local/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu- --enable-cross-compile --enable-optimizations --enable-pic --enable-gpl --enable-shared --disable-static --enable-version3 --enable-nonfree --enable-libfaac --enable-encoders --enable-pthreads --disable-bzlib --disable-protocol=rtp --disable-muxer=image2 --disable-muxer=image2pipe --disable-swscale-alpha --disable-ffserver --disable-ffplay --disable-devices --disable-bzlib --disable-altivec --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libmp3lame --disable-vaapi --disable-decoder=amrnb --disable-decoder=ac3 --disable-decoder=ac3_fixed --disable-encoder=zmbv --disable-encoder=dca --disable-encoder=ac3 --disable-encoder=ac3_fixed --disable-encoder=eac3 --disable-decoder=dca --disable-decoder=eac3 --disable-decoder=truehd --cc=/usr/local/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-ccache-gcc --enable-yasm --enable-libx264 --enable-encoder=libx264
libavutil 54. 27.100 / 54. 27.100
libavcodec 56. 41.100 / 56. 41.100
libavformat 56. 36.100 / 56. 36.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 16.101 / 5. 16.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.100 / 1. 2.100
libpostproc 53. 3.100 / 53. 3.100
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/volume2/surveillance/sricam-jardin/20180222AM/sricam-jardin20180222-080550-1519283150.mp4':
Metadata:
major_brand : isom
minor_version : 0
compatible_brands: mp41avc1
creation_time : 2018-02-22 07:05:50
Duration: 00:29:59.67, start: 0.000000, bitrate: 266 kb/s
Stream #0:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 640x360, 180 kb/s, 15 fps, 15 tbr, 1001 tbn, 2002 tbc (default)
Metadata:
creation_time : 2018-02-22 07:05:50
handler_name : VideoHandler
Stream #0:1(eng): Audio: pcm_alaw (alaw / 0x77616C61), 8000 Hz, 1 channels, s16, 64 kb/s (default)
Metadata:
creation_time : 2018-02-22 07:05:50
handler_name : SoundHandler
File 'test.mp4' already exists. Overwrite ? [y/N] y
Output #0, mp4, to 'test.mp4':
Metadata:
major_brand : isom
minor_version : 0
compatible_brands: mp41avc1
Stream #0:0(eng): Video: h264, none, q=2-31, 128 kb/s, 15 fps (default)
Metadata:
creation_time : 2018-02-22 07:05:50
handler_name : VideoHandler
encoder : Lavc56.41.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (libx264))
Hi max, what’s the exact command you’re using?
Hi Dave,
I use this command:
sudo ffmpeg -i "20180222-080550-1519283150.mp4" -r 15 -filter:v "setpts=0.001*PTS" -an test.mp4
but according to someone on Stack Overflow, the ffmpeg seems outdated.
I ran the same command on my computer instead of the NAS itself, and the result was good.
Maybe the version shipped with DSM cannot handle the h264 video format.
Regards
A little improvement I have made is to find videos taken within a time frame.
Assuming the cam is recording around the clock, the only useful videos would be those recorded between, let’s say, 08:00 and 17:00 (the others may be too dark in winter to be useful).
I used an awk command to parse my videos, where the path looks like this:
/volume2/surveillance/sricam-jardin/20180304PM/sricam-jardin20180304-173507-1520181307.mp4
find $PWD -name "*.mp4" | awk -F- '{ if ( substr($4,1,2) == "08") print $0 }'
Note that awk is using the dash as the column separator.
Regards
Made a typo; the correct command is:
find $PWD -name "*.mp4" | awk -F- '{ if ( substr($4,1,2) == "08") print $0 }'
(The comment form kept mangling my pastes, so earlier copies of this command came out wrong.)
Have a look here, where I have also posted:
https://superuser.com/questions/1298960/generate-a-list-of-files-based-on-a-time-pattern-in-filename/1300367#1300367
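For the record, here is that filter working with straight quotes and awk’s == comparison, demonstrated on a couple of sample paths invented to match the pattern above (one at 17:35, one at 08:35):

```shell
# Two sample recording paths following the camera's naming convention.
files="/volume2/surveillance/sricam-jardin/20180304PM/sricam-jardin20180304-173507-1520181307.mp4
/volume2/surveillance/sricam-jardin/20180304AM/sricam-jardin20180304-083507-1520148907.mp4"

# awk splits each path on "-", so $4 is the HHMMSS timestamp;
# keep only recordings whose hour field starts with 08.
matches=$(echo "$files" | awk -F- '{ if (substr($4,1,2) == "08") print $0 }')
echo "$matches"
```

Only the 08:35 recording survives the filter.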
Hi Dave,
Thank you for the nice article. Could you help me with an issue I’m having? I see an error similar to what others reported, and I really don’t know where the problem could be.
Ladis@DiskStation:/volume1/homes/Ladis/@Scripts$ ffmpeg -i /volume1/homes/Ladis/@CAM/Timelapse_Dum/2018-04-18/CAM03-WIFI-20180418-0822415006.jpg -r 16 -filter:v "setpts=0.001*PTS" /volume1/homes/Ladis/@CAM/Timelapse_Dum/tmp/videos/0.mp4
ffmpeg version 2.7.1 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.3 (crosstool-NG 1.20.0) 20150311 (prerelease)
configuration: --prefix=/usr --incdir='${prefix}/include/ffmpeg' --arch=arm --target-os=linux --cross-prefix=/usr/local/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi- --enable-cross-compile --enable-optimizations --enable-pic --enable-gpl --enable-shared --disable-static --enable-version3 --enable-nonfree --enable-libfaac --enable-encoders --enable-pthreads --disable-bzlib --disable-protocol=rtp --disable-muxer=image2 --disable-muxer=image2pipe --disable-swscale-alpha --disable-ffserver --disable-ffplay --disable-devices --disable-bzlib --disable-altivec --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libmp3lame --disable-vaapi --disable-decoder=amrnb --disable-decoder=ac3 --disable-decoder=ac3_fixed --disable-encoder=zmbv --disable-encoder=dca --disable-encoder=ac3 --disable-encoder=ac3_fixed --disable-encoder=eac3 --disable-decoder=dca --disable-decoder=eac3 --disable-decoder=truehd --cc=/usr/local/arm-unknown-linux-gnueabi/bin/arm-unknown-linux-gnueabi-ccache-gcc
libavutil 54. 27.100 / 54. 27.100
libavcodec 56. 41.100 / 56. 41.100
libavformat 56. 36.100 / 56. 36.100
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 16.101 / 5. 16.101
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.100 / 1. 2.100
libpostproc 53. 3.100 / 53. 3.100
Input #0, image2, from '/volume1/homes/Ladis/@CAM/Timelapse_Dum/2018-04-18/CAM03-WIFI-20180418-0822415006.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 53673 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1920x1080, 25 tbr, 25 tbn, 25 tbc
File '/volume1/homes/Ladis/@CAM/Timelapse_Dum/tmp/videos/0.mp4' already exists. Overwrite ? [y/N] y
[swscaler @ 0xf5020] deprecated pixel format used, make sure you did set range correctly
Output #0, mp4, to '/volume1/homes/Ladis/@CAM/Timelapse_Dum/tmp/videos/0.mp4':
Stream #0:0: Video: mpeg4, none, q=2-31, 128 kb/s, 16 fps
Metadata:
encoder : Lavc56.41.100 mpeg4
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> mpeg4 (native))
Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Thank you
Lada
Hi,
Thanks very much for the tutorial. I took it one step further and wrapped it all up in a bash script. I use the Synology DSM Task Scheduler to execute the script at midnight, so every morning I have a time-lapse video of the previous day for both my cameras. I hope it can be of some use to others:
#!/bin/bash
# Start time
START_TIME=073000
# End time
END_TIME=190000
CAMERA_LIST=(House Garden)
TIMELAPSE_DIR=timelapse
TEMP_DIR=output
CUR_DIR=$(pwd)/$TIMELAPSE_DIR
mkdir -p $CUR_DIR/$TEMP_DIR
for CAMERA in "${CAMERA_LIST[@]}"
do
    # Make a list of yesterday's video files to process
    NOW=$(date -d "yesterday 13:00" '+%Y%m%d')
    find /volume1/surveillance/$CAMERA/ -type f -name "*$NOW*.mp4" > /tmp/$CAMERA-files.txt
    # Force base-10 (a leading zero would otherwise be read as octal)
    START_TIME=$((10#$START_TIME))
    END_TIME=$((10#$END_TIME))
    echo Start time: $START_TIME
    echo End time: $END_TIME
    counter=0
    while read line
    do
        # Get the time from the filename
        TIME=$(echo $line | grep -ow '[0-9]\{6\}')
        TIME=$((10#$TIME))
        echo "File #$counter - Time from filename: $TIME"
        if (($TIME > $START_TIME)) && (($TIME < $END_TIME)); then
            # Print the filename to be read
            echo "File #$counter - input filename is: $line"
            TEMP_FILENAME=$CUR_DIR/$TEMP_DIR"/"$CAMERA"-"$counter"-"$TIME.mp4
            echo Output filename is: $TEMP_FILENAME
            # Keep roughly one frame in every 2000 (setpts factor 0.0005)
            ffmpeg -i "$line" -r 16 -filter:v "setpts=0.0005*PTS" $TEMP_FILENAME < /dev/null
        fi
        counter=$((counter + 1))
    done < /tmp/$CAMERA-files.txt
    # Build the clip list ffmpeg's concat demuxer needs, sorted by counter
    (cd $CUR_DIR/$TEMP_DIR && ls $CAMERA-*.mp4 | sort -t- -k2 -n | sed -e "s|^\(.*\)$|file '$CUR_DIR/$TEMP_DIR/\1'|") > /tmp/$CAMERA-timelapse_final.txt
    OUTPUT_FILENAME=$CUR_DIR/$(date -d "yesterday 13:00" +"%Y-%m-%d")"-"$CAMERA".mp4"
    echo Concatenated output file: $OUTPUT_FILENAME
    # Concatenate all the files together
    ffmpeg -f concat -safe 0 -i /tmp/$CAMERA-timelapse_final.txt -c copy $OUTPUT_FILENAME
    # Remove intermediate files
    rm $CUR_DIR/$TEMP_DIR/$CAMERA*
done
Dave, I’d pay you to help me get this going on mine. Please reach out if you can!