Video Stabilization tutorial for Jetson TK1
Image-based video stabilization can be performed in many different ways and with many different parameter settings, but in general it is quite a compute-intensive task. OpenCV comes with several image-based video stabilization algorithms, so we will use one of the OpenCV sample programs that performs image-based video stabilization. Note that it is also possible to perform video stabilization using other sensors such as a gyroscope/accelerometer/IMU motion sensor, and ideally video stabilization combines motion sensors with image-based computer vision processing. However, this tutorial covers only image-based video stabilization.
- First, make sure you have installed the CUDA toolkit and the OpenCV development packages on your device, by following the CUDA tutorial and the OpenCV tutorial.
- Download the OpenCV samples (they come with the OpenCV source code). You can download this by visiting "http://opencv.org/" in a browser and clicking to download "OpenCV for Linux/Mac", or if you want to download this directly from the command-line then run this on the device:
wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.9/opencv-2.4.9.zip
unzip opencv-2.4.9.zip
(If you don't have the wget or unzip commands, then run "sudo apt-get install wget unzip" to download & install them).
- Simply build the OpenCV Videostab sample program:
cd opencv-2.4.9/samples/cpp
g++ videostab.cpp -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_calib3d -lopencv_contrib -lopencv_features2d -lopencv_flann -lopencv_gpu -lopencv_legacy -lopencv_ml -lopencv_nonfree -lopencv_objdetect -lopencv_photo -lopencv_stitching -lopencv_superres -lopencv_video -lopencv_videostab -o videostab
(If you don't have the g++ command, then run "sudo apt-get install build-essential" to download & install it).
- You can run the videostab demo on a pre-recorded shaky video, such as the "Qt_sample/cube4.avi" file in the OpenCV cpp samples folder. The videostab demo displays graphical output, so you should plug in an HDMI monitor or use a remote viewer such as X11 forwarding, VNC, or TeamViewer on your desktop in order to see the output. By default it performs a 2-pass analysis of the video, so let's tell it to use a single pass (so it could potentially work in real time from a live webcam) by setting "--est-trim=no":
./videostab --est-trim=no Qt_sample/cube4.avi
If you compare the resulting video (stored as "stabilized.avi") with the input video, you will see that the motion has been significantly reduced! For example, to view the video files on the device you can use mplayer:
sudo apt-get install mplayer
mplayer Qt_sample/cube4.avi
mplayer stabilized.avi
Note: this videostab demo is a single-threaded CPU demo, so it runs quite slowly, since it uses neither the CUDA-accelerated GPU of the Tegra K1 nor the multiple CPU cores.