EBC Exercise 39 Setting Up tidl on X15

Embedded Linux Class by Mark A. Yoder


Here are instructions on how to run TI's Deep Learning (tidl) examples on a BeagleBoard-X15.

Install

Get Robert's tidl repo

x15$ git clone https://github.com/rcn-ee/tidl-api

Now follow the instructions in the readme.md file.

x15$ sudo apt update
x15$ sudo apt install ti-opencl libboost-dev libopencv-core-dev libopencv-imgproc-dev libopencv-highgui-dev libjson-c-dev

Most were already installed and up to date. Install time 38s.

Checkout the most current branch and compile. Use -j2 since we have 2 cores.

x15$ cd tidl-api/
x15$ git checkout origin/v01.02.02-bb.org -b v01.02.02-bb.org
x15$ make -j2 build-api      # 1m31s

The next build puts things in /usr/share/ti/tidl, so create it and give user 1000 (which should be debian) permission to read and write it.

x15$ sudo mkdir -p /usr/share/ti/tidl
x15$ sudo chown -R 1000:1000 /usr/share/ti/tidl/

x15$ make -j2 build-examples   # 4m33s

Extras to install

Here are a few other handy extras to install.

If you get a cmemk error:

x15$ cd /opt/scripts/tools/ ; git pull ; sudo ./update_kernel.sh ; sudo apt upgrade

Fix a path error with

x15$ cd /usr/share/ti/tidl
x15$ sudo ln -s <path to tidl>/tidl-api/examples .

The x15 runs a bit hot. A fan is suggested. You can check the CPU temp with

x15$ cat /sys/class/thermal/*/temp
36600
36200
35800
35400
36200
25625

The units are millidegrees C. A fan will drop the temp some 20 Deg C.
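If you'd rather see the readings in degrees C, a one-line awk conversion works (a convenience sketch; the set of thermal zones varies by board):

x15$ awk '{ printf "%s: %.1f C\n", FILENAME, $1/1000 }' /sys/class/thermal/*/temp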

If you get Gtk-Message: Failed to load module "canberra-gtk-module", run

x15$ sudo apt install libcanberra-gtk-module libcanberra-gtk3-module

Install the image viewer "eye of gnome" for viewing images on the x15.

x15$ sudo apt install eog

Run Examples

Here's how to run some of the examples. From the host computer you need to ssh with the -XC flags so the x15 can access the host's X-windows to display things. You need to ssh as root for the X-Windows authentication to work. Here are instructions for setting a root password, etc.

host$ ssh -XC root@x15

classification

The imagenet demo is looking for one object out of a list of 1000 things. The classification demo is looking for one object (or two if you set TWO_ROIs) out of a small list of 12 or so things.

root@x15$ cd classification
root@x15$ ls
avg_fps_window.h  imagenet1001.txt  Makefile                        stream_config_mobilenet.txt
classlist.txt     imagenet.txt      readme.md                       tidl_classification
clips             images            stream_config_inceptionnet.txt  tidl-sw-stack-small.png
findclasses.cpp   main.cpp          stream_config_j11_v2.txt

stream_config_inceptionnet.txt seems to have a file missing.

stream_config_j11_v2.txt runs but gets the error "Corrupt JPEG data: 2 extraneous bytes before marker 0xd4". So I send stderr to /dev/null

stream_config_mobilenet.txt runs but it looks like the color channels are switched

The following takes live video from a camera (/dev/video0) and displays it on the host. It also displays a list of objects it is looking for and highlights the last object it found. See readme.md for more details.

root@x15$ ./tidl_classification -g 1 -d 2 -e 2 -l ./imagenet.txt -s ./classlist.txt -i 0 -c ./stream_config_j11_v2.txt 2> /dev/null
(Screenshots: a detected "Water Bottle" and the "Objects to recognize" list.)

This will play a video and classify it. Note: The readme.md referenced test50.mp4, but I couldn't find it so I'm using test10.mp4.

root@x15$ ls clips
test10.mp4  test1.mp4  test2.mp4
root@x15$ ./tidl_classification -g 1 -d 2 -e 2 -l ./imagenet.txt -s ./classlist.txt -i ./clips/test10.mp4 -c ./stream_config_j11_v2.txt

See readme.md for more examples.

In main.cpp, line 55, uncomment #define TWO_ROIs to enable two Regions of Interest.

Look in imagenet.txt to see what can be recognized and add them to classlist.txt.
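For example, to look for labels you might add (assuming imagenet.txt lists one label per line, which matches what the demo prints):

root@x15$ grep -i bottle ./imagenet.txt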

imagenet

Run the imagenet demo to recognize any of the 1000 object classes.

root@x15$ cd tidl-api/examples/imagenet
root@x15$ ls
imagenet  imagenet_objects.json  main.cpp  Makefile

Processing live video from /dev/video0

./imagenet -i camera0 2> /dev/null  # Redirect the errors to ignore a message
(Screenshots: a detected "Water Bottle" and the recognition results.)

Processing a still image.

./imagenet -d 2 -e 2 -i IMG_3806.jpg

segmentation

The segmentation example takes an image as input and performs pixel-level classification according to pre-trained categories.

root@x15$ cd <path to tidl>/tidl-api/examples/segmentation
root@x15$ ./segmentation -d 2 -e 2 -i camera0 -w 1200 2> /dev/null

ssd_multibox

SSD is the abbreviation for Single Shot multi-box Detector. The ssd_multibox example takes an image as input and detects multiple objects with bounding boxes according to pre-trained categories.

root@x15$ cd <path to tidl>/tidl-api/examples/ssd_multibox
root@x15$ ./ssd_multibox -d 2 -e 2 -i camera0 -w 1200 2> /dev/null

Others

layer_output and mcbench look like handy tools.

Auto starting

Here are some notes that I hope will lead up to the examples auto starting.

First allow user debian to run sudo without a password. Do this by adding a line to the /etc/sudoers file.

x15$ sudo visudo

Then add the following to the end.

debian ALL=(ALL) NOPASSWD: ALL

Now debian doesn't need to enter a password when using sudo. Use with care!
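You can confirm the change took effect (sudo -k drops the cached credentials, so the next sudo would normally prompt for a password):

x15$ sudo -k
x15$ sudo true && echo "passwordless sudo works"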

Now create an auto start file.

x15$ mkdir -p ~/.config/autostart
x15$ vi ~/.config/autostart/tidl.desktop

Put the following in the file:

[Desktop Entry]
Type=Application
Exec=sudo bash -c "cd /home/debian/exercises/x15/tidl/tidl-api/examples/classification ; gedit & ./tidl_classification -g 1 -d 2 -e 2 -l ./imagenet.txt -s ./classlist.txt -i 0 -c ./stream_config_j11_v2.txt"
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name=TIDL Example
Comment=Just playing

The examples that use the GUI have an error unless you run gedit first. I hope this can be fixed.

Training on new images

Here are instructions for training the network.


Downloading the images

Instructions for downloading the various image data sets are here: https://github.com/amd/OpenCL-caffe/wiki/Instructions-to-create-ImageNet-2012-data

But there are a couple of things you have to do to make it work.

Download

time wget --user <your username> --ask-password -c http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_train.tar
  • Note: You need to use the username and password of your account. Note also that nonpub in older instructions has changed to nnoupb in the URL.
  • It took some 1585 minutes (26.4 hours) to download the training images.
  • Now download the validation images. This took some 2.5 hours for me.
time wget --user <your username> --ask-password -c http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_val.tar

Extract

  • To extract the training data:
mkdir train 
mv ILSVRC2012_img_train.tar train
cd train
tar -xvf ILSVRC2012_img_train.tar    # This took 48 minutes
rm -f ILSVRC2012_img_train.tar
find . -name "*.tar" | while read NAME ; do mkdir -p "${NAME%.tar}"; echo ${NAME} ; tar -xf "${NAME}" -C "${NAME%.tar}"; done

The find command took some 2 hours and 20 minutes. There will be 1000 folders, one for each object class, and each folder will have some 1200 images in it. Make sure to check the completeness of the extraction: you should have 1,281,167 images in the train folder. Check it with

find . -name "*.JPEG" | wc -l    # Should print 1281167
  • To extract the validation data (this took nearly 4 minutes for me):
cd ../ 
mkdir val
mv ILSVRC2012_img_val.tar val
cd val 
tar -xvf ILSVRC2012_img_val.tar
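As a quick sanity check, the ILSVRC2012 validation set has 50,000 images:

find . -name "*.JPEG" | wc -l    # Should print 50000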

Installing caffe-jacinto

While you are waiting for all the images to download, you can start installing caffe-jacinto.

cd ImageNet
git clone https://github.com/tidsp/caffe-jacinto.git
git clone https://github.com/tidsp/caffe-jacinto-models.git  # Took about 40 seconds

These are updated instructions based on https://github.com/tidsp/caffe-jacinto/blob/caffe-0.17/INSTALL.md. Go to https://www.anaconda.com/distribution/ to download and install Anaconda Python 2.7.

wget https://repo.anaconda.com/archive/Anaconda2-2018.12-Linux-x86_64.sh
chmod +x Anaconda2-2018.12-Linux-x86_64.sh
./Anaconda2-2018.12-Linux-x86_64.sh  # Took about 10 minutes
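After the installer finishes, open a new shell (or source ~/.bashrc, assuming you let the installer add Anaconda to your PATH) and check that its Python comes first:

source ~/.bashrc
which python       # Should point into ~/anaconda2/bin
python --version   # Should report Python 2.7.x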

Change directory to the folder where the caffe source code is placed. The following steps are from https://github.com/tidsp/caffe-jacinto/blob/caffe-0.17/INSTALL.md.

sudo apt install caffe-cuda
sudo apt install libgflags-dev libgoogle-glog-dev liblmdb-dev 
sudo apt install libjpeg-turbo8-dev libjpeg8-dev libturbojpeg0-dev
sudo apt install libopenblas-dev

The following may not be needed.

sudo apt install libturbojpeg
sudo ln -s /usr/lib/x86_64-linux-gnu/libturbojpeg.so.0 /usr/lib/x86_64-linux-gnu/libturbojpeg.so

The rest shouldn't be needed.

sudo apt install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt install --no-install-recommends libboost-all-dev

I'm assuming CUDA is already installed. Check the version:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148

Install cuDNN (https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html). The libcudnn-dev developer .deb package can be downloaded from the NVIDIA website (https://developer.nvidia.com/rdp/cudnn-download). Pick the version that matches the CUDA release reported by nvcc, then install it using dpkg -i path-to-deb.

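For example (illustrative filenames; use the exact .debs you downloaded, and install the runtime package before the developer package):

sudo dpkg -i libcudnn7_<version>+cuda9.2_amd64.deb       # Runtime library first
sudo dpkg -i libcudnn7-dev_<version>+cuda9.2_amd64.deb   # Then the developer package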

Install python packages for Anaconda Python: (takes XX mins)

for req in $(cat python/requirements.txt); do conda install $req; done

Preparation

  • copy Makefile.config.example into Makefile.config
  • In Makefile.config, uncomment the line that says WITH_PYTHON_LAYER
  • Uncomment the line that says USE_CUDNN
  • If more than one GPU is available, uncommenting USE_NCCL enables multi-GPU training.
  • Run pkg-config --modversion opencv and, if you have version 3, uncomment OPENCV_VERSION := 3 (see the sketch below).
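After editing, the relevant lines in Makefile.config should look roughly like this (a sketch based on the stock Makefile.config.example; leave USE_NCCL commented out on a single-GPU machine):

USE_CUDNN := 1
WITH_PYTHON_LAYER := 1
# USE_NCCL := 1          # Uncomment only for multi-GPU training
OPENCV_VERSION := 3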

Now you are ready to compile. Mine took 4 min 15 s.

make -j32   # I use -j32 since I have 32 cores
make pycaffe
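To confirm pycaffe built, try importing it (a quick check; assumes you are still in the caffe-jacinto top directory and using the Anaconda Python installed above):

export PYTHONPATH=$(pwd)/python:$PYTHONPATH
python -c "import caffe; print(caffe.__version__)"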

Prepare the images for training

Download the following to get the scripts that prepare the images (these are the same clones as above; skip them if you already have the repos).

cd ImageNet
git clone https://github.com/tidsp/caffe-jacinto.git
git clone https://github.com/tidsp/caffe-jacinto-models.git  # Took about 40 seconds
cd caffe-jacinto
cd data/ilsvrc2012
./get_ilsvrc.sh   # Takes about 3 seconds
  • N.B. This does not download the ilsvrc12 data set, as it is gargantuan and we've already downloaded it.
  • The script downloads the imagenet example auxiliary files, including:
    • the ilsvrc12 image mean (binaryproto)
    • synset ids and words
    • Python pickle-format data of the ImageNet graph structure and relative infogain
    • the training splits with labels
cd ../../    # This puts you in caffe-jacinto
vi examples/imagenet/create_imagenet.sh

Modify the following variables to point to your ImageNet data dir

TRAIN_DATA_ROOT=/work/yoder/ImageNet/train
VAL_DATA_ROOT=/work/yoder/ImageNet/val

Then set the resize flag to true:

RESIZE=true

Next follow the directions at https://github.com/tidsp/caffe-jacinto/blob/caffe-0.17/INSTALL.md (summarized here) to configure the Makefile.

  • copy Makefile.config.example into Makefile.config
  • In Makefile.config, uncomment the line that says WITH_PYTHON_LAYER
  • Uncomment the line that says USE_CUDNN
  • If more than one GPU is available, uncommenting USE_NCCL enables multi-GPU training.

Save, then run make. The -j option says how many parallel compiles to run; I'm on a 32-core machine.

make -j32

Then you are ready to create the LMDB format of the ImageNet data, as needed by the training:

./examples/imagenet/create_imagenet.sh
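If the script succeeds with its stock EXAMPLE setting, the LMDB directories should appear under examples/imagenet (an assumption based on the unmodified script; the paths change if you edited EXAMPLE):

ls examples/imagenet/    # Should now include ilsvrc12_train_lmdb and ilsvrc12_val_lmdb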

To Do

Need to position the windows so one isn't on top of the other. Try:

wmctrl
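For example (a sketch; wmctrl must be installed first, and the title string must match a window the example actually opens):

x15$ sudo apt install wmctrl
x15$ wmctrl -l                         # List open windows and their titles
x15$ wmctrl -r gedit -e 0,0,0,-1,-1    # Move the window titled "gedit" to the top-left corner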



