TensorRT/YoloV3
Revision as of 22:19, 25 June 2019

This page provides some FAQs about using TensorRT to run inference on the YoloV3 model; they may be helpful if you encounter similar problems.

FAQ

1. How to run YoloV3 with TRT/ONNX

With the sample in the TRT (5.1.5.0) release (path: TRT_PATH/samples/python/yolov3_onnx/), we can run YoloV3 inference with the steps below:

  1. Run TRT_PATH/samples/python/yolov3_onnx/yolov3_to_onnx.py to convert yolov3.cfg and yolov3.weights to an ONNX model, yolov3.onnx.
     yolov3_to_onnx.py downloads yolov3.cfg and yolov3.weights automatically; you may need to install the wget module and the onnx (1.4.1) module before executing it.
     $ pip install wget
     $ pip install onnx==1.4.1
     $ python yolov3_to_onnx.py
  2. Execute “python onnx_to_tensorrt.py” to load yolov3.onnx and run the inference; the logs are shown below.
$ python onnx_to_tensorrt.py
Downloading from https://github.com/pjreddie/darknet/raw/f86901f6177dfc6116360a13cc06ab680e0c86b0/data/dog.jpg, this may take a while...
100% [............................................................................] 163759 / 163759
Loading ONNX file from path yolov3.onnx...
Beginning ONNX file parsing
Completed parsing of ONNX file
Building an engine from file yolov3.onnx; this may take a while...
Completed creating Engine
Running inference on image dog.jpg...
[[135.04631186 219.14289907 184.31729756 324.86079515]
[ 98.95619619 135.56527022 499.10088664 299.16208427]
[477.88941676  81.22835286 210.86738172  86.96319933]] [0.99852329 0.99881124 0.93929232] [16  1  7]
Saved image with bounding boxes of detected objects to dog_bboxes.png.
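The detection output above is one row per detection in [x, y, width, height] pixel coordinates, followed by the per-detection confidence scores and the COCO class indices. As a minimal sketch of how to read those indices (the class-name list below is the standard 80-class darknet coco.names ordering, assumed here for illustration rather than taken from the sample itself):

```python
# Map the class indices printed by onnx_to_tensorrt.py to COCO names.
# NOTE: this ordering is assumed to be the standard darknet coco.names
# list; verify against the labels file shipped with your sample.
COCO_CLASSES = [
    "person", "bicycle", "car", "motorbike", "aeroplane", "bus", "train",
    "truck", "boat", "traffic light", "fire hydrant", "stop sign",
    "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
]  # truncated to the first 20 of 80 entries for brevity

detected = [16, 1, 7]  # class indices from the sample's log output
names = [COCO_CLASSES[i] for i in detected]
print(names)  # ['dog', 'bicycle', 'truck']
```

This matches the dog.jpg test image, which contains a dog, a bicycle, and a truck.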


You could also use the TensorRT C++ API to run inference instead of step 2 above:

  • TRT C++ API + TRT built-in ONNX parser: as in other TRT C++ samples (e.g. sampleFasterRCNN), parse yolov3.onnx with the built-in ONNX parser, then use the TRT C++ API to build the engine and run inference.
Verify the ONNX file before using the API:
$ ./trtexec --onnx=yolov3.onnx
  • TRT C++ API + onnx2trt: build the ONNX converter from https://github.com/onnx/onnx-tensorrt.git, then convert the .onnx file to a TensorRT engine file:
$ onnx2trt yolov3.onnx -o yolov3.engine
Load the engine file and run inference with the TRT C++ API; before that, you can verify the engine file with trtexec as below:
$ ./trtexec --engine=yolov3.engine --input=000_net --output=082_convolutional --output=094_convolutional --output=106_convolutional
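The three output bindings named above (082_convolutional, 094_convolutional, 106_convolutional) correspond to YoloV3's three detection heads at strides 32, 16, and 8. Their shapes can be sketched with plain Python arithmetic, assuming the standard YoloV3 head layout (3 anchors per grid cell, 80 COCO classes, so 255 = 3 × (5 + 80) channels); this layout is a stated assumption, not read from the engine:

```python
# Expected shapes of YoloV3's three output bindings for a square input.
# Assumes the standard YoloV3 head: 3 anchors per cell, 80 COCO classes,
# detection strides 32, 16, and 8.
def yolov3_output_shapes(input_size=608, num_classes=80, anchors_per_cell=3):
    channels = anchors_per_cell * (5 + num_classes)  # 5 = x, y, w, h, objectness
    return [(channels, input_size // s, input_size // s) for s in (32, 16, 8)]

print(yolov3_output_shapes())
# [(255, 19, 19), (255, 38, 38), (255, 76, 76)]
```

The TRT sample uses a 608×608 input; a 416×416 model would instead produce 13×13, 26×26, and 52×52 grids.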

Tips: the “Upsample” layer in YoloV3 is the only layer TRT does not support natively, but the ONNX parser has embedded support for it, so TRT can run YoloV3 directly from ONNX as shown above.
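YoloV3's Upsample layers are 2× nearest-neighbor resizes, which is why the ONNX parser can map them without a custom plugin. A minimal numpy sketch of that operation (illustration only, not the TRT implementation):

```python
import numpy as np

def upsample_nearest_2x(x):
    """2x nearest-neighbor upsample of a (C, H, W) feature map,
    as used by YoloV3's upsample blocks."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Tiny 1-channel, 2x2 feature map as a demonstration.
feat = np.arange(4, dtype=np.float32).reshape(1, 2, 2)
up = upsample_nearest_2x(feat)
print(up.shape)  # (1, 4, 4)
```

Each input cell simply becomes a 2×2 block of the same value in the output.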