Jetson/L4T/TRT Customized Example
Revision as of 22:02, 4 May 2021
This page collects information to deploy customized models with TensorRT.
== Deepstream ==
=== YoloV4 Tiny ===
Deepstream can reach 60 fps with 4 video streams on Xavier:
$ cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo
$ wget https://raw.githubusercontent.com/AastaNV/eLinux_data/main/deepstream/yolov4-tiny/yolov4_tiny.patch
$ git apply yolov4_tiny.patch
$ export CUDA_VER=10.2
$ make -C nvdsinfer_custom_impl_Yolo
$ wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg -q --show-progress
$ wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights -q --show-progress
$ wget https://raw.githubusercontent.com/AastaNV/eLinux_data/main/deepstream/yolov4-tiny/deepstream_app_config_yoloV4_tiny.txt
$ wget https://raw.githubusercontent.com/AastaNV/eLinux_data/main/deepstream/yolov4-tiny/config_infer_primary_yoloV4_tiny.txt
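The downloaded config_infer_primary_yoloV4_tiny.txt is what wires the Darknet files to the custom parser built above. The authoritative contents are in the downloaded file itself; the property names below are a sketch, assumed from the stock objectDetector_Yolo sample configs:

```ini
[property]
# Darknet model definition and weights downloaded above
custom-network-config=yolov4-tiny.cfg
model-file=yolov4-tiny.weights
# custom bounding-box parser library built from nvdsinfer_custom_impl_Yolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```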
$ deepstream-app -c deepstream_app_config_yoloV4_tiny.txt
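The 4-stream figure is controlled by the source group in deepstream_app_config_yoloV4_tiny.txt. The downloaded file is authoritative; as an illustration only, a multi-stream source group in a stock deepstream-app config looks roughly like this (the URI is a placeholder):

```ini
[source0]
enable=1
# type 3 = URI source (e.g. a video file)
type=3
uri=file://../../samples/streams/sample_1080p_h264.mp4
# decode the same URI 4 times to feed 4 streams into the batched pipeline
num-sources=4
gpu-id=0
```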
=== Custom Parser for SSD-MobileNet Trained by Jetson-inference ===
$ cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_SSD/
$ sudo wget https://raw.githubusercontent.com/AastaNV/eLinux_data/main/deepstream/ssd-jetson_inference/ssd-jetson_inference.patch
$ sudo git apply ssd-jetson_inference.patch
$ sudo CUDA_VER=10.2 make -C nvdsinfer_custom_impl_ssd/
Update config_infer_primary_ssd.txt. For example:
diff --git a/config_infer_primary_ssd.txt b/config_infer_primary_ssd.txt
index e5bf468..81c52fd 100644
--- a/config_infer_primary_ssd.txt
+++ b/config_infer_primary_ssd.txt
@@ -62,15 +62,13 @@ gpu-id=0
 net-scale-factor=0.0078431372
 offsets=127.5;127.5;127.5
 model-color-format=0
-model-engine-file=sample_ssd_relu6.uff_b1_gpu0_fp32.engine
-labelfile-path=ssd_coco_labels.txt
-uff-file=sample_ssd_relu6.uff
+model-engine-file=ssd-mobilenet.uff_b1_gpu0_fp16.engine
+uff-file=ssd.uff
 infer-dims=3;300;300
 uff-input-order=0
 uff-input-blob-name=Input
-batch-size=1
-## 0=FP32, 1=INT8, 2=FP16 mode
-network-mode=0
+labelfile-path=labels.txt
+network-mode=2
 num-detected-classes=91
 interval=0
 gie-unique-id=1
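A note on the preprocessing values kept in the config above: net-scale-factor and offsets together map each 8-bit pixel x to (x - 127.5) * net-scale-factor, i.e. into roughly [-1, 1], which matches MobileNet-style input normalization. A quick check that 0.0078431372 is just a truncation of 1/127.5:

```shell
# net-scale-factor = 1/127.5; with offsets=127.5 this maps pixels to [-1, 1]
awk 'BEGIN { printf "%.13f\n", 1 / 127.5 }'
```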
$ deepstream-app -c deepstream_app_config_ssd.txt