TensorRT/PerfIssues
Revision as of 19:29, 15 January 2020

Quick Check

Basically, following the rules below should give good performance.

CUDA Perspective

  • Use only one CUDA context per GPU
  1. Multiple CUDA contexts consume extra memory
  2. Different CUDA contexts sharing the same GPU are time-sliced
  • Do not use the default CUDA stream
  1. Any CUDA command issued to the NULL stream causes an implicit synchronization
  • Run parallel CUDA tasks, e.g. different TensorRT inference instances, on different CUDA streams (see the sketch after this list)
  • Do not call cudaMalloc() in the main loop of the application
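Below is a minimal sketch of the last three rules, using the implicit-batch enqueue() API that the rest of this page uses. The function name, the loop count and the pre-allocated deviceBuffers are placeholders for illustration, not part of the TensorRT samples: inference runs on a dedicated non-default stream, and no cudaMalloc() is called inside the loop.

#include <cuda_runtime.h>
#include <NvInfer.h>

void inferOnStream(nvinfer1::ICudaEngine& engine, void** deviceBuffers, int batchSize, int loops)
{
    nvinfer1::IExecutionContext* context = engine.createExecutionContext();

    cudaStream_t stream;
    cudaStreamCreate(&stream);               // dedicated stream, avoids NULL-stream implicit sync

    // deviceBuffers are assumed to be cudaMalloc()'ed once, before this loop
    for (int i = 0; i < loops; ++i)
    {
        context->enqueue(batchSize, deviceBuffers, stream, nullptr);  // async enqueue on this stream
        cudaStreamSynchronize(stream);                                // wait on this stream only
    }

    cudaStreamDestroy(stream);
    context->destroy();
}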

TensorRT Perspective

  • Batching
  1. Maximizes the parallelism capability of the GPU
  2. Saves runtime memory consumption
  • Lower Precision Mode
  1. Lower precision has higher compute capability
  2. Lower precision consumes less memory
  • Ensure there is enough workspace size for TensorRT inference (see the sketch after this list)
  1. builder->setMaxWorkspaceSize(1_GB); // TensorRT 5.1
  2. config->setMaxWorkspaceSize(1_GiB); // TensorRT 6.0
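As an illustration of how these build options fit together, here is a sketch against the TensorRT 6.0 IBuilderConfig API; buildOptimizedEngine is a placeholder name, and the builder and parsed network are assumed to be created elsewhere.

#include <NvInfer.h>

nvinfer1::ICudaEngine* buildOptimizedEngine(nvinfer1::IBuilder* builder,
                                            nvinfer1::INetworkDefinition* network)
{
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

    builder->setMaxBatchSize(8);                      // build for multi-batch inference
    config->setMaxWorkspaceSize(1ULL << 30);          // 1 GiB of workspace for tactic selection
    config->setFlag(nvinfer1::BuilderFlag::kFP16);    // lower precision mode (FP16)

    nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    config->destroy();
    return engine;
}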

Profiler

There are many useful profiling tools that can help TensorRT users find out where the inference time goes.

trtexec

  • It is shipped in the TensorRT package (binary: TensorRT/bin/trtexec, code: TensorRT/samples/trtexec/)
  • It supports lots of handy options to
  1. build a model using different build options, with or without weight/input/calibration data, and save the built TensorRT engine
  2. run inference using different inference options, with or without input, or simply run inference with a saved TensorRT engine
  3. profile the inference
  • Example: "./trtexec --deploy=ResNet-50-deploy.prototxt --output=prob --int8 --batch=8 --dumpProfile"
Log - File:Trtexec log.txt

Measure the Inference Time

Users can use CPU timing or CUDA events to measure the inference time. Since the GPU needs some milliseconds to warm up, it is better to measure the time over many loops, e.g. 500 inference loops, or to add about 100 warm-up loops before profiling, so that warm-up does not skew the result (see the timing sketch at the end of this subsection).

  • CPU Timing
  • CUDA Events

Note: the CUDA event below can be used in an application to overlap the preparation of the next input with the current inference.
TensorRT also includes an optional CUDA event in the method IExecutionContext::enqueue that will be signaled once the input buffers are free to be reused. This allows the application to immediately start refilling the input buffer region for the next inference in parallel with finishing the current inference. For example:
cudaEvent_t inputReady;
cudaEventCreate(&inputReady);

context->enqueue(batchSize, &buffers[0], stream, &inputReady);
cudaEventSynchronize(inputReady);

// At this point we can refill the input buffers, but output buffers may not be done
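Putting the pieces together, here is a minimal CUDA-event timing sketch with warm-up loops; averageInferenceMs and the loop counts are arbitrary choices for illustration, and the execution context, buffers and stream are assumed to be set up as in the snippet above.

#include <cuda_runtime.h>
#include <iostream>
#include <NvInfer.h>

// Average the inference time over many loops after a warm-up phase.
float averageInferenceMs(nvinfer1::IExecutionContext* context, void** buffers,
                         int batchSize, cudaStream_t stream)
{
    const int kWarmUpLoops = 100;   // discarded, hides the GPU warm-up cost
    const int kTimedLoops  = 500;   // averaged

    for (int i = 0; i < kWarmUpLoops; ++i)
        context->enqueue(batchSize, buffers, stream, nullptr);
    cudaStreamSynchronize(stream);

    cudaEvent_t start, end;
    cudaEventCreate(&start);
    cudaEventCreate(&end);

    cudaEventRecord(start, stream);
    for (int i = 0; i < kTimedLoops; ++i)
        context->enqueue(batchSize, buffers, stream, nullptr);
    cudaEventRecord(end, stream);
    cudaEventSynchronize(end);

    float totalMs = 0.0f;
    cudaEventElapsedTime(&totalMs, start, end);

    cudaEventDestroy(start);
    cudaEventDestroy(end);
    return totalMs / kTimedLoops;
}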

IProfiler

IProfiler is TensorRT's built-in profiling interface; an implementation (SimpleProfiler) is provided in the common sample code (common.h). Below is a sample change that applies it in sampleSSD:

--- sampleSSD.cpp.orig	2019-05-27 12:39:14.193521455 +0800
+++ sampleSSD.cpp	2019-05-27 12:38:59.393358775 +0800
@@ -428,8 +428,11 @@
     float* detectionOut = new float[N * kKEEP_TOPK * 7];
     int* keepCount = new int[N];
 
+    SimpleProfiler profiler (" layer time");
+    context->setProfiler(&profiler);
     // Run inference
     doInference(*context, data, detectionOut, keepCount, N);
+    std::cout << profiler;
 
     bool pass = true;

With IProfiler, after inference finishes, the profiler reports the timing for each layer in the network, as in the log below.
Example output - File:IProfiler.png
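For reference, a minimal IProfiler implementation is sketched below; it only prints per-layer times, whereas SimpleProfiler also accumulates them, and the exact virtual-function signature may differ slightly between TensorRT versions.

#include <NvInfer.h>
#include <iostream>

// Prints the time TensorRT spends in each layer of the network.
class LayerTimePrinter : public nvinfer1::IProfiler
{
public:
    void reportLayerTime(const char* layerName, float ms) override
    {
        std::cout << layerName << ": " << ms << " ms" << std::endl;
    }
};

// Usage: attach before running inference with this execution context.
//   LayerTimePrinter profiler;
//   context->setProfiler(&profiler);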


VERBOSE Log

With "gLogger.setReportableSeverity(nvinfer1::ILogger::Severity::kVERBOSE);" to enable VERBOSE log, after TensorrRT build, it will report TensorRT tatic selection informantion like
log - File:Verbose log.txt In such log, there are logs like:

conv2/3x3 + conv2/relu_3x3 (i8816cudnn) Set Tactic Name: volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1

in which, "conv2/3x3 + conv2/relu_3x3 (i8816cudnn)" is layer name, "volta_int8_i8816cudnn_int8_256x64_ldg16_relu_singleBuffer_small_nt_v1" is the CUDA kernel selected for this layer, if its name includes i8816, it indicates Tensor Core is selected.

nvprof

  • Users can call cudaProfilerStart() / cudaProfilerStop() as shown below to limit the profiling region
diff --git a/trtexec.cpp b/trtexec.cpp
index 95d01fb..d0747b6 100644
--- a/trtexec.cpp
+++ b/trtexec.cpp
@@ -63,6 +63,7 @@
 #include <time.h>
 #include <vector>

+#include <cuda_profiler_api.h>
@@ -427,9 +428,11 @@ void doInference(ICudaEngine& engine)
         {
              cudaEventRecord(start, stream);
+            cudaProfilerStart();
             context->enqueue(gParams.batchSize, &buffers[0], stream, nullptr);
             cudaEventRecord(end, stream);
             cudaEventSynchronize(end);
+            cudaProfilerStop();

             auto tEnd = std::chrono::high_resolution_clock::now();
  • sample use cases
  1. Profile Tensor Core FP16/INT8 Utilization
The two metrics tensor_precision_fu_utilization and tensor_int_fu_utilization can be used to profile FP16 and INT8 Tensor Core utilization respectively (more information can be found in the nvprof guide)
$ nvprof --profile-from-start off -m tensor_precision_fu_utilization,tensor_int_fu_utilization ./trtexec --deploy=ResNet-50-deploy.prototxt --output=prob --int8 --batch=2 --avgRuns=1 --iterations=1
log - File:Tensor core int8 fp16 utilization.txt
  2. Profile Tensor Core FP16/INT8 Utilization and GPU trace
$ # /usr/local/cuda/bin/nvprof --profile-from-start off --trace api -m tensor_precision_fu_utilization,tensor_int_fu_utilization ./trtexec --deploy=ResNet-50-deploy.prototxt --output=prob --int8 --batch=2 --avgRuns=1 --iterations=1
log - File:Tensor core int8 fp16 utilization AND gpu trace.txt
  3. Profile GPU call trace in sequence
$ /usr/local/cuda/bin/nvprof --profile-from-start off --print-gpu-trace ./trtexec --deploy=ResNet-50-deploy.prototxt --output=prob --int8 --batch=2 --avgRuns=1 --iterations=1
log - File:Gpu trace.txt

Nsight Systems

The recommended CUDA profilers are NVIDIA Nsight Compute and NVIDIA Nsight Systems. See 1.5. CUDA Profiling in Best Practices For TensorRT Performance for more information.

Further Measures for Perf Improvement

  • Upgrade to the latest TensorRT version
  • Enable DLA to offload the GPU (Xavier); see the sketch below
  • Layer Fusion
  • Design the network to be more Tensor Core friendly (channels % 32 == 0)
  • Network Pruning
  • Choose a faster network that still meets the accuracy requirement
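For the DLA item above, a build-time sketch against the TensorRT 6.0 IBuilderConfig API is shown below; enableDLA is a placeholder name, and the DLA core index and GPU-fallback choice depend on the platform and the network.

#include <NvInfer.h>

// Configure the builder so that supported layers run on DLA core 0,
// with unsupported layers falling back to the GPU.
void enableDLA(nvinfer1::IBuilderConfig* config)
{
    config->setDefaultDeviceType(nvinfer1::DeviceType::kDLA);
    config->setDLACore(0);                                     // pick one of the Xavier DLA cores
    config->setFlag(nvinfer1::BuilderFlag::kGPU_FALLBACK);     // allow GPU fallback for unsupported layers
}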