NVIDIA TensorRT™ is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. TensorRT-based applications perform up to 40x faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded devices, or automotive product platforms.
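As a quick orientation before the FAQs, the sketch below shows that basic workflow in the TensorRT Python API: parse a trained ONNX model, then build an optimized engine. It is a minimal sketch only, assuming a TRT 7-era installation and a model file named model.onnx (a placeholder):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # The ONNX parser requires an explicit-batch network.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:     # placeholder model file
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parsing failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30     # 1 GiB scratch space (TRT 7-era API)
    engine = builder.build_engine(network, config)

Later sketches on this page continue from the builder, network, config and engine created here.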
TRT Common FAQ
This page answers some common questions about using TRT.
Refer to the page TensorRT/CommonFAQ
TRT Accuracy FAQ
If your FP16 or Int8 results are not as expected, the page below may help you fix the accuracy issues.
Refer to the page TensorRT/AccuracyIssues
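One common first experiment when chasing FP16 mismatches is to enable FP16 for the whole network but pin a suspect layer back to FP32, and check whether the error disappears. A hedged sketch, continuing from the network and config above (the layer index 0 is only a placeholder):

    config.set_flag(trt.BuilderFlag.FP16)
    # Ask the builder to honor the per-layer precision requests below
    # (STRICT_TYPES on TRT 7; OBEY_PRECISION_CONSTRAINTS on newer releases).
    config.set_flag(trt.BuilderFlag.STRICT_TYPES)

    suspect = network.get_layer(0)          # placeholder: the layer under suspicion
    suspect.precision = trt.float32
    suspect.set_output_type(0, trt.float32)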
TRT Performance FAQ
If inference performance with TRT is not as expected, the page below may help you optimize it.
Refer to the page TensorRT/PerfIssues
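A per-layer timing breakdown is usually the first thing to look at. One way to get it, continuing from the engine built above (static input shapes and float32 I/O assumed; pycuda is used only for the buffer allocations):

    import pycuda.autoinit
    import pycuda.driver as cuda

    context = engine.create_execution_context()
    # One device buffer per binding; 4 bytes per element for float32.
    bindings = [int(cuda.mem_alloc(trt.volume(engine.get_binding_shape(i)) * 4))
                for i in range(engine.num_bindings)]
    # With a profiler attached, the synchronous execute_v2 call reports
    # per-layer times; the default trt.Profiler prints them to stdout.
    context.profiler = trt.Profiler()
    context.execute_v2(bindings)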
TRT Int8 Calibration FAQ
The page below presents some FAQs about TRT Int8 calibration.
Refer to the page TensorRT/Int8CFAQ
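For orientation: Int8 calibration is driven by a calibrator object that feeds sample batches to the builder and caches the resulting scales. A minimal skeleton (the batch source and cache file name are placeholders, not a fixed TRT convention):

    import numpy as np
    import pycuda.autoinit
    import pycuda.driver as cuda
    import tensorrt as trt

    class MyCalibrator(trt.IInt8EntropyCalibrator2):
        def __init__(self, batches, cache_file="calib.cache"):
            super().__init__()
            self.batches = iter(batches)    # iterable of np.float32 arrays
            self.cache_file = cache_file
            self.device_input = None

        def get_batch_size(self):
            return 1                        # placeholder batch size

        def get_batch(self, names):
            try:
                batch = next(self.batches)
            except StopIteration:
                return None                 # no more data: calibration ends
            if self.device_input is None:
                self.device_input = cuda.mem_alloc(batch.nbytes)
            cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
            return [int(self.device_input)]

        def read_calibration_cache(self):
            try:
                with open(self.cache_file, "rb") as f:
                    return f.read()
            except FileNotFoundError:
                return None

        def write_calibration_cache(self, cache):
            with open(self.cache_file, "wb") as f:
                f.write(cache)

    # Hook it into the builder config from the first sketch:
    config.set_flag(trt.BuilderFlag.INT8)
    config.int8_calibrator = MyCalibrator(my_batches)   # my_batches: placeholder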
TRT Plugin FAQ
The page below presents some FAQs about TRT plugins.
Refer to the page TensorRT/PluginFAQ
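A frequent first question is whether a given plugin is registered at all. Assuming the stock libnvinfer plugin library is installed, this sketch registers the built-in plugins and lists every creator the registry knows about:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    # Register the plugins shipped with TensorRT.
    trt.init_libnvinfer_plugins(logger, "")

    registry = trt.get_plugin_registry()
    for creator in registry.plugin_creator_list:
        print(creator.name, creator.plugin_version)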
How to Fix Common Errors
If you encounter errors while using TRT, the page below may have the answer.
Refer to the page TensorRT/CommonErrorFix
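Many error reports become much easier to read once verbose logging is on, since the log then names the layer or tactic involved. The only change from the first sketch is the logger severity:

    import tensorrt as trt

    # VERBOSE makes the builder and runtime print per-layer details.
    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)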
How to Debug or Analyze
The page below describes several ways to debug and analyze your inference.
Refer to the page TensorRT/How2Debug
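One common technique is dumping intermediate tensors: mark a layer's output as a network output before building, then compare it against the same tensor from the training framework. A sketch, continuing from the parsed network above (the layer index is a placeholder):

    # Expose an intermediate tensor so it shows up as an engine output.
    suspect = network.get_layer(10)         # placeholder: the layer to inspect
    network.mark_output(suspect.get_output(0))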
TRT & YoloV3 FAQ
Refer to the page TensorRT/YoloV3
TRT ONNXParser FAQ
If you have questions about ONNX dynamic shapes or ONNX parsing issues, this page might be helpful.
Refer to the page TensorRT/ONNX
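As background for the dynamic-shape questions: an ONNX model with dynamic dimensions needs at least one optimization profile before the engine will build, and the actual shape must be set at runtime. A sketch, continuing from the builder and config above (the input name and shapes are placeholders):

    profile = builder.create_optimization_profile()
    # min / opt / max shapes for the dynamic input.
    profile.set_shape("input",
                      (1, 3, 224, 224),     # min
                      (8, 3, 224, 224),     # opt
                      (32, 3, 224, 224))    # max
    config.add_optimization_profile(profile)

    # At runtime, fix the actual input shape before inference:
    # context.set_binding_shape(0, (4, 3, 224, 224))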