How to fix the error “Could not find scales for tensor xxxx” for INT8 mode?
Generally, after INT8 calibration completes, the Int8Calibrator saves the scaling factors to a local file (through the writeCalibrationCache API), so that subsequent runs can skip calibration and load the cached calibration table directly (through the readCalibrationCache API).
If you modify the network, run it on a different GPU platform, or switch to a different TensorRT version, you will likely hit the error "Could not find scales for tensor xxxx", which indicates that the builder could not find the corresponding scaling factor in the locally cached calibration table. This is expected: the network graph after fusion differs across GPU platforms, TensorRT versions, and network modifications. The fix is simple: delete the local calibration table and run calibration again.
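The cache hooks mentioned above can be sketched as follows. This is a minimal illustration, not TensorRT's actual implementation: the file name `calibration.cache` and the standalone functions are assumptions, and in a real calibrator these would be methods on a subclass of `trt.IInt8EntropyCalibrator2`. Returning None from the read hook is what forces TensorRT to recalibrate, which is why deleting the cache file resolves the error.

```python
import os

# Assumed cache file name; TensorRT does not mandate a particular name.
CACHE_FILE = "calibration.cache"

def read_calibration_cache():
    # Return the cached scaling factors if present.
    # Returning None tells the builder to run calibration again.
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, "rb") as f:
            return f.read()
    return None

def write_calibration_cache(cache):
    # Persist the scaling factors produced by calibration so
    # subsequent builds on the SAME platform/version can reuse them.
    with open(CACHE_FILE, "wb") as f:
        f.write(cache)
```

After changing the network, GPU, or TensorRT version, simply deleting `calibration.cache` makes the read hook return None and triggers a fresh calibration.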
How to fix "LogicError: explicit_context_dependent failed" when running TensorRT Python in multiple threads?
If you are using common.py from the TensorRT samples to run inference in multiple threads and see the error below, this FAQ will help you fix it.
"pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?"
As explained in the PyCUDA FAQ entry "How does PyCUDA handle threading?", this error is caused by a missing active CUDA context in the worker thread.
Please create a context as below before launching the GPU task that reported the error:

```python
dev = cuda.Device(0)  # 0 is your GPU number
ctx = dev.make_context()
```
and clean up after the GPU task finishes:

```python
ctx.pop()
del ctx
```
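Putting both steps together, a worker thread can be structured as in the sketch below. This is an illustrative pattern, not code from the TensorRT samples; the placeholder comment marks where your inference code would go. The try/finally ensures the context is popped even if inference raises, which avoids leaking contexts across threads.

```python
import threading
import pycuda.driver as cuda

# Initialize the CUDA driver once in the main thread.
cuda.init()

def worker():
    # Each worker thread must create, and later release, its own context.
    dev = cuda.Device(0)  # 0 is your GPU number
    ctx = dev.make_context()
    try:
        # Run your TensorRT inference here (placeholder).
        pass
    finally:
        # Always pop the context, even if inference raised an exception.
        ctx.pop()
        del ctx

t = threading.Thread(target=worker)
t.start()
t.join()
```

Note that pycuda.autoinit only creates a context for the importing thread, which is why each additional thread needs its own make_context/pop pair.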