This project is based on tiny-tensorrt.
It runs the whole pipeline on the GPU, which greatly improves efficiency, and lets you customize pre-processing and post-processing on the GPU. (2021-6-29)
- Preprocessing on the GPU (see the sketch after this list)
- Postprocessing on the GPU
- Easily run the whole pipeline on the GPU
- Custom ONNX model output nodes
- Automatic engine serialization and deserialization
- INT8 support
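For instance, preprocessing with the opencv-contrib CUDA modules can look like the following minimal sketch; the input path, network size, and 1/255 normalization are placeholders, not values taken from this project:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/cudaimgproc.hpp>
#include <opencv2/cudawarping.hpp>

// Minimal GPU preprocessing sketch: the image is uploaded once and every
// subsequent step runs on the device. The network size and the 1/255
// normalization are placeholders; use your model's real values.
cv::cuda::GpuMat preprocessOnGpu(const cv::Mat& frame, int netW, int netH) {
    cv::cuda::GpuMat gpuFrame, resized, rgb, normalized;
    gpuFrame.upload(frame);                                  // host -> device
    cv::cuda::resize(gpuFrame, resized, cv::Size(netW, netH));
    cv::cuda::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);
    rgb.convertTo(normalized, CV_32FC3, 1.0 / 255.0);        // to float [0,1]
    return normalized;                                       // stays on the GPU
}

int main() {
    cv::Mat frame = cv::imread("input.jpg");  // placeholder input image
    if (frame.empty()) return 1;
    cv::cuda::GpuMat blob = preprocessOnGpu(frame, 640, 640);
    // `blob` is HWC float data in device memory, ready to be packed into
    // the CHW layout a TensorRT input binding expects.
    return 0;
}
```

The preprocessed image never leaves device memory, so it can be handed to the inference step without another host round trip.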
- CUDA 10.0+
- TensorRT 7
- OpenCV 4.0+ (built with the opencv-contrib modules)
Make sure you have installed the dependencies listed above.
```bash
# clone the project and its submodules
git clone --recurse-submodules {this repo}
cd {this repo}
mkdir build && cd build && cmake .. && make
```
Then you can integrate it into your own project with libtinytrt.so and Trt.h.
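A hedged sketch of what a consuming program could look like is below. The `Trt` method names and signatures are assumptions modeled on typical tiny-tensorrt usage, so check Trt.h in your build for the exact interface:

```cpp
// main.cpp -- illustrative only: the Trt method names and signatures
// below are assumptions; consult Trt.h for the exact API you built.
#include "Trt.h"
#include <vector>

int main() {
    Trt net;
    // Builds a TensorRT engine from the ONNX model on the first run and
    // deserializes the cached engine file on later runs.
    net.CreateEngine("model.onnx", "model.engine");

    std::vector<float> input(1 * 3 * 224 * 224, 0.5f);  // dummy NCHW batch
    std::vector<float> output;

    net.CopyFromHostToDevice(input, 0);   // binding 0: input
    net.Forward();                        // run inference
    net.CopyFromDeviceToHost(output, 1);  // binding 1: output
    return 0;
}
```

Compile it against Trt.h and link with libtinytrt.so, plus whatever CUDA and TensorRT libraries your build requires.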
Example C++ code shows how to use the GPU version of OpenCV in TensorRT inference.
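As a flavor of the glue code involved, here is a sketch of one common step: converting an interleaved (HWC) OpenCV GpuMat into the planar (CHW) device buffer a TensorRT input binding typically expects. The helper name is hypothetical and not part of this project's API:

```cpp
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>
#include <vector>

// Hypothetical helper: split a float HWC GpuMat into three planes written
// directly into a contiguous CHW device buffer (e.g. a TensorRT binding).
void hwcToChw(const cv::cuda::GpuMat& img, float* chwDevPtr) {
    CV_Assert(img.type() == CV_32FC3);
    const int h = img.rows, w = img.cols;
    std::vector<cv::cuda::GpuMat> planes;
    for (int c = 0; c < 3; ++c)
        // Wrap each destination plane around a slice of the binding buffer
        // so cv::cuda::split writes straight into it.
        planes.emplace_back(h, w, CV_32F, chwDevPtr + c * h * w);
    cv::cuda::split(img, planes);
}
```

Because the destination planes wrap the binding buffer directly, the split writes in place and avoids an extra device-to-device copy.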
For the third-party modules and TensorRT, you need to follow their licenses.
For the parts I wrote, you can do anything you want.