The YOLOv10 model supports TensorRT-8.
CUDA: 11.8
cuDNN: 8.9.1.23
TensorRT: TensorRT-8.2.5.1 / GPU: GTX1650
TensorRT: TensorRT-8.4.3.1 / GPU: RTX4070
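To verify that a local setup roughly matches one of the tested configurations above, here is a minimal Python sketch (it assumes the torch and tensorrt Python packages are installed; nothing in it is specific to this repo):

```python
# Minimal sketch: print the local CUDA / cuDNN / TensorRT versions.
import tensorrt as trt
import torch

print("CUDA:", torch.version.cuda)               # e.g. 11.8
print("cuDNN:", torch.backends.cudnn.version())  # e.g. 8901 for cuDNN 8.9.1
print("TensorRT:", trt.__version__)              # e.g. 8.4.3.1
```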
# FAQ
Error Code 1: Internal Error (Unsupported SM: 0x809)
This error means your GPU architecture is newer than what the installed TensorRT release supports (SM 0x809 corresponds to compute capability 8.9, i.e. the Ada Lovelace RTX 40 series); upgrade to a TensorRT version that supports your GPU.
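To see which SM value your GPU reports, so it can be matched against the error message, a minimal sketch assuming torch is installed:

```python
# Minimal sketch: print the GPU compute capability; TensorRT reports it
# hex-encoded, e.g. compute capability 8.9 appears as "SM: 0x809".
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"Compute capability: {major}.{minor} (SM 0x{major:x}{minor:02x})")
```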
- YOLOv10-det supports FP32/FP16/INT8 precision and both the Python and C++ APIs.
- Choose the YOLOv10 sub-model n/s/m/b/l/x via command-line arguments.
- For other configs, please check src/config.h.
- Generate a .wts file from the PyTorch .pt weights, or download a .wts from the model zoo:
git clone https://github.com/THU-MIG/yolov10.git
cd yolov10/
wget https://github.com/THU-MIG/yolov10/releases/download/v1.1/yolov10n.pt
git clone https://github.com/wang-xinyu/tensorrtx.git
cp [PATH-TO-TENSORRTX]/yolov10/gen_wts.py .
python gen_wts.py -w yolov10n.pt -o yolov10n.wts
# A file 'yolov10n.wts' will be generated.
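Optionally, sanity-check the output; a minimal sketch, assuming the usual tensorrtx .wts layout (first line is the number of weight blobs, then one `name count hex...` line per tensor):

```python
# Minimal sketch: peek at the generated .wts file.
with open("yolov10n.wts") as f:
    num_blobs = int(f.readline())           # header: number of weight blobs
    name, count = f.readline().split()[:2]  # first blob: "name count hex..."
print(f"{num_blobs} weight blobs; first tensor: {name} ({count} values)")
```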
- Build tensorrtx/yolov10 and run:
cd [PATH-TO-TENSORRTX]/yolov10
# add test images
mkdir images
cp [PATH-TO-TENSORRTX]/yolov3-spp/samples/*.jpg ./images
# Update kNumClass in src/config.h if your model is trained on a custom dataset
mkdir build
cd build
cp [PATH-TO-yolov10]/yolov10n.wts .
cmake ..
make
# Build and serialize TensorRT engine
./yolov10_det -s yolov10n.wts yolov10n.engine [n/s/m/b/l/x]
# Run inference
./yolov10_det -d yolov10n.engine ../images
# The results are displayed in the console
- Optional: load and run the TensorRT model in Python.
# Install python-tensorrt, pycuda, etc.
# Ensure yolov10n.engine and libmyplugins.so have been built
python yolov10_det_trt.py ./build/yolov10n.engine ./build/libmyplugins.so
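Deserializing a tensorrtx engine in Python requires loading the plugin library first; the following is a minimal sketch of that pattern (paths assumed from the build steps above), not the full yolov10_det_trt.py:

```python
# Minimal sketch: load the custom plugin library, then deserialize the engine.
import ctypes
import tensorrt as trt

ctypes.CDLL("./build/libmyplugins.so")  # must be loaded before deserialization

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

with open("./build/yolov10n.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())

print("Loaded engine with", engine.num_bindings, "bindings")
```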
- Prepare calibration images: you can randomly select about 1000 images from your training set (see the sketch after this list). For COCO, you can also download my calibration images coco_calib from GoogleDrive or BaiduPan (pwd: a9wh) and unzip them in yolov10/build.
- Set the macro USE_INT8 in src/config.h and run make again, then serialize the model and test.
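As referenced in the first bullet above, a minimal sketch for sampling calibration images (TRAIN_DIR is a hypothetical path; adjust it to your dataset layout):

```python
# Minimal sketch: copy ~1000 random training images into the calibration folder.
import random
import shutil
from pathlib import Path

TRAIN_DIR = Path("datasets/coco/train2017")  # hypothetical dataset location
CALIB_DIR = Path("yolov10/build/coco_calib")

CALIB_DIR.mkdir(parents=True, exist_ok=True)
images = list(TRAIN_DIR.glob("*.jpg"))
for img in random.sample(images, min(1000, len(images))):
    shutil.copy(img, CALIB_DIR)
```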
For more information, see the README on the repository home page.