Deep Learning API and Server in C++14 with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE
Updated Jun 28, 2024 - C++
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented by TensorRT, C++; Achieving real-time performance on Orin
Deploy a Stable Diffusion model with ONNX/TensorRT and Triton server
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
YOLOv5 TensorRT implementations
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster), licensed under CC BY-NC-SA 4.0
Using TensorRT for Inference Model Deployment.
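Several of the repositories above follow the same deployment pattern: deserialize a prebuilt engine, allocate device buffers, and run inference. A minimal sketch of that flow with the TensorRT 8 C++ API is shown below; the engine file name `model.engine` and the buffer sizes are assumptions for illustration, and real code would query binding dimensions from the engine rather than hard-coding them.

```cpp
// Minimal TensorRT inference sketch (assumes TensorRT 8.x and CUDA are installed,
// and that "model.engine" is a serialized engine with one input and one output).
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

// TensorRT requires a logger implementation; this one prints warnings and errors.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
    }
};

int main() {
    Logger logger;

    // Load the serialized engine from disk (path is an assumption).
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // Deserialize the engine and create an execution context.
    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(
        nvinfer1::createInferRuntime(logger));
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(blob.data(), blob.size()));
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(
        engine->createExecutionContext());

    // Allocate device buffers; sizes here are placeholders — query the
    // engine's binding dimensions in real code.
    const size_t inputBytes  = 1 * 3 * 224 * 224 * sizeof(float);
    const size_t outputBytes = 1 * 1000 * sizeof(float);
    void* buffers[2];
    cudaMalloc(&buffers[0], inputBytes);
    cudaMalloc(&buffers[1], outputBytes);

    // Copy input, run inference asynchronously, copy output back.
    std::vector<float> input(3 * 224 * 224, 0.f), output(1000, 0.f);
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMemcpyAsync(buffers[0], input.data(), inputBytes,
                    cudaMemcpyHostToDevice, stream);
    context->enqueueV2(buffers, stream, nullptr);
    cudaMemcpyAsync(output.data(), buffers[1], outputBytes,
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    cudaStreamDestroy(stream);
    return 0;
}
```

The same skeleton underlies most of the C++ projects listed here; they differ mainly in pre/post-processing (e.g. NMS for detectors, upsampling for depth models).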
DBNet for text and barcode detection; knowledge distillation and Python TensorRT inference are also provided.
A TensorRT version of UNet, inspired by tensorrtx
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
C++ inference code for the SMOKE 3D object detection model
ViTPose without MMCV dependencies
C++ TensorRT Implementation of NanoSAM
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
Based on TensorRT 8.2.4; compares inference speed across different TensorRT APIs.
Convert YOLO models to ONNX and TensorRT, adding NMSBatched.
DepthStream Accelerator: A TensorRT-optimized monocular depth estimation tool with ROS2 integration for C++. It offers high-speed, accurate depth perception, perfect for real-time applications in robotics, autonomous vehicles, and interactive 3D environments.