This tutorial includes a Python demo for OpenVINO, as well as some converted models.
Model | Parameters | GFLOPs | Test Size | mAP | Weights |
---|---|---|---|---|---|
YOLOX-Nano | 0.91M | 1.08 | 416x416 | 25.3 | onedrive/github |
YOLOX-Tiny | 5.06M | 6.45 | 416x416 | 31.7 | onedrive/github |
YOLOX-S | 9.0M | 26.8 | 640x640 | 39.6 | onedrive/github |
YOLOX-M | 25.3M | 73.8 | 640x640 | 46.4 | onedrive/github |
YOLOX-L | 54.2M | 155.6 | 640x640 | 50.0 | onedrive/github |
YOLOX-Darknet53 | 63.72M | 185.3 | 640x640 | 47.3 | onedrive/github |
YOLOX-X | 99.1M | 281.9 | 640x640 | 51.2 | onedrive/github |
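The "Test Size" column in the table above also fixes the input shape you will pass to the Model Optimizer later. As a quick reference, the table can be expressed as a lookup; this is only a sketch, and the dictionary and helper names are illustrative, not part of the demo:

```python
# Specs copied from the table above: parameters (M), GFLOPs, test size, COCO mAP.
# YOLOX_SPECS and input_shape are illustrative names, not part of the YOLOX demo.
YOLOX_SPECS = {
    "yolox-nano":      {"params_m": 0.91,  "gflops": 1.08,  "size": 416, "map": 25.3},
    "yolox-tiny":      {"params_m": 5.06,  "gflops": 6.45,  "size": 416, "map": 31.7},
    "yolox-s":         {"params_m": 9.0,   "gflops": 26.8,  "size": 640, "map": 39.6},
    "yolox-m":         {"params_m": 25.3,  "gflops": 73.8,  "size": 640, "map": 46.4},
    "yolox-l":         {"params_m": 54.2,  "gflops": 155.6, "size": 640, "map": 50.0},
    "yolox-darknet53": {"params_m": 63.72, "gflops": 185.3, "size": 640, "map": 47.3},
    "yolox-x":         {"params_m": 99.1,  "gflops": 281.9, "size": 640, "map": 51.2},
}

def input_shape(name):
    """Return the NCHW --input_shape for the Model Optimizer, batch size 1."""
    size = YOLOX_SPECS[name]["size"]
    return [1, 3, size, size]
```

For example, `input_shape("yolox-s")` gives `[1, 3, 640, 640]`, which matches the `--input_shape` used in the conversion example further down.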
Please visit the OpenVINO homepage for more details.
Option 1. Set up the environment temporarily. You need to run this command every time you start a new shell window.
source /opt/intel/openvino_2021/bin/setupvars.sh
Option 2. Set up the environment permanently.
Step 1. For Linux:
vim ~/.bashrc
Step 2. Add the following line to the file:
source /opt/intel/openvino_2021/bin/setupvars.sh
Step 3. Save and exit the file, then run:
source ~/.bashrc
Export ONNX model
Please refer to the ONNX tutorial. Note that you should set --opset to 10; otherwise the next step will fail.
Convert ONNX to OpenVINO
cd <INSTALL_DIR>/openvino_2021/deployment_tools/model_optimizer
Install the requirements for the conversion tool:
sudo ./install_prerequisites/install_prerequisites_onnx.sh
Then convert the model:
python3 mo.py --input_model <ONNX_MODEL> --input_shape <INPUT_SHAPE> [--data_type FP16]
For example:
python3 mo.py --input_model yolox.onnx --input_shape [1,3,640,640] --data_type FP16 --output_dir converted_output
Run the demo:
python openvino_inference.py -m <XML_MODEL_PATH> -i <IMAGE_PATH>
or
python openvino_inference.py -m <XML_MODEL_PATH> -i <IMAGE_PATH> -o <OUTPUT_DIR> -s <SCORE_THR> -d <DEVICE>
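Before an image reaches the converted model, YOLOX-style preprocessing resizes it to the network's test size while preserving the aspect ratio and pads the remainder with the constant value 114. A minimal NumPy sketch of that step follows; nearest-neighbor indexing stands in for the OpenCV interpolation the demo actually uses:

```python
import numpy as np

def preprocess(img, input_size=(640, 640)):
    """Letterbox-style YOLOX preprocessing: ratio-preserving resize, pad with 114.

    img: HxWx3 uint8 array. Returns a float32 CHW tensor and the resize ratio
    (needed afterwards to map predicted boxes back to the original image).
    """
    h, w = img.shape[:2]
    r = min(input_size[0] / h, input_size[1] / w)
    new_h, new_w = int(h * r), int(w * r)

    # Nearest-neighbor resize via index arrays (a stand-in for cv2.resize).
    ys = np.clip((np.arange(new_h) / r).astype(int), 0, h - 1)
    xs = np.clip((np.arange(new_w) / r).astype(int), 0, w - 1)
    resized = img[ys][:, xs]

    # Pad the bottom/right border with 114, as YOLOX preprocessing does.
    padded = np.full((input_size[0], input_size[1], 3), 114, dtype=np.uint8)
    padded[:new_h, :new_w] = resized

    # HWC -> CHW, float32, ready to feed to the network as a batch of one.
    return padded.transpose(2, 0, 1).astype(np.float32), r
```

Boxes predicted at network scale are divided by the returned ratio `r` to recover coordinates in the original image.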