romeolandry/DS_python_OD_IS


DeepStream Python Custom Model

This project runs DeepStream SDK 5.1 to run inference on a native TensorFlow model through the Triton Inference Server. A Raspberry Pi camera serves as input, and the output is rendered as an RTSP stream.

Requirements

A backend server running Apache Kafka to receive the messages. You can use this tutorial.

Before using this repo, make sure you have already installed the DeepStream SDK. You can follow this tutorial.

Once that is done, set the path to the Kafka library and to the custom parser. Each model configuration file in the configuration directory contains a custom-parser path parameter; make sure it points to /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser.so, or to the equivalent location in your DeepStream installation directory. You also have to set the path of the Kafka protocol library (libnvds_kafka_proto); to do that, change the BROKER_CONF['proto_lib'] parameter in the configuration file.
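As a rough illustration, the relevant part of the run configuration file might look like the sketch below. Only the `BROKER_CONF['proto_lib']` key is taken from the text above; the other names (`DS_LIB`, `conn_str`, `CUSTOM_PARSER`) and values are assumptions, not the project's actual configuration:

```python
# Hypothetical excerpt of the run configuration file (names other than
# BROKER_CONF['proto_lib'] are assumptions):
DS_LIB = "/opt/nvidia/deepstream/deepstream-5.1/lib"

BROKER_CONF = {
    # Kafka protocol adaptor shipped with DeepStream
    "proto_lib": DS_LIB + "/libnvds_kafka_proto.so",
    # host;port;topic connection string (format assumed)
    "conn_str": "localhost;9092;deepstream",
}

# Custom bounding-box parser referenced by the model config files
CUSTOM_PARSER = DS_LIB + "/libnvds_infercustomparser.so"
```

Adjust `DS_LIB` if your DeepStream installation lives elsewhere.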

Running a Pre-configured Model

python run.py --model [pre-configurate model name] --input [/dev/video0]

Parameters

  • --model The name of the model you want to run inference with. Both inference backends already have pre-configured model settings.

    • Triton Inference Server: ['ssd_inceptionv2']
  • --input The input path of the Raspberry Pi camera. A default has already been set in the run configuration file.

  • Bitrate and codec can also be configured in the same file.
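The parameters above could be handled with standard argument parsing; a minimal sketch of how `run.py` might read them, with defaults assumed from the README (the actual script may differ):

```python
# Hedged sketch of run.py's argument parsing; argument names come from the
# README, the defaults and help strings are assumptions.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="DeepStream + Triton object detection")
    parser.add_argument("--model", default="ssd_inceptionv2",
                        help="name of a pre-configured model")
    parser.add_argument("--input", default="/dev/video0",
                        help="device path of the Raspberry Pi camera")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    print(args.model, args.input)
```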

Done

  • Parse a frozen TensorFlow model and run inference using the Triton Inference Server
  • Build a pipeline that parses the on-screen display (OSD) objects and extracts some frames
  • Send information to the server using Apache Kafka
  • Stream the OSD as an RTSP link
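For the Kafka step, a detection event is typically serialized to JSON before being handed to the broker. The sketch below shows one plausible payload shape; the field names and structure are assumptions, not the schema actually produced by the pipeline (DeepStream's nvmsgconv defines its own message schema):

```python
# Hedged sketch: packing one detected object into a JSON message for Kafka.
# The schema here is an assumption for illustration only.
import json

def make_detection_message(label, confidence, bbox, frame_number):
    """Serialize one detection (bbox = (left, top, width, height))."""
    left, top, width, height = bbox
    return json.dumps({
        "object": label,
        "confidence": confidence,
        "bbox": {"left": left, "top": top,
                 "width": width, "height": height},
        "frame": frame_number,
    })

msg = make_detection_message("person", 0.92, (10, 20, 100, 200), 42)
```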

To Do

  • [ ] Save frames with OpenCV (currently not working)
  • [ ] Save video locally
  • [ ] Add an instance segmentation model
