Hi!
My name is Erwin. I'm on an AI team, and we're currently experimenting with InferencePipeline. We're trying to run multiple AI CCTV camera streams through it.
My basic system:
- FastAPI for the REST API (this is the "gate" for adding camera streams and viewing the annotated stream)
- InferencePipeline
We wanted to use InferencePipeline because of its multithreading capability and freedom from RAM leaks (we previously tried the plain Python threading module, and it didn't go well: RAM leaked or was left over even after we removed the camera).
We initialize a new InferencePipeline instance each time we add a camera or an AI module. For example, if camera1 has 3 AI modules:
- object detection
- face recognition
- license plate recognition

then 3 InferencePipeline instances are initialized.
Then, if we want to remove the camera, we delete those InferencePipeline instances.
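Roughly, the add/remove lifecycle looks like the sketch below. This is simplified: `PipelineRegistry` and the injected factory are illustrative names, not part of the inference library; the only assumption is that each pipeline object exposes `start()`, `terminate()`, and `join()`, as InferencePipeline does.

```python
# Simplified sketch of our camera/pipeline bookkeeping.
# PipelineRegistry and pipeline_factory are hypothetical names;
# the real pipeline objects are InferencePipeline instances.

class PipelineRegistry:
    def __init__(self, pipeline_factory):
        # pipeline_factory(camera_url, model_id) -> object with
        # .start() / .terminate() / .join()
        self._factory = pipeline_factory
        self._pipelines = {}  # camera_id -> list of pipeline objects

    def add_camera(self, camera_id, camera_url, model_ids):
        # One pipeline per (camera, AI module) pair, e.g. 3 modules -> 3 pipelines.
        pipelines = [self._factory(camera_url, m) for m in model_ids]
        for p in pipelines:
            p.start()
        self._pipelines[camera_id] = pipelines

    def remove_camera(self, camera_id):
        # Terminate every pipeline for the camera, then drop the references.
        for p in self._pipelines.pop(camera_id, []):
            p.terminate()
            p.join()  # wait for worker threads before releasing the object
```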
The problem: after deletion, GPU VRAM usage decreases, but there is a hiccup.
For example, if I start camera1 (initial VRAM = 100 MB), VRAM increases, say to 200 MB. Then I stop camera1 and VRAM decreases, but a little is left over (VRAM = 120 MB). There are 20 MB stuck in our VRAM.
I've also done a bit of analysis; I don't know exactly how to describe it, so maybe you can help. I tried to benchmark GPU VRAM with this scenario: turn on one camera, then turn it off; turn on two cameras, then turn off both; and so on. My observations:
- Graph for one camera: there's an upward trend, which is bad.
- Graph for two cameras: the trend is stable.
- Graph for three cameras: stable at around 4400-4500 MB, with a slightly downward trend.
- Graph for four cameras: stable at around 4600-4700 MB.
"Stable" here means that if a new camera is added while usage is in that range, the VRAM increase isn't very high compared to the first and second times a camera was added.
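For reference, this is roughly how we sample VRAM between add/remove steps. It's a small sketch that assumes `nvidia-smi` is on PATH (as it is with driver 535.x); the function names are ours, not from any library.

```python
# Sketch: sample total GPU memory used, as reported by nvidia-smi,
# between each "add camera" / "remove camera" step of the benchmark.
import subprocess

def parse_used_mib(csv_output: str) -> int:
    # Parses the output of:
    #   nvidia-smi --query-gpu=memory.used --format=csv,noheader,nounits
    # which is one integer (MiB) per GPU; we take GPU 0.
    return int(csv_output.strip().splitlines()[0])

def gpu_used_mib() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_used_mib(out)
```

We log `gpu_used_mib()` before and after each step and plot the series to get the graphs described above.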
My system:

PyPI package:
- inference-gpu: 0.23.0

Software & hardware:
- OS: Ubuntu 22.04.5 LTS x86_64
- GPU: Tesla P4 8 GB (NVIDIA driver: 535.183.01)
- CPU: Intel Xeon Silver 4216 (12) @ 2.095 GHz