-
Testing demo-image-full R36.3, commit e470442d70133ad06acd931f57d91f52d16c3439. The hardware codec works fine on the host, but in a container I kept getting an error.
Then I ran:

I kept getting this error:
Does anyone have an idea how this can be solved? I suspect that some libraries are not properly mapped into the container.
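As a rough way to test the "libraries not mapped" theory: on Jetson, the NVIDIA container runtime mounts host libraries and device nodes listed in CSV files under /etc/nvidia-container-runtime/host-files-for-container.d/. Here is a minimal sketch (the directory argument is parameterized and the file names in it are illustrative, not taken from this thread) that flags CSV entries whose paths are missing on the host:

```shell
#!/bin/sh
# Sketch: scan nvidia-container-runtime CSV mount specs and report any
# listed path that does not exist. On a real Jetson the spec directory
# is /etc/nvidia-container-runtime/host-files-for-container.d; it is a
# parameter here so the sketch can be exercised anywhere.
check_csv_mounts() {
    csv_dir="${1:-/etc/nvidia-container-runtime/host-files-for-container.d}"
    for f in "$csv_dir"/*.csv; do
        [ -e "$f" ] || continue
        while IFS=, read -r kind path; do
            # Trim surrounding whitespace from the path field.
            path=$(printf '%s' "$path" | sed 's/^ *//;s/ *$//')
            case "$kind" in
                lib|sym|dev|dir) [ -e "$path" ] || echo "missing: $path" ;;
            esac
        done < "$f"
    done
}

check_csv_mounts "$@"
```

If this prints nothing on the host, the mount specs at least point at files that exist; whether they actually get mapped into the container is a separate question.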
Replies: 6 comments 5 replies
-
Try:
-
Hi, thanks for the reply, but it's actually the exact same image:

So I still got the same error. Did it work in your test? This is how I built the image:
-
Thanks for the update, but I am still encountering the same error.
Build info
files under
It can be tested using the following if you have other decoders installed. This fails:

The following software decoding works fine in the container:
-
docker run -it --rm --runtime nvidia --net=host nvcr.io/nvidia/deepstream:7.0-samples-multiarch bash

instead of

docker run -it --rm --runtime nvidia --net=host --privileged nvcr.io/nvidia/deepstream:7.0-samples-multiarch bash
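A middle ground between the two commands above, as a sketch: instead of --privileged, pass individual device nodes with --device. The exact nvhost device names vary by Tegra generation, so this helper simply collects whatever /dev/nvhost-* nodes exist (the directory is parameterized purely so the sketch can run anywhere; on a Jetson it is /dev):

```shell
#!/bin/sh
# Sketch: build --device flags from existing nvhost nodes so the
# container can be started without --privileged.
collect_nvhost_devices() {
    dir="${1:-/dev}"
    flags=""
    for node in "$dir"/nvhost-*; do
        [ -e "$node" ] || continue
        flags="$flags --device=$node"
    done
    printf '%s\n' "$flags"
}

# Illustrative usage on the host:
#   docker run -it --rm --runtime nvidia --net=host $(collect_nvhost_devices) \
#       nvcr.io/nvidia/deepstream:7.0-samples-multiarch bash
collect_nvhost_devices "$@"
```

This makes it explicit which device nodes the container receives, which can help narrow down whether a missing node is behind the decoder failure.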
Never mind, I found that it only works sometimes. For example, this time it does not work, but if I exit the container and then start a new one, there is a chance that it might work:

ERROR: pipeline doesn't want to preroll.
Failed to set pipeline to PAUSED.
Setting pipeline to NULL ...
Freeing pipeline ...
root@p3768-0000-p3767-0000:/opt/nvidia/deepstream/deepstream-7.0# exit
exit
root@p3768-0000-p3767-0000:~# docker run -it --rm --runtime nvidia --net=host nvcr.io/nvidia/deepstream:7.0-samples-multiarch bash
==========
== CUDA ==
==========
CUDA Version 12.2.12
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
root@p3768-0000-p3767-0000:/opt/nvidia/deepstream/deepstream-7.0# gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! fakesink
<deleted all the warnings as they are not relevant...>
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
Redistribute latency...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:03.600543689
Setting pipeline to NULL ...
Freeing pipeline ...
root@p3768-0000-p3767-0000:/opt/nvidia/deepstream/deepstream-7.0#
You can see that the newly created container works just fine; both runs use exactly the same command. I also find that:

And another file which is not created correctly in the container:
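Since the preroll failure is intermittent and a freshly started container sometimes succeeds, one stop-gap sketch (not a root-cause fix; the retry count and the wrapped script name are illustrative) is a small retry helper that relaunches the test until it succeeds:

```shell
#!/bin/sh
# Sketch: run a command up to N times until it succeeds. Intended as a
# workaround for the intermittent "pipeline doesn't want to preroll"
# failure, not a fix for its root cause.
retry() {
    max="$1"; shift
    i=1
    while [ "$i" -le "$max" ]; do
        if "$@"; then
            return 0
        fi
        echo "attempt $i/$max failed, retrying..." >&2
        i=$((i + 1))
    done
    return 1
}

# Illustrative usage, wrapping the docker run + gst-launch-1.0 test in a
# hypothetical script:
#   retry 5 ./run_decode_test.sh
```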
-
nvidia-container-runtime.log
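For reference, that debug log is enabled through /etc/nvidia-container-runtime/config.toml; in the stock NVIDIA Container Toolkit layout the debug entries ship commented out and can be uncommented. A typical fragment (paths may differ between toolkit versions) looks like:

```toml
# /etc/nvidia-container-runtime/config.toml (fragment)
[nvidia-container-cli]
debug = "/var/log/nvidia-container-toolkit.log"

[nvidia-container-runtime]
debug = "/var/log/nvidia-container-runtime.log"
```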
-
Seems to be working fine now, thanks for all your help!
OK, I've updated the patch again to eliminate the problem caused by the unordered values returned by that function. Try updating to the latest scarthgap again; hopefully I got it right this time.