
ffmpeg nvmpi usage from within a docker container #108

Open
jeroenvanderschoot opened this issue Jan 17, 2022 · 5 comments

Comments

@jeroenvanderschoot

Is it possible to use the binaries from within a docker container?

Does anybody have a sample Dockerfile for this?

@grantthomas

It should be possible, I would imagine, if you can figure out which resources to pass through to the container.

Nvidia has documentation on using regular PCI-E cards for GPU acceleration in docker containers:
https://github.com/NVIDIA/nvidia-docker

You would probably be better served by starting with the existing Docker guides for AI/ML accelerators on Jetson, like something here:
https://forums.developer.nvidia.com/t/how-to-build-docker-container-for-jetson-nano/183281
or here:
https://medium.com/@Smartcow_ai/building-arm64-based-docker-containers-for-nvidia-jetson-devices-on-an-x86-based-host-d72cfa535786

@jeroenvanderschoot

@grantthomas thanks for pointing this out.

What would be needed to copy the binaries themselves into the container?
Should I repeat the build steps from the wiki during the build of the container image itself?

@grantthomas

I think it'd depend on how you wanted to structure it.

You could roll the binaries in, or do a volume, so long as the docker container has access to the files.

Rolling them in is probably better practice, but a volume would make it easier to update the bins without having to rebuild the docker container.
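For the volume approach, a hypothetical invocation might look like this (the paths, library name, and image name are placeholders for whatever your own build produced):

```shell
# Hypothetical: bind-mount the host-built ffmpeg binary and nvmpi library
# read-only instead of baking them into the image.
# Paths, the .so name, and the image name are placeholders.
docker run --rm \
  -v /usr/local/bin/ffmpeg:/usr/local/bin/ffmpeg:ro \
  -v /usr/local/lib/libnvmpi.so:/usr/local/lib/libnvmpi.so:ro \
  my-jetson-base ffmpeg -version
```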

I moved a pre-compiled binary along with the proper .so objects from one Jetson NX to another, and was able to get it to work after installing the so files globally.

I don't think you need to do the full compilation, as long as you have the kernel modules available.
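A minimal sketch of what copying pre-built binaries into an image could look like (the base image tag, paths, and .so names are assumptions; use whatever your own build produced per the wiki):

```dockerfile
# Hypothetical sketch: copy a pre-built ffmpeg + nvmpi libraries into an L4T base image.
# Base image tag, paths, and library names are assumptions -- adjust to your build.
FROM nvcr.io/nvidia/l4t-base:r32.6.1

# ffmpeg binary built on the host following the wiki instructions
COPY ffmpeg /usr/local/bin/ffmpeg

# the nvmpi shared objects produced by the build (names may differ)
COPY libnvmpi.so* /usr/local/lib/

# register the copied .so files with the dynamic linker
RUN ldconfig
```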

I would assume you'd need to pass through whatever nvidia suggests for typical Jetson platform GPU usage, but that's just a guess.

Good luck, and please report back if/when you get it working, I'm sure you won't be the only one who's interested.
I know I really don't like using gstreamer for what I need and much prefer ffmpeg.

@grantthomas

Actually, on second thought, rolling them in would probably be the only way to keep it consistent, since the .so file(s) need to be present and loaded at boot, unless you want to bootstrap loading the modules on every container start.

@Azkali

Azkali commented Feb 11, 2022

These are the devices that need to be passed to the container:

/dev/dri
/dev/nvhost-as-gpu
/dev/nvhost-ctrl
/dev/nvhost-ctrl-gpu
/dev/nvhost-ctrl-isp
/dev/nvhost-ctrl-isp.1
/dev/nvhost-ctrl-nvdec
/dev/nvhost-ctxsw-gpu
/dev/nvhost-dbg-gpu
/dev/nvhost-gpu
/dev/nvhost-isp
/dev/nvhost-isp.1
/dev/nvhost-msenc
/dev/nvhost-nvdec
/dev/nvhost-nvjpg
/dev/nvhost-prof-gpu
/dev/nvhost-sched-gpu
/dev/nvhost-tsec
/dev/nvhost-tsecb
/dev/nvhost-tsg-gpu
/dev/nvhost-vic
/dev/nvmap

Or just pass the --privileged flag if security is not a priority in your use case.
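Putting that together, a hypothetical `docker run` could pass each node with `--device` (image name, encoder name, and ffmpeg arguments below are placeholders, and the device list is trimmed to the encode/decode-relevant nodes for brevity):

```shell
# Hypothetical: expose the Jetson device nodes listed above to the container.
# Image name and ffmpeg arguments are placeholders; add the remaining
# /dev/nvhost-* nodes from the list as needed.
docker run --rm \
  --device /dev/dri \
  --device /dev/nvhost-ctrl \
  --device /dev/nvhost-ctrl-gpu \
  --device /dev/nvhost-gpu \
  --device /dev/nvhost-msenc \
  --device /dev/nvhost-nvdec \
  --device /dev/nvmap \
  my-jetson-ffmpeg \
  ffmpeg -c:v h264_nvmpi -i input.mp4 -c:v h264_nvmpi output.mp4

# Or, if security is not a concern, pass everything at once:
docker run --rm --privileged my-jetson-ffmpeg ffmpeg -version
```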
You can easily make your own Jetson accelerated container from regular containers.
This is how I do it on my end: https://gitlab.com/l4t-community/docker/toybox/-/blob/master/toybox_86_64
