
native overlay inside Kubernetes #24831

Closed
jtl-novatec opened this issue Dec 13, 2024 · 2 comments

Comments

@jtl-novatec

Hello, I just got into the topic of unprivileged image builds without Docker and need some help.
We are currently thinking about migrating our CI from long-lived AWS EC2 instances to ephemeral Kubernetes-based runners using GitHub's Actions Runner Controller (ARC). Therefore, I was looking into creating a container image for our GitHub workflows that runs rootless Podman inside an unprivileged pod.
I came across the articles by Dan Walsh, looked into the podman/stable image, and tried to combine it with the ARC Dockerfiles.

What is the problem?

The issue I am facing is that the official Podman image appears to be pre-configured for fuse-overlayfs.
In a conference talk on the topic, a speaker advised using native overlayfs instead of FUSE (https://www.youtube.com/watch?v=62p6v_A4KTM&t=1420s&pp=ygUUdW5wcml2aWxlZGdlZCBidWlsZHM%3D).
I want to follow this suggestion because

  • to the layman, native overlay seems to be the more modern and performant approach
  • I cannot install the fuse-device plugin inside our Kubernetes cluster

On my local machine, I can build container images in my custom runner image without issues.
However, when I try to build a container using the runner image via a GitHub workflow (i.e., running inside Kubernetes), I get the following error:

Error: mounting new container: mounting build container "eb7ea3d50ce36184764a098a501d52359d65ecba5b4117e5c906ab8b2f6a651c": creating overlay mount to /home/runner/.local/share/containers/storage/overlay/a713df41a00445e75dec4bd9f347a4b2ad2056e8f04cac637b48604bb6b7e236/merged, mount_data="lowerdir=/home/runner/.local/share/containers/storage/overlay/l/YQPNYV5AE4FFAG27DHECQIX7AF,upperdir=/home/runner/.local/share/containers/storage/overlay/a713df41a00445e75dec4bd9f347a4b2ad2056e8f04cac637b48604bb6b7e236/diff,workdir=/home/runner/.local/share/containers/storage/overlay/a713df41a00445e75dec4bd9f347a4b2ad2056e8f04cac637b48604bb6b7e236/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory

Unfortunately, there are not many resources that explain native vs. fuse-overlayfs.
I don't know why Podman tries to build the image using the fuse-overlayfs binary, because I

  • don't install it via the package manager,
  • tried to configure Podman to use native overlayfs,
  • defined volumes inside the container for the directories used in the Containerfile,
  • mounted an ephemeral volume to the pod.
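For context, "configuring Podman to use native overlayfs" typically comes down to making sure no mount_program is set for the overlay driver in the storage configuration. The following is a minimal sketch of a rootless ~/.config/containers/storage.conf; the paths and the UID are illustrative assumptions, not values taken from the linked pastebins:

```toml
[storage]
driver = "overlay"
runroot = "/run/user/1000/containers"
graphroot = "/home/runner/.local/share/containers/storage"

[storage.options.overlay]
# Leave mount_program unset so Podman uses native (kernel) overlayfs.
# If a line like the following is present, Podman shells out to
# fuse-overlayfs instead of mounting overlay natively:
# mount_program = "/usr/bin/fuse-overlayfs"
```

Whether native overlay is actually in effect can then be checked with `podman info --format '{{.Store.GraphDriverName}}'` and by looking for an `overlay.mount_program` entry in the GraphOptions section of `podman info`.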

Podman info (local machine)
https://pastebin.com/nXtR554V

Podman info (Kubernetes pod)
https://pastebin.com/sFgq9uUZ

Dockerfile
https://pastebin.com/4fgtDBYY

I cannot provide you with details on the Kubernetes node, but it's AWS EKS with a modern kernel.

@Luap99
Member

Luap99 commented Dec 13, 2024

kernel: 5.10.226-214.880.amzn2.x86_64

That is not a "modern" kernel. Rootless overlayfs support was added in kernel 5.11, and some SELinux fixes that were also needed only landed in later versions.
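The version comparison above can be checked mechanically against a node's `uname -r` output. This is a minimal sketch assuming kernel 5.11 as the cutoff; the helper function and its name are my own, not part of Podman:

```shell
# Hedged sketch: check whether a kernel release string such as
# "5.10.226-214.880.amzn2.x86_64" is at least a given major.minor.
# Rootless native overlayfs needs >= 5.11 (see the comment above).
kernel_at_least() {
  release="$1"; want_major="$2"; want_minor="$3"
  major="${release%%.*}"          # text before the first dot, e.g. "5"
  rest="${release#*.}"            # everything after the first dot
  minor="${rest%%.*}"             # text before the next dot, e.g. "10"
  [ "$major" -gt "$want_major" ] ||
    { [ "$major" -eq "$want_major" ] && [ "$minor" -ge "$want_minor" ]; }
}

# Example: the EKS node kernel from this issue is too old for native overlay.
kernel_at_least "5.10.226-214.880.amzn2.x86_64" 5 11 || echo "too old"
```

In a cluster, the same release string would come from `uname -r` on the node (or the KERNEL-VERSION column of `kubectl get nodes -o wide`).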

@Luap99 Luap99 closed this as not planned Won't fix, can't repro, duplicate, stale Dec 13, 2024
@jtl-novatec
Author

You are correct, our clusters are using that kernel. I thought that since the article is from 2021 (https://www.redhat.com/en/blog/podman-rootless-overlay), the feature would already be available in EKS.
