Unable to Volume Mount in Playbook Execution Container #1927
Maybe I misunderstood, but is `ee_extra_volume_mounts` not what you are looking for? EE means execution environment, which is the container that runs your jobs.
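For reference, a minimal sketch of how these settings are typically expressed in the AWX custom resource (per the operator's Custom Volume and Volume Mount Options docs); the volume name and host path below are placeholders matching the example in this issue:

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  # Define the volume on the task pod...
  extra_volumes: |
    - name: volume-mount-hostpath-volume
      hostPath:
        path: /path/to/volume-mount/
        type: Directory
  # ...and mount it into the EE (awx-demo-ee) container.
  ee_extra_volume_mounts: |
    - name: volume-mount-hostpath-volume
      mountPath: /path/to/volume-mount/
      readOnly: true
```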
@YaronL16 I appreciate the suggestion; however, I believe my issue might not be resolved by using `ee_extra_volume_mounts`. The settings of `ee_extra_volume_mounts` are applied to the containers in the task pod, not to the job pod, as you can see from `kubectl describe`:

```
$ kubectl -n awx describe pod awx-demo-task-6898d4ddbc-l82ts
Name:             awx-demo-task-6898d4ddbc-l82ts
Namespace:        awx
Priority:         0
Service Account:  awx-demo
...
Containers:
  redis:
    Image:          docker.io/redis:7
    ...
  awx-demo-task:
    Image:          quay.io/ansible/awx:24.6.1
    ...
    Mounts:
      ...
      /etc/tower/settings.py from awx-demo-settings (ro,path="settings.py")
      /path/to/volume-mount/ from volume-mount-hostpath-volume (ro)
      /var/lib/awx/projects from awx-demo-projects (rw)
      ...
  awx-demo-ee:
    Image:          quay.io/ansible/awx-ee:24.6.1
    ...
    Args:
      /bin/sh
      -c
      if [ ! -f /etc/receptor/receptor.conf ]; then
        cp /etc/receptor/receptor-default.conf /etc/receptor/receptor.conf
        sed -i "s/HOSTNAME/$HOSTNAME/g" /etc/receptor/receptor.conf
      fi
      exec receptor --config /etc/receptor/receptor.conf
    ...
    Mounts:
      ...
      /etc/receptor/work_private_key.pem from awx-demo-receptor-work-signing (ro,path="work-private-key.pem")
      /path/to/volume-mount/ from volume-mount-hostpath-volume (ro)
      /var/lib/awx/projects from awx-demo-projects (rw)
      ...
  awx-demo-rsyslog:
    Image:          quay.io/ansible/awx:24.6.1
    ...
Volumes:
  ...
  volume-mount-hostpath-volume:
    Type:          HostPath (bare host directory volume)
    Path:          /path/to/volume-mount/
    HostPathType:  Directory
  ...
```
As far as I have tried, when running playbooks on AWX v24, it appears that the `automation-job-<job_id>-<random>` pod does not get the volume mounted at all. For example, run the following playbook:

```yaml
- hosts: all
  tasks:
    - local_action:
        module: ansible.builtin.shell
        cmd: printenv
    - local_action:
        module: ansible.builtin.shell
        cmd: ls -la /
    - name: sleep 60
      ansible.builtin.command: sleep 60
```

Then, from the result of the `printenv` task, you'll know the name of the pod that's running the job:

```
changed: true
stdout: >-
  ...
  HOSTNAME=automation-job-3-568rw
  HOME=/runner
  JOB_ID=3
  ...
stderr: ''
rc: 0
cmd: printenv
```

Also, from the result of the `ls -la /` task, you can see that the volume is not mounted anywhere in the job container:
```
changed: true
stdout: |-
  total 72
  drwxr-xr-x   1 root root 4096 Oct 31 06:45 .
  drwxr-xr-x   1 root root 4096 Oct 31 06:45 ..
  dr-xr-xr-x   2 root root 4096 Jun 25 14:23 afs
  lrwxrwxrwx   1 root root    7 Jun 25 14:23 bin -> usr/bin
  dr-xr-xr-x   2 root root 4096 Jun 25 14:23 boot
  drwxr-xr-x   5 root root  360 Oct 31 06:45 dev
  drwxr-xr-x   1 root root 4096 Oct 31 00:25 etc
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 home
  lrwxrwxrwx   1 root root    7 Jun 25 14:23 lib -> usr/lib
  lrwxrwxrwx   1 root root    9 Jun 25 14:23 lib64 -> usr/lib64
  drwx------   2 root root 4096 Oct 28 04:09 lost+found
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 media
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 mnt
  drwxr-xr-x   1 root root 4096 Oct 31 00:19 opt
  dr-xr-xr-x 315 root root    0 Oct 31 06:45 proc
  dr-xr-x---   1 root root 4096 Oct 31 00:24 root
  drwxr-xr-x   1 root root 4096 Oct 31 00:26 run
  drwxrwxr-x   1 root root 4096 Oct 31 06:45 runner
  lrwxrwxrwx   1 root root    8 Jun 25 14:23 sbin -> usr/sbin
  drwxr-xr-x   2 root root 4096 Jun 25 14:23 srv
  dr-xr-xr-x  13 root root    0 Oct 31 06:45 sys
  drwxrwxrwt   1 root root 4096 Oct 31 06:45 tmp
  drwxr-xr-x   1 root root 4096 Oct 28 04:09 usr
  drwxr-xr-x   1 root root 4096 Oct 28 04:09 var
stderr: ''
rc: 0
cmd: ls -la /
```

While the playbook above is running, if you inspect the state of the corresponding pod on K8s, you can also see that the container running the job has not mounted anything.
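For example, while the `sleep 60` task keeps the job pod alive, you can inspect it directly; a minimal sketch, using the pod name taken from the `printenv` output above (yours will differ):

```
# The sleep task gives a ~60-second window to inspect the running job pod.
$ kubectl -n awx get pods
$ kubectl -n awx describe pod automation-job-3-568rw
```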
Feature Summary
AWX creates an `automation-job-<job_id>-<random>` pod and executes the playbook within the worker container. Using the method described in Custom Volume and Volume Mount Options, volumes can be mounted to the Web or Control Plane containers. However, there is no way to mount a volume to the job's worker container.
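To make the request concrete, here is a hypothetical sketch of what the `automation-job` pod would need to contain for a host directory to be visible to the playbook; the volume name and paths are the placeholders used earlier in this issue:

```yaml
# Hypothetical illustration of the desired job pod, not something AWX
# currently generates: the hostPath volume from the example above,
# mounted into the worker container.
apiVersion: v1
kind: Pod
metadata:
  generateName: automation-job-3-
  namespace: awx
spec:
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:24.6.1
      volumeMounts:
        - name: volume-mount-hostpath-volume
          mountPath: /path/to/volume-mount/
          readOnly: true
  volumes:
    - name: volume-mount-hostpath-volume
      hostPath:
        path: /path/to/volume-mount/
        type: Directory
```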