Add support for CIFS (and S3) volumes #3089
-
Correction: Even a docker-compose file doesn't seem to work, since Coolify modifies the docker-compose to point to a different volume, without any ability to edit it. So CIFS isn't working either through the direct Volume dialog or through a docker-compose file.
-
Here's my docker-compose.yml file:

```yaml
services:
  db:
    image: postgres:latest
    volumes:
      - pg_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "{{ environment.POSTGRES_PASSWORD }}"
      ADMIN_USER: "{{ project.ADMIN_USER }}"
      ADMIN_PASSWORD: "{{ environment.ADMIN_PASSWORD }}"
      APP_USER: "{{ project.APP_USER }}"
      APP_PASSWORD: "{{ environment.APP_PASSWORD }}"
    ports:
      - "5432:5432"

volumes:
  pg_data:
    driver: local
    driver_opts:
      type: cifs
      device: "//{{ CIFS_IP }}/{{ CIFS_SHARE }}/{{ CIFS_SUBFOLDER }}"
      username: "{{ CIFS_USERNAME }}"
      password: "{{ CIFS_PASSWORD }}"
```

And here is how Coolify transformed it:

```yaml
services:
  db:
    image: 'postgres:latest'
    volumes:
      - 'qkscc8480wws44kc4wkcskwg-pg_data:/var/lib/postgresql/data'
      - '/data/coolify/applications/qkscc8480wws44kc4wkcskwg/init-scripts:/docker-entrypoint-initdb.d'
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: '{{ environment.POSTGRES_PASSWORD }}'
      ADMIN_USER: '{{ project.ADMIN_USER }}'
      ADMIN_PASSWORD: '{{ environment.ADMIN_PASSWORD }}'
      APP_USER: '{{ project.APP_USER }}'
      APP_PASSWORD: '{{ environment.APP_PASSWORD }}'
    ports:
      - '5432:5432'
    networks:
      qkscc8480wws44kc4wkcskwg: null
    labels:
      - coolify.managed=true
      - coolify.version=4.0.0-beta.323
      - coolify.applicationId=3
      - coolify.type=application
      - coolify.name=db-qkscc8480wws44kc4wkcskwg-135516574471
      - coolify.pullRequestId=0
    restart: unless-stopped
    container_name: db-qkscc8480wws44kc4wkcskwg-135516574471

volumes:
  pg_data:
    driver: local
    driver_opts:
      type: cifs
      device: '//{{ CIFS_IP }}/{{ CIFS_SHARE }}/{{ CIFS_SUBFOLDER }}'
      username: '{{ CIFS_USERNAME }}'
      password: '{{ CIFS_PASSWORD }}'
  qkscc8480wws44kc4wkcskwg-pg_data:
    name: qkscc8480wws44kc4wkcskwg-pg_data

networks:
  qkscc8480wws44kc4wkcskwg:
    name: qkscc8480wws44kc4wkcskwg
    external: true

configs: { }
secrets: { }
```
As you can see, the service no longer points to the CIFS volume, but to an internal volume. Yes, I could probably hack around it by mounting the CIFS share on the host machine and creating a symlink to the correct folder.
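For what it's worth, that host-level hack would look roughly like the sketch below; the share address, credentials, and the Docker volume path are all placeholders, and the exact volume name depends on what Coolify generated:

```sh
# Hypothetical workaround: mount the CIFS share on the host and symlink the
# Docker volume's data directory to it. All values below are placeholders.
sudo mkdir -p /mnt/cifs-pgdata
sudo mount -t cifs //192.168.1.10/share/pgdata /mnt/cifs-pgdata \
  -o username=myuser,password=mypassword,uid=999,gid=999
# uid/gid 999 assume the official postgres image's user; adjust as needed.

# Point the Coolify-generated volume's _data directory at the CIFS mount.
sudo rm -rf /var/lib/docker/volumes/qkscc8480wws44kc4wkcskwg-pg_data/_data
sudo ln -s /mnt/cifs-pgdata /var/lib/docker/volumes/qkscc8480wws44kc4wkcskwg-pg_data/_data
```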
-
@CC007 If you haven't found a good way to integrate your CIFS share yet, I found this works pretty well. You can just create a volume manually that connects to the share and tell your compose file or image to use that volume.

To create a new Docker volume that connects to CIFS:

```sh
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//Your_SMB_Host_here/Share_Path_here \
  --name insert_coolifys_volume_name_here
```

If needed, you can also pass user and group IDs (and other CIFS mount options) via the local driver's `o` option:

```sh
# All of these mount options are optional; I think every option that CIFS allows works here.
docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//Your_SMB_Host_here/Share_Path_here \
  --opt o=uid=xxxx,gid=xxxx,filemode=xxxx,dirmode=xxxx \
  --name insert_coolifys_volume_name_here
```
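If it helps, here's a minimal sketch of the second half of that approach, i.e. pointing a compose file at the manually created volume by marking it as external (the volume name `pg_data_cifs` is just a placeholder for whatever name you used above):

```yaml
services:
  db:
    image: postgres:latest
    volumes:
      # Reuse the manually created CIFS-backed volume instead of letting compose create one.
      - pg_data_cifs:/var/lib/postgresql/data

volumes:
  pg_data_cifs:
    external: true   # volume already exists (created with `docker volume create` above)
```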
-
The issue
To my knowledge, right now the one-click database resources (like PostgreSQL) only support standard Docker volumes or bind mounts, so to install a Postgres database on a CIFS volume, you'd have to create a docker-compose file and specify the CIFS volume there, to be used by a postgres service.
This takes away from the one-click nature of creating a database, since you'd have to define the service yourself, along with all the environment, port, and image configuration.
Alternatives
My server runs on a Raspberry Pi with only some SD-card storage, so running my Postgres database on a local volume is not an option for me.
Solution
It would be nice if you could tick a "Use network storage" checkbox, which gives you the option to provide a CIFS domain/IP, along with the share name and the folder to put the data in (or a single URL field with the whole path), plus the credentials. A sketch of what that could generate is shown below.
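To make the idea concrete, here is an illustrative volume definition such a checkbox could generate; all field values are placeholders, and the `o` mount-option string is simply how Docker's built-in local driver takes CIFS credentials, not necessarily how Coolify would implement it:

```yaml
# Illustrative only: volume generated from hypothetical "Use network storage" form fields.
volumes:
  postgres-data:
    driver: local
    driver_opts:
      type: cifs
      device: "//nas.example.lan/backups/postgres"   # host/IP + share + subfolder
      o: "username=dbuser,password=secret,uid=999,gid=999"
```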
Optionally, since you already support S3 buckets for backups, it would also be nice to be able to use an S3 bucket as a volume. This might not be optimal for the random reads and writes of a database, but it could be useful for other file-storage purposes. This could be either an Amazon S3 bucket or a self-hosted S3-compatible bucket on something like a NAS.
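As a rough illustration of what could sit behind an S3-backed volume (not something Coolify offers today), one common approach is to mount the bucket on the host with a FUSE tool like s3fs and then bind-mount the result into the container; the bucket name, endpoint, and paths below are placeholders:

```sh
# Hypothetical host-side setup with s3fs-fuse; all names and the endpoint are placeholders.
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket; url/use_path_request_style allow a self-hosted S3-compatible endpoint.
mkdir -p /mnt/s3-files
s3fs my-bucket /mnt/s3-files \
  -o passwd_file=~/.passwd-s3fs \
  -o url=https://s3.nas.example.lan \
  -o use_path_request_style

# The mount can then be used like any bind mount, e.g. -v /mnt/s3-files:/data
```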