
PiHole on docker using networked (CIFS) volumes fails to initialize the gravity DB #750

Open
katbyte opened this issue Jan 4, 2021 · 16 comments
Labels
never-stale Use this label to ensure the stale action does not close this issue

Comments

@katbyte

katbyte commented Jan 4, 2021

Versions

  • Pi-hole v5.2.2
  • Web Interface v5.2.2
  • FTL v5.3.4

Platform

Debian 8.3/linux 3.19 (intel NUC)

Docker 20 with docker-compose 1.21

Expected behavior

Pi-hole to initialize correctly, persist the query log/data, and allow lists to be managed.

Actual behavior / bug

The server boots up and filters, but I cannot manage lists, and if I restart the container, queries are not persisted.

The Docker volumes are on a networked CIFS mount; if I move them to a path on the local server, everything works as expected. Other containers use these mounts just fine.

The following can be found in the startup log:

pihole      | chown: cannot access '': No such file or directory
pihole      | chmod: cannot access '': No such file or directory
pihole      | chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory
...
::: Docker start setup complete
pihole      |   [i] Creating new gravity database
pihole      | Error: near line 190: database is locked
pihole      | Error: no such table: info
pihole      |   [i] Migrating content of /etc/pihole/adlists.list into new database
pihole      |
pihole      |   [✗] Unable to fill table adlist in database /etc/pihole/gravity.db
pihole      |   CREATE TABLE adlist(...) failed: duplicate column name: 1

and when navigating to the groups page:

DataTables warning: table id=groupsTable - Ajax error. For more information about this error, please see http://datatables.net/tn/7

Steps to reproduce

A host CIFS mount:

//x.x.x.x/nuc/docker on /mnt/data/docker type cifs (rw,relatime,vers=3.0,cache=strict,username=nuc,uid=1001,forceuid,gid=1001,forcegid,addr=x.x.x.x,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

And then Pi-hole defined as a Docker Compose service such as:

  pihole:
    container_name: pihole
    image: pihole/pihole:v5.3.4
    hostname: nuc
    ports:
      - '53:53/tcp'
      - '53:53/udp'
      - '8080:80'
      - '8433:443'
    restart: unless-stopped
    volumes:
      - /mnt/data/docker/dns/pihole/pi:/etc/pihole
      - /mnt/data/docker/dns/pihole/dnsmasq:/etc/dnsmasq.d
    environment:
      - ServerIP=10.0.0.2
      - TZ="America/Vancouver"
      - WEBPASSWORD=docker
      - DNS1=10.0.0.2#553
    depends_on:
      - bind

and then brought up with docker-compose.

Debug Token

Screenshots

Additional context

Inspection of the Docker mounts:

"Mounts": [
        {
            "Destination": "/etc/pihole",
            "Mode": "rw",
            "Propagation": "rprivate",
            "RW": true,
            "Source": "/mnt/data/docker/dns/pihole/pi",
            "Type": "bind"
        },
        {
            "Destination": "/etc/dnsmasq.d",
            "Mode": "rw",
            "Propagation": "rprivate",
            "RW": true,
            "Source": "/mnt/data/docker/dns/pihole/dnsmasq",
            "Type": "bind"
        }
    ],

and the mount as seen from inside the container:

//x.x.x.x/nuc/docker on /etc/pihole type cifs (rw,relatime,vers=3.0,cache=strict,username=nuc,uid=1001,forceuid,gid=1001,forcegid,addr=x.x.x.x,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
@dschaper dschaper transferred this issue from pi-hole/pi-hole Jan 4, 2021
@edgd1er
Contributor

edgd1er commented Jan 4, 2021

I had a similar problem with ACLs on OpenMediaVault; the gravity DB was in read-only mode.

By default, the pihole user ID inside the container is 999 and the pihole group has another ID.
I added a script in cont-init.d to set the pihole UID and GID according to environment variables, and I also added www-data to the pihole group.
Here that would mean setting the pihole UID to 1001 and the GID to 1001 (the CIFS mount values).

This might be a partial solution for your case. At startup, I think the gravity and pihole-FTL databases are created by root. Docker Pi-hole might need further modifications to have the databases created and written by a specific user.
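
Not part of the official image, just a rough sketch of the kind of cont-init.d script described above, assuming hypothetical PIHOLE_UID/PIHOLE_GID environment variables and an s6-style init:

#!/usr/bin/env sh
# Sketch only: remap the pihole user/group to the IDs that own the network mount (1001:1001 here),
# add www-data to the pihole group, and re-own the persisted files.
: "${PIHOLE_UID:=1001}"
: "${PIHOLE_GID:=1001}"

groupmod -o -g "$PIHOLE_GID" pihole
usermod  -o -u "$PIHOLE_UID" -g "$PIHOLE_GID" pihole
usermod  -aG pihole www-data

chown -R "$PIHOLE_UID:$PIHOLE_GID" /etc/pihole /etc/dnsmasq.d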

@muzzah

muzzah commented Jan 4, 2021

I created an issue for something similar just yesterday #749. I am using s3fs to mount S3 buckets.

It would be great if there were documentation on the user and group IDs used, as this would help with setting up the right permissions for the mounted folders. @edgd1er, can you point to where the users are created for the Docker container? I tried looking but couldn't find it.

@katbyte
Author

katbyte commented Jan 4, 2021

@muzzah - or if we could just configure the UID and GID to whatever we need. There is actually a feature request open since 2018 with many comments on it: #328

@edgd1er
Contributor

edgd1er commented Jan 4, 2021

@muzzah ,

This is the script I created in my own version of docker-pihole, which is 99% based on the Pi-hole sources. There are minor changes such as redirecting logs to the container stdout and the user UID/GID changes; there was previously a fix for REV_SERVER_VARS and an upgrade to the latest S6 version.

You may have to adapt the script to fit your needs.

As @DL6ER said, there are three different users running scripts in the container: root, www-data, and pihole. The script fixes two problems, pihole-owned files and www-data-owned files, by setting an env-provided UID and GID and by adding www-data to the pihole group. root-owned/executed files are not changed.

As pihole-FTL needs a port below 1024, root privileges are needed; a single user seems difficult to fit for all roles.

@muzzah

muzzah commented Jan 4, 2021

I ended up just getting rid of my S3 buckets. I don't understand why there have to be 3 users here and why there can't just be one; you are in a container after all, so why not just set up one user and use that?
It's practically a nightmare for me to set up an s3fs-mounted bucket to manage multiple permissions, since S3 has no concept of permissions. I would have to set up separate buckets with the respective ownership and ensure that all the necessary files from the container are written into those respective buckets.

By the way, you can override user IDs in Docker containers by using the --user argument or the user: field in Docker Compose, though this also has its quirks.
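
As a concrete sketch of that --user override (not something the Pi-hole image officially documents; 1001:1001 is the CIFS mount owner from this thread, and the quirks mentioned above still apply):

# Sketch: run the container as the UID/GID that owns the network mount instead of the image defaults
# (ports, env vars, and the rest of the compose options shown earlier are omitted for brevity).
docker run --user 1001:1001 \
  -v /mnt/data/docker/dns/pihole/pi:/etc/pihole \
  -v /mnt/data/docker/dns/pihole/dnsmasq:/etc/dnsmasq.d \
  pihole/pihole:v5.3.4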

@katbyte
Author

katbyte commented Jan 5, 2021

I tried setting the Docker user to root/0 and it still didn't work 😞

@DL6ER
Member

DL6ER commented Jan 5, 2021

I don't understand why there have to be 3 users here and why there can't just be one

Because Pi-hole isn't meant to run only inside Docker; it can also be installed natively on the system. And given the fairly steep learning curve of Docker and friends, native installs are (probably by far) the most common type of installation. Over the past few years, Pi-hole became one of the top reasons why people are still buying Raspberry Pis; I think it was among the top 3, though I don't remember the other projects. It was brought up in some DIY magazine one or two years ago.

Coming back to the question: we use three users because it is meaningful. www-data is the standard user for web server stuff; as users may already have an existing web user, we should not touch their setup. Then we need root only for a few things, like installing or modifying config files that are root-owned. And then we have the user pihole, which runs the process at the core of Pi-hole: pihole-FTL. It is also a security feature for you all that we don't run pihole-FTL as root but rather as the (almost entirely) unprivileged user pihole. We give the binary special powers using Linux capabilities so that it can bind to the necessary ports without being root at all. Because of this, if there were a bug in pihole-FTL, it could never destroy your whole system.
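
The capability mechanism mentioned above looks roughly like this (a sketch of the general technique only; the exact capability set Pi-hole grants and the binary path may differ):

# Allow the binary to bind privileged ports (<1024) without running as root,
# then confirm the file capabilities that were set.
setcap 'cap_net_bind_service,cap_net_raw,cap_net_admin+eip' /usr/bin/pihole-FTL
getcap /usr/bin/pihole-FTL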

Personally, I think this is how all daemons on any operating system should work, but this is rarely (if ever) found anywhere. Imagine a buffer overflow with a remote code execution flaw in some root daemon xyz. That remote code execution may very well wipe your entire system, read all your data and upload it somewhere else, or encrypt your entire disk, all because it has root powers. FTL, instead, could at most read and destroy the files it owns. It couldn't even read the files of other users on the system, nor could it destroy the system in any other way. I'm just saying that I'm a lot more confident with such a security concept than with what the others do ("our daemons have no exploitable flaws" ... "because we say so").

I know that Docker solves such issues elegantly itself by isolating things away, and I think how it does so is a very valid way to fix the issue I brought up (mostly). However, as Docker is still the minority of installations out there, we cannot modify the base of Pi-hole to work the way Docker thinks things should work. We also cannot afford to maintain a more Docker-tailored fork of the project, as it would just consume too much of the sparse (entirely volunteer!) development power we have at hand; we're all just too busy with our everyday jobs. For similar reasons, we also cannot "patch things" to make them work better with the Docker concept, as new releases change things, and properly testing whether the changes broke the container patches would be another heavy workload on us, slowing down the release process just too much.

@muzzah

muzzah commented Jan 5, 2021

Thanks for the explanation, @DL6ER.
I understand the reasoning; as with everything in software development, it's a tradeoff. In this instance your tradeoff makes sense, but it does make using Pi-hole with mounted drives a pain.

Though, even though I am now using a local disk for bind mounts, I am still seeing

pihole      | chown: cannot access '': No such file or directory
pihole      | chmod: cannot access '': No such file or directory
pihole      | chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory

and I had to downgrade from the latest tag to even be able to add block lists. I believe there are indeed some permission problems with the latest release, as I could not use Pi-hole due to some SQLite DB errors. The Teleporter also does not work in the latest tag.

@DL6ER
Member

DL6ER commented Jan 5, 2021

Yeah, it is indeed possible that CIFS/SMB doesn't work, as Pi-hole expects to be installed on Linux and implicitly expects permissions to work. I haven't ever tried installing Pi-hole on a filesystem that does not support permissions myself, so I cannot comment much. However, I know some people succeeded in installing Pi-hole in the Windows Subsystem for Linux (if that is what it's called) and they were able to use Pi-hole with (almost?) no adaptations. Maybe they have some input for you; pinging @PromoFaux, who experimented with this IIRC.

What is the file system on your disk?

pihole      | chown: cannot access '': No such file or directory
pihole      | chmod: cannot access '': No such file or directory
pihole      | chown: cannot access '/etc/pihole/dhcp.leases': No such file or directory

I cannot really say where the empty filenames are coming from; however, the warning about dhcp.leases is fine and expected. This error should be suppressed if the file doesn't exist - where do you see it?

@katbyte
Author

katbyte commented Jan 6, 2021

@DL6ER - thanks for your time/response. Personally I think it's great to default to the safe daemon-on-host model, especially for a project that is many people's first intro into this world, and I've spent enough of my life lecturing people not to run services as root 😅 But it can be useful to specify the UID/GID to use for the processes, even when not root and when run outside a VM/container (and as can be seen from #328, I'm not the only one who could benefit from this). I mount the Docker volume with an unprivileged user and have done my best to set containers to run processes as that UID/GID (which is unique to Docker). I'm running 17 containers, and while some end up as root, Pi-hole is the only one that I wasn't able to coax into working.

FWIW this is a pretty unique edge case with networked CIFS on a Linux host, as it's this combination of OS/protocol that mounts the volume as a single UID/GID, and so far it doesn't seem worth the hassle to set up NFS or mount the share as 777 just for this one container... yet 🙃 I presume it worked in WSL (1 and 2 are quite different), as I think Microsoft just hand-waves permissions away, very much their style. While I work with Azure daily, I haven't owned a Windows computer for ages.

Feel free to close this, as #328 would solve my problem. Honestly, I just threw it on a local volume with a cron job to shut it down and copy it to where I want weekly (roughly the idea sketched below), and that works just fine! I don't think there is much else to it other than Pi-hole using a fixed UID/GID. Would you accept a PR to expose those properties (and any guidance on what would be involved)? I'm still moving services over to Docker, but if Pi-hole is the only one that ends up local, I might just try to add it, hah.
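
That weekly copy is nothing Pi-hole-specific; a sketch of the kind of cron entry meant here (the schedule and the /opt/pihole-data source path are illustrative):

# /etc/cron.d/pihole-backup (illustrative): every Sunday at 03:00, stop the container,
# sync the locally stored state to the CIFS share, then start the container again.
0 3 * * 0 root docker stop pihole && rsync -rt /opt/pihole-data/ /mnt/data/docker/dns/pihole/pi/ && docker start pihole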

@muzzah - care to share your compose file? The latest version worked just fine on a local path for me; make sure the Docker user matches the owner of the mount point, or run the container as root.

Also, with regard to the networked S3 volume, did you try setting the mount to 0777 so all users could write to it? I DO NOT recommend this, as it is terrible for security, but if it's just your Pi-hole data, maybe it's OK. A quick Google search found me:

Q: No permission for directories and files
A: s3fs supports files and directories which are uploaded by other S3 tools(ex. s3cmd/s3 console). Those tools upload the object to S3 without x-amz-meta-(mode,mtime,uid,gid) HTTP headers. s3fs uses these meta http headers for looking like filesystems, but s3fs can not know the meta data for file because there is no meta data. Thus s3fs displays "d---------" and "---------" for those file(directory) permission. There are several ways to solve. One is that you can give permission to files and directories by chmod command, or can set x-amz-meta- headers for files by other tools. Alternatively, you can use umask, gid and uid option for s3fs.
A: you can use complement_stat option. It gives the file/directory the permissions as appropriate as possible.
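
Following that FAQ, a sketch of what the s3fs mount could look like (bucket name, mountpoint, and IDs are placeholders; untested here, and the 0000 umask carries the same security caveat as above):

# Present every object as owned by 1001:1001 and world-writable, working around the
# missing POSIX permission metadata on objects uploaded by other S3 tools.
s3fs my-pihole-bucket /mnt/pihole -o uid=1001 -o gid=1001 -o umask=0000 -o complement_stat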

@DL6ER
Member

DL6ER commented Jan 6, 2021

But it can be useful to specify the UID/GID to use for the processes

Oh, we have some facilities to achieve this, but I'm not so sure they are what you need. Before trying anything new, you may first want to modify the values in /etc/dnsmasq.d/01-pihole.conf (look for user= and group=). The user pihole is hard-wired into the init.d/service script at this time. I'll admit right away that I'm not using Pi-hole in Docker (nor am I using Docker for anything other than controlled CI machines for compiling and checking stuff), so I'm not the person to say what is feasible (and what is not) with regard to Docker. Let's see if Adam has time to chime in at some point; he's at least using Pi-hole in Docker :-)
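
A minimal sketch of that edit (untested; it assumes 01-pihole.conf already contains user= and group= lines as described above, and "nuc" stands in for whatever account owns the mounted share):

# Check the current values, then point dnsmasq/FTL at the account that owns the CIFS mount.
grep -E '^(user|group)=' /etc/dnsmasq.d/01-pihole.conf
sed -i -e 's/^user=.*/user=nuc/' -e 's/^group=.*/group=nuc/' /etc/dnsmasq.d/01-pihole.conf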

@github-actions

github-actions bot commented Jan 9, 2022

This issue is stale because it has been open 30 days with no activity. Please comment or update this issue or it will be closed in 5 days.

@katbyte
Author

katbyte commented Nov 6, 2022

As of now this is still an issue for me: I have been unable to get Docker Pi-hole to work on network mounts.

Can I please have this issue reopened?

@PromoFaux PromoFaux reopened this Nov 8, 2022
@PromoFaux PromoFaux added never-stale Use this label to ensure the stale action does not close this issue and removed Submitter Attention Required labels Nov 8, 2022
@madnuttah

madnuttah commented Nov 28, 2022

I have solved this by adding "Everyone" with write permissions, plus "Admin" and the admin group with full access, to my SMB share. If it's of any help, I could drop my fstab here, too.

Edit: this is the line in my fstab:

//ipaddress/path /mnt/path cifs rw,auto,nobrl,vers=3,file_mode=0777,dir_mode=0777,credentials=/root/.smbcredentials 0 0

Maybe the 'nobrl' parameter prevents database locks.

@firewire10000

firewire10000 commented Jan 3, 2023

I too have had major frustration trying to get Pi-hole to work on a Samba share on the host. The nobrl Samba mount option seems to fix it for me.

@RoyRock413

Same here regarding a Synology NAS, CIFS-mounted Docker volumes, and Pi-hole having permission issues/locked databases when attempting to update gravity. Just as @madnuttah and @firewire10000 said, passing the "nobrl" option when specifying the mount in my compose file seemed to fix this. Thanks, folks!
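
For reference, the nobrl option can also be passed at the Docker level instead of in fstab by letting the local volume driver perform the CIFS mount; a sketch with placeholder share, credentials, and IDs (the equivalent driver_opts keys work in a compose volumes: block):

docker volume create --driver local \
  --opt type=cifs \
  --opt device=//x.x.x.x/nuc/docker/dns/pihole/pi \
  --opt o=username=nuc,password=CHANGEME,uid=1001,gid=1001,file_mode=0777,dir_mode=0777,nobrl,vers=3.0 \
  pihole_etc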
