Endpoint with scheme problem - storage class provisions, but unable to mount #80

Open
davidpechcz opened this issue Jun 23, 2023 · 2 comments

@davidpechcz

geesefs -v
geesefs version 0.35.4

We are unable to mount the volume correctly after it is provisioned. We are using the Helm chart and this storageClass:

mounter: geesefs
options: ' --no-systemd --memory-limit 1000 --dir-mode 0777 --file-mode 0666 --debug
--debug_fuse --debug_s3'

and secret.endpoint: https://fradozn9lozb.compat.objectstorage.eu-frankfurt-1.oraclecloud.com. This provisions the bucket, but it does not mount correctly and fails with this error (a sketch of the full StorageClass/Secret pair follows the log):

Jun 23 13:58:42 metro-acc-node1 kubelet[885]: E0623 13:58:42.082579 885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/ru.yandex.s3.csi^makro-bin-acc/pvc-6977b81b-884b-43c4-84f1-fafcca195753 podName: nodeName:}" failed. No retries permitted until 2023-06-23 14:00:44.082522884 +0000 UTC m=+195623.760539894 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-6977b81b-884b-43c4-84f1-fafcca195753" (UniqueName: "kubernetes.io/csi/ru.yandex.s3.csi^makro-bin-acc/pvc-6977b81b-884b-43c4-84f1-fafcca195753") pod "mocz-nginx-c68bf674b-zgrk6" (UID: "3d6c8e35-22b4-4bb2-8148-1e661965a8f2") : rpc error: code = Unknown desc = Error fuseMount command: geesefs
Jun 23 13:58:42 metro-acc-node1 kubelet[885]: args: [--endpoint https://fradozn9lozb.compat.objectstorage.eu-frankfurt-1.oraclecloud.com -o allow_other --log-file /dev/stderr --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 makro-bin-acc:pvc-6977b81b-884b-43c4-84f1-fafcca195753 /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/7ca88b01d8604c6cbabde677ea3418367b1a9047aa128a8139988732712f0046/globalmount]
Jun 23 13:58:42 metro-acc-node1 kubelet[885]: output:
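For context, here is roughly what the StorageClass/Secret combination above amounts to. This is only a sketch: the provisioner name and mount options are taken from this report and the kubelet log, the csi.storage.k8s.io secret references and the Secret key names are assumed to follow the usual csi-s3 chart layout, and the credentials are placeholders.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-s3
provisioner: ru.yandex.s3.csi
parameters:
  mounter: geesefs
  options: "--no-systemd --memory-limit 1000 --dir-mode 0777 --file-mode 0666 --debug --debug_fuse --debug_s3"
  # secret references for the provisioner/node stages, as wired up by the chart
  csi.storage.k8s.io/provisioner-secret-name: csi-s3-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-s3-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: kube-system
stringData:
  accessKeyID: XX                 # placeholder
  secretAccessKey: YY             # placeholder
  endpoint: https://fradozn9lozb.compat.objectstorage.eu-frankfurt-1.oraclecloud.com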

All examples include the scheme in the endpoint. When we try to use an endpoint without 'http://', we get an error before provisioning (kubelet):
Jun 23 21:14:42 metro-acc-node1 kubelet[885]: E0623 21:14:42.071632 885 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/ru.yandex.s3.csi^makro-bin-acc/pvc-e7ca2fb3-43e6-4140-b7b4-3bb3240b195d podName: nodeName:}" failed. No retries permitted until 2023-06-23 21:16:44.071594657 +0000 UTC m=+221783.749611665 (durationBeforeRetry 2m2s). Error: MountVolume.MountDevice failed for volume "pvc-e7ca2fb3-43e6-4140-b7b4-3bb3240b195d" (UniqueName: "kubernetes.io/csi/ru.yandex.s3.csi^makro-bin-acc/pvc-e7ca2fb3-43e6-4140-b7b4-3bb3240b195d") pod "mkcz-nginx-5ffbb57f8b-mzwk6" (UID: "a33aee50-1a1c-4960-bcd9-25c10b3c8028") : rpc error: code = Unknown desc = failed to initialize S3 client: Endpoint: does not follow ip address or domain name standards.

From inside the container (kubectl exec kube-system/csi-s3-9x685, container csi-s3), when trying manually:

The endpoint with "https" does not work; without it, it works.

/ # export AWS_ACCESS_KEY_ID=XX
/ # export AWS_SECRET_ACCESS_KEY=YY
/ # geesefs --endpoint https://fradozn9lozb.compat.objectstorage.eu-frankfurt-1.oraclecloud.com -o allow_other --log-file /dev/stderr --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 makro-bin-acc:pvc-e7ca2fb3-43e6-4140-b7b4-3bb3240b195d /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/f03642a6dc198d3dd3165af841807f18095bf8eebcdf5c749940438e6cffa049/globalmount
2023/06/23 21:36:32.759654 main.ERROR Unable to access 'makro-bin-acc': bucket makro-bin-acc does not exist
2023/06/23 21:36:32.759813 main.FATAL Mounting file system: Mount: initialization failed
2023/06/23 21:36:33.760127 main.FATAL Unable to mount file system, see syslog for details

Without 'https', it works correctly:

/ # geesefs --endpoint fradozn9lozb.compat.objectstorage.eu-frankfurt-1.oraclecloud.com -o allow_other --log-file /dev/stderr --setuid 65534 --setgid 65534 --memory-limit 1000 --dir-mode 0777 --file-mode 0666 makro-bin-acc:pvc-e7ca2fb3-43e6-4140-b7b4-3bb3240b195d /var/lib/kubelet/plugins/kubernetes.io/csi/ru.yandex.s3.csi/f03642a6dc198d3dd3165af841807f18095bf8eebcdf5c749940438e6cffa049/globalmount
2023/06/23 21:36:43.590764 s3.ERROR Unable to access 'makro-bin-acc': unsupported protocol scheme ""
2023/06/23 21:36:43.800459 main.INFO File system has been successfully mounted.

@Reonaydo

I have the same error. I am unable to use geesefs with the CSI driver. The driver requires an endpoint with a scheme, but geesefs is able to mount only without the scheme.

@Reonaydo commented Jul 13, 2023

I've found a strange workaround.
I edited csi-s3-secret and added region: us-east-1 (base64-encoded, of course):

kubectl -n kube-system edit secrets csi-s3-secret

Added
region: dXMtZWFzdC0x

And geesefs started working.
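For reference, the edited Secret ends up looking roughly like this. This is a sketch assuming the standard csi-s3-secret key names; the existing entries are left untouched, and dXMtZWFzdC0x is simply "us-east-1" base64-encoded.

# kubectl -n kube-system edit secrets csi-s3-secret
apiVersion: v1
kind: Secret
metadata:
  name: csi-s3-secret
  namespace: kube-system
type: Opaque
data:
  # existing accessKeyID / secretAccessKey / endpoint entries stay as they are
  region: dXMtZWFzdC0x           # base64 of "us-east-1" -- the added workaround line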

P.S. I installed the CSI driver via Helm, and it's strange that there is no region variable in the Helm template.

morguldir added a commit to morguldir/cluster that referenced this issue Aug 22, 2024