Unable to replicate data using csi-driver-nfs #740
Comments
what's the result of
@andyzhangx Thank you.

```console
root@deployment-nfs-6bd697cb78-bcwfj:/# mount | grep nfs
root@deployment-nfs-6bd697cb78-bcwfj:/# df -h
```
that means your NFS mount is broken. Why is it mounted to
As per the link https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/csi-debug.md#case2-volume-mountunmount-failed, the output is empty. I am also getting empty output in the pod with the command below. Please find below the steps I have followed for the NFS mount (Steps 1–5 were provided as attachments).
then what are the CSI driver logs on that node?
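For reference, node-driver logs like the one attached below are typically collected as described in the linked csi-debug guide; the pod name here (`csi-nfs-node-cvgbss`) is the one that appears later in this thread, so substitute your own:

```console
# find the csi-nfs-node pod scheduled on the affected node
kubectl get pod -n kube-system -o wide | grep csi-nfs-node
# capture the nfs container's logs from that pod
kubectl logs csi-nfs-node-cvgbss -c nfs -n kube-system > csi-nfs-node.log
```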
csi-nfs-node.log

Please see the attached log file above for more details.
Latest update: NFS mount inside the driver:

```console
kubectl exec -it csi-nfs-node-cvgbss -n kube-system -c nfs -- mount | grep nfs
nfs-server.default.svc.cluster.local:/pvc-79d508c5-6e10-4c7c-a982-46c994a61142 on /var/snap/microk8s/common/var/lib/kubelet/pods/2beda551-3e7f-4907-baf4-5e6bb93815a3/volumes/kubernetes.io~csi/pvc-79d508c5-6e10-4c7c-a982-46c994a61142/mount type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.31.1.220,local_lock=none,addr=10.152.183.183)
```
from csi driver logs, the nfs mount succeeded:
@andyzhangx yes, from the CSI driver logs the NFS mount succeeded.
Workaround, as explained in kubernetes/minikube#3417: by adding nfs-server.default.svc.cluster.local and the cluster IP address of the nfs-server Service to /etc/hosts on each node, I am able to mount the NFS share in pods.
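Concretely, the workaround amounts to an /etc/hosts entry on each node. The IP below is the nfs-server Service cluster IP seen in this issue's mount output (`addr=10.152.183.183`); look up your own with `kubectl get svc nfs-server`. This helps because the NFS mount is performed by the node's kernel, and the node host usually cannot resolve in-cluster `*.svc.cluster.local` names.

```
# /etc/hosts on the node (IP is this cluster's nfs-server Service IP; yours will differ)
10.152.183.183   nfs-server.default.svc.cluster.local
```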
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
What happened:
We have followed the steps described in the CSI driver example.
Ref: https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/example
I changed the number of replicas in the deployment (https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/deployment.yaml) from 1 to 2.
When accessing the NFS shared folder from each pod, each pod gets its own freshly mounted NFS folder with no shared or replicated data.
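For clarity, the change is only the `replicas` field in that manifest; a sketch of the relevant excerpt (the Deployment name matches the pod names seen in this issue):

```yaml
# excerpt of deploy/example/deployment.yaml (other fields unchanged)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nfs
spec:
  replicas: 2   # changed from 1
```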
What you expected to happen:
We expect the shared data to be mounted on each replica of the pod.
If I create files in one pod, they should be visible in the other pod through the NFS server.
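A quick way to check that expectation, assuming the example's `/mnt/nfs` mount path and hypothetical pod names:

```console
# write a file from the first replica
kubectl exec deployment-nfs-aaaaa -- sh -c 'echo hello > /mnt/nfs/testfile'
# read it from the second replica; it should show the same content if the share is truly shared
kubectl exec deployment-nfs-bbbbb -- cat /mnt/nfs/testfile
```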
How to reproduce it:
Change the number of replicas in the example deployment (https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/deployment.yaml) from 1 to 2.
Anything else we need to know?:
Environment:
- Kubernetes version (`kubectl version`): v1.29.7
- OS (`cat /etc/os-release`):

```
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
```

- Kernel (`uname -a`): 6.1.0-23-cloud-amd64