Virtual image size is larger than the reported available storage #3159
Comments
Definitely weird - the error indicates that only 12Gi is available in the volume (even though you asked for 120). You could test this by creating a 120Gi PVC and a pod mounting it:

```
bash-5.1# stat /pvcmountpath/ -fc %a
907
bash-5.1# stat /pvcmountpath/ -fc %f
974
```

(%a - available (what we care about), %f - total free)
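A minimal sketch of that test, reusing the storage class from this report (datavg-thin-pool); the PVC/pod names, container image, and mount path are placeholders:

```bash
# Create a 120Gi PVC plus a pod that mounts it, then check the reported free space.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: size-test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: datavg-thin-pool
  resources:
    requests:
      storage: 120Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: size-test-pod
spec:
  containers:
  - name: shell
    image: quay.io/fedora/fedora:latest
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /pvcmountpath
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: size-test-pvc
EOF

# Available blocks, total free blocks, and block size; multiply to get bytes.
kubectl exec size-test-pod -- stat -fc '%a %f %S' /pvcmountpath
```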
Hi @akalenyu, it looks like the size of my PVC is only 4096 bytes?
Actually, the size of your PVC is around 120Gi -
Maybe this case is similar to the one described here https://issues.redhat.com/browse/CNV-36769? If a first upload attempt failed for some unrelated reason (maybe the upload pod was force-deleted), then in subsequent retries the original img.disk will be there occupying space and preventing the upload from succeeding, as @akalenyu suggested.
I made some progress with this issue by using --volume-mode=FileSystem. This option works.
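For reference, a sketch of that workaround using the upload command from the original report; the exact casing of the --volume-mode value may need adjusting for your virtctl version:

```bash
# Same upload as in the original report, but explicitly requesting a filesystem-mode PVC.
kubectl virt image-upload pvc win-2022-std-iso \
  --size=120Gi \
  --volume-mode=filesystem \
  --image-path=win22-std.iso \
  --storage-class=datavg-thin-pool \
  --uploadproxy-url=https://10.49.172.185:31876 \
  --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
```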
Ah okay, if this is block, you could repeat the nginx experiment but instead use
So I am wondering, which version of the virtctl plugin do you have? I see you are creating a
Hi @awels
I tried another upload with virtio-win.iso, which is about 600MB. Both options, block and FileSystem PVC, work.

```
598.45 MiB / 598.45 MiB [========================================================================================================================================================] 100.00% 16s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
```
Interesting, can you try Alex's suggestion of using
Interestingly enough, now it works with the Windows ISO as well, which didn't work before. Unfortunately, I cannot reproduce the issue now.

```
4.67 GiB / 4.67 GiB [==========================================================================================================================================================] 100.00% 2m11s
Uploading data completed successfully, waiting for processing to complete, you can hit ctrl-c without interrupting the progress
```
Hmm, maybe sometimes the LINSTOR CSI driver messes up the size calculation? IIRC you were doing
I've tried multiple sizes before: 40Gi, 64Gi, 120Gi. None of them worked. I will try out Portworx at a later time to see if it is more stable.
@kvaps Do you have any insight as to what might be happening with LINSTOR here?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
Hey, did you get a chance to try this with a different provisioner?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten
Rotten issues close after 30d of inactivity. /close
@kubevirt-bot: Closing this issue.
/reopen
The issue still persists.
@kvaps: Reopened this issue.
Example error for
Inside the container I can see:
1073750016 bytes - it is even more than 1Gi
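For context, one way to see that number from inside the pod; the device path below is the path CDI conventionally exposes for block-mode volumes and is an assumption here:

```bash
# Size of the block device as reported by the kernel (device path is an assumption).
blockdev --getsize64 /dev/cdi-block-volume
# -> 1073750016

# Compared with an exact 1Gi request, the device is 8192 bytes larger than asked for.
echo $(( 1073750016 - 1024*1024*1024 ))   # -> 8192
```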
I just reported this issue to LINSTOR: LINBIT/linstor-server#421. But does this check really make sense? When I create a new volume, I expect that it will be larger anyway.
Maybe we should add
The main reason we have the fsOverhead is that filesystems themselves take up space on the block device. For a block device we don't have this overhead. The way we get the available space for a block device is
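Roughly, the distinction looks like this; the paths are placeholders and the snippet only illustrates the idea, it is not CDI's actual code:

```bash
# Filesystem-mode PVC: usable space is what the mounted filesystem reports,
# reduced by the configurable fsOverhead fraction (CDI's default is 0.055, i.e. 5.5%).
avail_blocks=$(stat -fc '%a' /data)     # /data is a placeholder mount path
block_size=$(stat -fc '%S' /data)
echo $(( avail_blocks * block_size ))

# Block-mode PVC: there is no filesystem, so the usable space is simply
# the size of the device itself (device path is an assumption).
blockdev --getsize64 /dev/cdi-block-volume
```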
@awels it seems that there is a bug: we should check whether the actual device size is smaller than reported, but not larger, because a smaller image can be placed on a larger drive but not vice versa.
In my case blockdev reports the same size as qemu-img:
And this size is larger than
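For completeness, the comparison being described; the device path is again an assumption:

```bash
# Both report the same value for a block-mode PVC backed by LINSTOR/DRBD.
blockdev --getsize64 /dev/cdi-block-volume
qemu-img info /dev/cdi-block-volume | grep 'virtual size'
```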
It's up for consideration in #3458.
@awels LINSTOR creates a block device a bit larger than the size requested in the PVC. I think this is an issue for CDI. If I understand this correctly, there is no easy way to prepare a volume with exactly the requested size using LINSTOR and DRBD. @ghernadi please correct me if I'm wrong about this statement ^^ So for now I don't understand on which side this issue should be fixed: CDI or LINSTOR?
Never mind, I just hadn't increased it enough.
This reminds me of a similar issue we had with our Proxmox plugin, where Proxmox also wants the exact same size when moving an existing Proxmox volume into a DRBD resource. Feel free to use this property for such migration volumes, but please make sure to DELETE THIS PROPERTY once the migration is finished. We had already tried in the past to automatically manage DRBD's

Alternative solution: Tell LINSTOR to configure DRBD with external metadata. This can of course also stay in production. This alternative solution works simply because, if the user requests a 1GB volume, LINSTOR will create 2 devices. Assuming LVM with default settings as backing storage, one of the devices will have exactly 1GB (which will also be reported by DRBD and LINSTOR as "usable size"), and the other is a 4MB device, from which DRBD only uses a few KiB for its own metadata. So the "a bit more space" (i.e. the unused 3.something MiB) is now "trapped" in the external metadata device and therefore cannot be part of the usable space.
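A sketch of that external-metadata alternative, assuming a LINSTOR resource group backs the Kubernetes storage class; the resource-group and storage-pool names are placeholders, and the property name is, to the best of my knowledge, the one LINSTOR uses for placing DRBD metadata in a separate pool:

```bash
# Place DRBD metadata in a separate (external) storage pool so the data device
# keeps exactly the requested size ("rg-kubernetes" and "meta-pool" are placeholders).
linstor resource-group set-property rg-kubernetes StorPoolNameDrbdMeta meta-pool

# Volumes spawned from this resource group now get two backing devices:
# one of exactly the requested size, plus a small one for DRBD metadata.
linstor resource-group spawn-resources rg-kubernetes test-volume 1G
```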
@ghernadi many thanks for such a detailed comment and for sharing your experience 🙏 It's very interesting to learn about this option. But I think we can't teach CDI to add DRBD-specific options, as it works purely with Kubernetes PVCs. Besides, I am sure this issue also affects other storage providers that base their logic on LVM and ZFS (e.g. topolvm and democratic-csi).

I am proposing a fix to CDI that takes requestImageSize into account only for filesystem volumes. Here is a PR: #3461
@awels could you share your thoughts on this, please?
Okay, so I think I see what is going on. Let's take the 1Gi example and follow the flow of the

The reason we do this check is for 'shared' filesystem storage like hpp or nfs-csi, where the reported space is the entire disk, and we don't want to use it all. We want to use exactly what was requested, so we pick the minimum of the two. Aka we discovered the block device > request size, so we picked the requested size as the 'available' space. Now this value is stored as the available space for the virtual disk.

Now in the validation routine, we run

So I think the most correct fix is to move the code that makes the 'available' space the minimum inside the else that is associated with filesystem volumes. This way the targetSize returned is the value found from the call to blockdev.
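Putting concrete numbers on that flow with the 1Gi example (just the arithmetic being described, not CDI's actual code):

```bash
requested=$(( 1024 * 1024 * 1024 ))   # 1073741824, the requested image size
device=1073750016                     # what blockdev --getsize64 reports for the PVC

# The 'available' space is taken as the minimum of the two, i.e. the request.
available=$(( requested < device ? requested : device ))
echo "$available"                     # 1073741824

# Validation then compares the virtual size found on the block device (which is
# the full device size) against that stored 'available' value, so it fails even
# though the device is large enough.
echo $(( device > available ))        # 1 -> "larger than the reported available storage"
```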
Yeah, my bad, sorry, I just rebased the PR.
What happened:
I tried to upload an .iso image via CDI to a PVC. The ISO image is nearly 5GB. It always throws an error saying the virtual image size is larger than the reported available storage. I have tried multiple PVC sizes (12Gi, 64Gi, 120Gi), but it is not working.
What you expected to happen:
The ISO image should be successfully uploaded to the PVC.
How to reproduce it (as minimally and precisely as possible):
The following command:

```
kubectl virt image-upload pvc win-2022-std-iso --size=120Gi --image-path=win22-std.iso --storage-class=datavg-thin-pool --uploadproxy-url=https://10.49.172.185:31876 --insecure --wait-secs=60 --access-mode=ReadWriteOnce --force-bind
PVC default/win-2022-std-iso not found
PersistentVolumeClaim default/win-2022-std-iso created
Waiting for PVC win-2022-std-iso upload pod to be ready...
Pod now ready
Uploading data to https://10.49.172.185:31876
4.67 GiB / 4.67 GiB [==========================================================================================================================================================] 100.00% 2m13s
unexpected return value 400, Saving stream failed: Virtual image size 12886302720 is larger than the reported available storage 12884901888. A larger PVC is required.
```
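For reference, the numbers in that error correspond to a 12Gi volume rather than the requested 120Gi:

```bash
echo $(( 12 * 1024 * 1024 * 1024 ))     # 12884901888, exactly 12 GiB (the reported available storage)
echo $(( 12886302720 - 12884901888 ))   # 1400832, the image's virtual size exceeds it by ~1.3 MiB
```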
Environment:
- CDI version (use `kubectl get deployments cdi-deployment -o yaml`): 1.58.3
- Kubernetes version (use `kubectl version`): 1.29.0
- Kernel (`uname -a`): 5.14.0-362.24.1.el9_3.x86_64