
Can't send large folders (no space left on device) #93

Open
raingloom opened this issue Jul 27, 2023 · 7 comments · Fixed by flathub/io.github.jacalz.rymdport#25
Labels
bug (Something isn't working), dependencies (Issues related to project dependencies), os-linux (Issues specific to running on Linux), performance (Improve application performance)

Checklist

  • I have searched the issue tracker for open issues that relate to the same problem, before opening a new one.
  • This issue only relates to a single bug. I will open new issues for any other problems.

Description

Here is the error:

2023/07/27 14:49:43 Fyne error:  Error on sending directory
2023/07/27 14:49:43   Cause: write /tmp/wormhole-william-dir1332995391: no space left on device
2023/07/27 14:49:43   At: github.com/Jacalz/rymdport/v3/internal/transport/bridge/send.go:190

I suspect that, similar to the Python version, you are constructing the zip file in /tmp instead of doing the proper thing and streaming it. As it stands, that doubles the disk usage of every send, which is pretty bad when the file or directory is several gigabytes in size.

It's likely running out of memory rather than disk, because Flatpak mounts /tmp as a tmpfs (RAM-backed) instead of backing it with disk.
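To illustrate the streaming approach suggested above, here is a minimal sketch (not the actual rymdport or wormhole-william code) that zips a directory straight into an arbitrary io.Writer, so nothing is ever staged in /tmp. The zipDirTo name, and the idea of pointing dst at the transfer stream, are assumptions for illustration only:

```go
package main

import (
	"archive/zip"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

// zipDirTo writes a zip archive of root directly to dst, so the archive is
// never materialized on disk. dst could be any io.Writer, e.g. the transfer stream.
func zipDirTo(dst io.Writer, root string) error {
	zw := zip.NewWriter(dst)
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		w, err := zw.Create(filepath.ToSlash(rel))
		if err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		// File contents flow straight into dst; nothing is written to /tmp.
		_, err = io.Copy(w, f)
		return err
	})
	if err != nil {
		return err
	}
	return zw.Close()
}

func main() {
	if len(os.Args) != 2 {
		log.Fatal("usage: zipstream <directory>")
	}
	if err := zipDirTo(os.Stdout, os.Args[1]); err != nil {
		log.Fatal(err)
	}
}
```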

Steps to Reproduce

  1. Find a large folder or create one; it should be bigger than the amount of free RAM or storage you have (a sketch for generating such a folder follows this list).
  2. Try sending it.
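If you need to generate a suitable test folder for step 1, here is a minimal sketch; the file count, sizes, and the big-test-folder name are arbitrary examples, and pseudorandom data is used so the archive cannot shrink much under compression:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"math/rand"
	"os"
	"path/filepath"
)

func main() {
	const files = 4
	const sizePerFile = 10 << 30 // 10 GiB per file; pick a total larger than your free RAM
	dir := "big-test-folder"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	rng := rand.New(rand.NewSource(1))
	for i := 0; i < files; i++ {
		f, err := os.Create(filepath.Join(dir, fmt.Sprintf("blob-%d.bin", i)))
		if err != nil {
			log.Fatal(err)
		}
		// Fill the file with incompressible pseudorandom bytes.
		if _, err := io.CopyN(f, rng, sizePerFile); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}
```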

Screenshots

No response

Rymdport version

3.4.0 (Flatpak)

Operating System

Linux

Operating System Version

guix 182be30

Go Compiler Version

No response

Additional Information

No response

raingloom added the bug label Jul 27, 2023
Jacalz (Owner) commented Jul 27, 2023

I appreciate the report but I don't think there is anything that I can do about this at this point in time.

The zip creation happens inside the wormhole-william dependency and is not a bug in this project. However, I also believe that it is an inherent weakness of the wormhole protocol that you need to know the size of the file (the zip file in the case of directory transfers) before you send it.

I 100% agree with you that it would be a lot better to stream the zip file, but we need to know the size before starting the transfer (I think), and I doubt that streaming is possible in that case. However, I will look into this to see if there is anything that can be done about it.
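One hypothetical way to reconcile streaming with the size-before-transfer requirement would be to run the zip encoder twice: a first pass into a counting writer to learn the exact byte count, then a second pass that streams the same bytes into the connection. The sketch below assumes the directory does not change between passes and that the zip output is deterministic; sendZipped, announceSize, and buildZip are made-up names, and this is not how wormhole-william currently works (it stages a temporary zip file instead):

```go
package sketch

import "io"

// countingWriter discards everything written to it but records how many bytes
// went past. If the input files are unchanged, a second identical pass will
// produce exactly this many bytes.
type countingWriter struct{ n int64 }

func (c *countingWriter) Write(p []byte) (int, error) {
	c.n += int64(len(p))
	return len(p), nil
}

// sendZipped runs buildZip twice: once into a counting writer to learn the
// exact archive size, and once into the real connection, without ever writing
// the archive to the filesystem.
func sendZipped(conn io.Writer, announceSize func(int64) error, buildZip func(io.Writer) error) error {
	var cw countingWriter
	if err := buildZip(&cw); err != nil { // pass 1: count bytes only
		return err
	}
	if err := announceSize(cw.n); err != nil { // tell the peer the size up front
		return err
	}
	return buildZip(conn) // pass 2: produce the same bytes for real
}
```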

FYI: It is also worth noting that the Python version of wormhole is the reference implementation of the protocol. If it doesn't work there, it probably won't work in the Go implementation either.

Jacalz added the enhancement, wontfix, and upstream-issue labels and removed the bug label Jul 27, 2023
Jacalz (Owner) commented Jul 27, 2023

It definitely is problematic with the Flatpak. I will have to try and see if there is any way to work around the issue.

raingloom (Author) commented Jul 28, 2023 via email

Jacalz (Owner) commented Jul 28, 2023

That sounds like a good idea. I had some similar thoughts about how this might be solved. It sounds like we are on the same page.

Jacalz removed the wontfix label Aug 3, 2023
Jacalz added the performance label Aug 23, 2023
Jacalz added this to the v3.5.x milestone Aug 23, 2023
Jacalz added the bug label and removed the enhancement label Sep 2, 2023
Jacalz added a commit to flathub/io.github.jacalz.rymdport that referenced this issue Jan 3, 2024
Jacalz (Owner) commented Jan 3, 2024

Sorry for the long delay. I believe that this should be fixed with flathub/io.github.jacalz.rymdport#25. I'll close the issue for now but feel free to reopen if it doesn't work (note that it will take a few hours for the new build to land in the stable channel of Flathub).

qlyoung commented Jul 11, 2024

I am still encountering this issue on 3.6.0. I am not using the Flatpak; I am using the AUR package, which builds from source: https://aur.archlinux.org/packages/rymdport

The "fix" causes the flatpak to use the host /tmp. At least on my system that is still a memory backed filesystem so nothing changes.

Attempting a 20 GB transfer on my system, I observe rymdport write 20 GB to /tmp. If I remount /tmp with a 60 GB filesystem, the transfer works (I have 32 GB of memory, so the 20 GB happens to fit).
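As a possible mitigation (not something rymdport does today), the staging file could be created somewhere that is normally disk-backed, such as the user cache directory, rather than in /tmp. A minimal sketch in Go, with the stagingFile name and the rymdport-dir-* pattern chosen purely for illustration:

```go
package sketch

import (
	"os"
	"path/filepath"
)

// stagingFile creates the temporary archive in a location that is usually
// disk-backed (the user cache directory, e.g. ~/.cache on Linux) instead of
// os.TempDir(), which often resolves to a RAM-backed /tmp.
func stagingFile() (*os.File, error) {
	dir, err := os.UserCacheDir()
	if err != nil {
		// Fall back to the default temp directory if no cache dir is known.
		return os.CreateTemp("", "rymdport-dir-*")
	}
	dir = filepath.Join(dir, "rymdport")
	if err := os.MkdirAll(dir, 0o700); err != nil {
		return os.CreateTemp("", "rymdport-dir-*")
	}
	return os.CreateTemp(dir, "rymdport-dir-*")
}
```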

Aside from that, during the transfer rymdport's own memory usage steadily grows:

(screenshot: rymdport's memory usage climbing over the course of the transfer)

I guess that's a separate issue, though. Filed at #165

Jacalz (Owner) commented Jul 11, 2024

Thanks for bringing this to my attention. I'll reopen this issue and look into it again. The intermediary zip file write is unfortunately part of how https://github.com/psanford/wormhole-william handles transfers, so I might have to come up with a better solution there.

Jacalz reopened this Jul 11, 2024
Jacalz added the os-linux and dependencies labels and removed the upstream-issue label Jul 11, 2024
Jacalz modified the milestones: v3.5.x, v3.7.0 Jul 11, 2024