HTTP GET/PUT too fast after each other for Hubic? #615

Open
LubosKolouch opened this issue Dec 14, 2020 · 0 comments

I'm trying to back up to Hubic. The upload fails regularly with a 503 status code on certain chunks, and the retries do not help.

I observed that once a chunk starts getting 503, it keeps getting 503. Other chunks are uploaded without problems at the same time.

What is interesting is that:

  • it is always the same chunk causing the problem; when I re-run the backup, it fails at the same file
  • it has something to do with the total number of requests before or after the problematic chunk; when I set up the ignore patterns so that the total number of files "around" the problematic file is smaller, the chunk gets uploaded correctly
  • the number of threads seems to have no influence; the problematic chunk always returns 503 unless I limit the total number of new (not yet uploaded) files "around" the problematic chunk
  • it makes no difference whether the remaining files in the batch are new (i.e. not uploaded yet) or already uploaded but with the new revision not yet saved (the only difference is that they are processed at 50 MB/s instead of, for example, 1.4 MB/s)
  • limit-rate has no effect on this problem
  • once I limit the number of files so that the chunk gets uploaded and the revision saved, it is not a problem anymore

So in practice, to make an initial backup of a larger new folder, I have to bisect to find the chunk where the upload stops (repeated 503), limit the files around that chunk using filters, let it upload and save the revision, and then continue to the next failing chunk.
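For illustration, this is roughly the kind of filters file I end up with to narrow a run down to the subtree around the failing chunk. The paths are made up, and I'm just using my understanding of duplicacy's +/- include/exclude patterns (parent directories of an included path get their own + entries, and a final -* excludes everything else):

```
# .duplicacy/filters -- hypothetical example, real paths differ
+photos/
+photos/2020/
+photos/2020/*
-*
```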

My guess (from watching the debug lines) is that duplicacy fires the requests too fast, and on certain occasions this exceeds Hubic's (and potentially other services') threshold, after which it keeps returning 503 for that chunk indefinitely.
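If that guess is right, then backing off between retries instead of retrying at full speed should eventually get past the threshold. The following is only a rough sketch of exponential backoff with jitter on 503 responses, not duplicacy's actual retry logic; uploadChunk is a made-up placeholder for whatever function performs the PUT:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// uploadChunk is a hypothetical placeholder for the function that PUTs one
// chunk to the storage backend and reports the HTTP status code it received.
func uploadChunk(chunk []byte) (int, error) {
	// ... the real upload would happen here ...
	return http.StatusServiceUnavailable, nil
}

// uploadWithBackoff retries a chunk upload on 503, doubling the wait between
// attempts and adding random jitter so parallel threads do not retry in lockstep.
func uploadWithBackoff(chunk []byte, maxRetries int) error {
	delay := time.Second
	for attempt := 0; attempt <= maxRetries; attempt++ {
		status, err := uploadChunk(chunk)
		if err != nil {
			return err
		}
		if status < 400 {
			return nil // chunk uploaded successfully
		}
		if status != http.StatusServiceUnavailable {
			return fmt.Errorf("upload failed with status %d", status)
		}
		// 503: wait before the next attempt instead of hammering the server.
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("chunk still returning 503 after all retries")
}

func main() {
	if err := uploadWithBackoff([]byte("example chunk"), 3); err != nil {
		fmt.Println(err)
	}
}
```

The jitter matters because multiple upload threads hitting the same limit would otherwise retry in lockstep and trip it again.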

I hope the description makes some sense... has anyone had the same problem and fixed it?
