Manual chunk upload for GCS #2480
Hello! Thank you for the work on this SDK; it has all been working perfectly so far.

We are trying to manually upload chunks the following way: we use `Google.Cloud.Storage.V1` to initialize a chunked upload of a large file (multiple GBs). For this we use `client.CreateObjectUploader` and then `await uploader.InitiateSessionAsync`, which gives us a session URI. Every time the client uploads a chunk of data, we send it to that session URI.

Unfortunately, every time a new chunk is uploaded, it replaces the one before it. We are not sure what we are missing, as we have tried a few approaches.

Can you please let us know if you have any idea of what we are doing wrong?
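For concreteness, the initiation step described above might look roughly like this. This is a minimal sketch; the bucket, object, and file names are illustrative assumptions, not values from the issue:

```csharp
using System;
using System.IO;
using Google.Cloud.Storage.V1;

// Minimal sketch of starting a resumable session with the library.
// "my-bucket" and "large-file.bin" are illustrative, not from the issue.
var client = await StorageClient.CreateAsync();
using var stream = File.OpenRead("large-file.bin");
var uploader = client.CreateObjectUploader(
    "my-bucket", "large-file.bin", "application/octet-stream", stream);

// InitiateSessionAsync starts the resumable upload session and returns the
// session URI that subsequent chunk uploads are sent against.
Uri sessionUri = await uploader.InitiateSessionAsync();
```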
Comments
I'll look at this when I get a chance, but it's unlikely to be today, I'm afraid. It's been a while since I've looked at "manual" resumable uploads so I'll need to get all the context again.
@jskeet
If by "soon" you mean you need to do this today, before I have a chance to look into this, then yes, you could look into doing it manually. But I don't remember enough about the protocol involved to give any more advice. |
I need to have this implemented by end of week. I'll give it a go :) Thanks for the prompt response!
Just to check your use case, am I right in saying that you basically want to create a new object, upload lots of chunks separately (potentially from different servers?) and then finalize the object later? (I don't believe you can leave it in a perpetual "keep appending" state, but I could be wrong.)

One option to consider, by the way, is the "compose" operation - that's not directly exposed in StorageClient, but if you use the Service property you can get at the underlying StorageService from Google.Apis.Storage.v1. That way you could upload each chunk as a separate object, and compose them all at the end. I wouldn't suggest that as a permanent solution, but it might be a simple temporary workaround until we've had time to get the write-multiple-chunks-to-a-single-object option working.
@jskeet Yes, I want to upload in chunks the following way:

client frontend --- send chunk 1 ---> our backend (.NET Core) --- send chunk 1 ---> Google Cloud Storage

The reason is that the file we'd like to upload is very large, and our backend would hit OutOfMemory exceptions if the client uploaded the entire file at once. The "compose" solution you are proposing would work as a workaround :) I'll do this for now.
On a mobile, so briefly - you call `Execute` or `ExecuteAsync` on the request. I suspect that `client.Service.Objects.Compose(...)` is a simpler way to get a request, too - but what you've got should work.
+1 to what @jskeet has said. You can use https://cloud.google.com/storage/docs/composing-objects#create-composite-client-libraries as a reference.
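Putting those suggestions together, a sketch of the workaround might look like the following. This is a minimal sketch, not code from the thread: the bucket name, final object name, `chunk-{i}` naming scheme, and reading chunks from local files are all illustrative assumptions.

```csharp
using System.IO;
using System.Linq;
using Google.Apis.Storage.v1.Data;
using Google.Cloud.Storage.V1;

var client = await StorageClient.CreateAsync();

// 1. Upload each chunk as its own object as it arrives. Chunks are read from
//    local files purely for illustration; in the real backend each chunk
//    would come from an incoming request stream.
const int chunkCount = 4;
for (int i = 0; i < chunkCount; i++)
{
    using var chunkStream = File.OpenRead($"chunk-{i}.bin");
    await client.UploadObjectAsync(
        "my-bucket", $"chunk-{i}", "application/octet-stream", chunkStream);
}

// 2. Compose the chunk objects into the final object. client.Service exposes
//    the underlying StorageService from Google.Apis.Storage.v1.
var body = new ComposeRequest
{
    SourceObjects = Enumerable.Range(0, chunkCount)
        .Select(i => new ComposeRequest.SourceObjectsData { Name = $"chunk-{i}" })
        .ToList()
};
await client.Service.Objects.Compose(body, "my-bucket", "final-object").ExecuteAsync();
```

Note that a single compose call accepts at most 32 source objects, and after a successful compose the intermediate chunk objects can be deleted (e.g. with `client.DeleteObjectAsync`), since the composite object holds the data.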
Okay, I've had a look now, and the Upload code always assumes it can tell the server that it's "done" when it reaches the end of the stream. Changing that to allow "upload but don't finalize" may be a significant amount of work - I'm not sure yet. Just working out the best API surface for it is at least somewhat challenging. I'll consult with colleagues next week about how we prioritize this feature request - in the meantime, I hope the workaround is working for you.
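As background on what "don't finalize" means at the protocol level: each chunk of a resumable upload is PUT to the session URI with a Content-Range header, where intermediate chunks give `*` as the total size and only the final chunk states the real total. The sketch below is a hand-rolled illustration that bypasses the library entirely; all names are assumptions. It also hints at a plausible cause of the overwrite symptom reported in this issue: if every chunk is sent with a range starting at byte 0, each PUT rewrites the object from the beginning.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

// Hand-rolled sketch of uploading one chunk of a resumable upload.
// sessionUri comes from InitiateSessionAsync, offset is the running byte
// offset across all chunks sent so far, and totalSize is passed only for
// the final chunk.
static async Task<HttpResponseMessage> UploadChunkAsync(
    HttpClient http, Uri sessionUri, byte[] chunk, long offset, long? totalSize)
{
    var content = new ByteArrayContent(chunk);

    // Intermediate chunks: "Content-Range: bytes {first}-{last}/*" (every
    // chunk except the last must be a multiple of 256 KiB). Final chunk:
    // "Content-Range: bytes {first}-{last}/{total}", which finalizes the
    // object. Sending every chunk with first = 0 would overwrite the
    // previously uploaded data.
    long last = offset + chunk.Length - 1;
    content.Headers.ContentRange = totalSize is long total
        ? new ContentRangeHeaderValue(offset, last, total)
        : new ContentRangeHeaderValue(offset, last);

    // The server responds 308 (Resume Incomplete) to intermediate chunks and
    // 200/201 once the final chunk completes the object.
    return await http.PutAsync(sessionUri, content);
}
```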
Quick update: we've spoken with the GCS team, and while there's only one other client library that currently exposes this functionality (partial resumable upload), it is a feature that the GCS team would like to see. No promises on an implementation timeframe, but we'll include it in our planning considerations.