Also support complex use cases where the user supplies the data themselves:
The user can supply data synchronously (e.g. the data is already in memory)
The user must supply data asynchronously (e.g. the data arrives via HTTP calls to some other API)
The user has no control over when (if ever) data arrives (e.g. you are a user-space filesystem and have no idea when the next fwrite() will happen)
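One API shape that covers all three cases is push-based submission: the uploader never pulls, the user hands over a chunk whenever one becomes available. A minimal sketch (the `Uploader` type and `submit_chunk`/`finish` names are hypothetical, and the worker just counts bytes in place of a real upload):

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical push-based uploader: the caller hands over chunks whenever
// they become available, so the same API serves synchronous in-memory
// copies, async HTTP arrivals, and unpredictable fwrite()-driven writes.
struct Uploader {
    tx: mpsc::Sender<Vec<u8>>,
    worker: thread::JoinHandle<usize>,
}

impl Uploader {
    fn new() -> Self {
        let (tx, rx) = mpsc::channel::<Vec<u8>>();
        // Worker stands in for the real upload path; here it just counts bytes.
        let worker =
            thread::spawn(move || rx.iter().map(|chunk| chunk.len()).sum::<usize>());
        Uploader { tx, worker }
    }

    // Called by the user at any time, from any of the three scenarios above.
    fn submit_chunk(&self, data: Vec<u8>) {
        self.tx.send(data).expect("uploader worker gone");
    }

    // Dropping the sender closes the channel; join returns total bytes "uploaded".
    fn finish(self) -> usize {
        drop(self.tx);
        self.worker.join().expect("worker panicked")
    }
}

fn main() {
    let up = Uploader::new();
    up.submit_chunk(vec![0u8; 1024]); // data already in memory
    up.submit_chunk(vec![1u8; 512]); // could equally arrive minutes later via HTTP
    println!("{}", up.finish()); // prints 1536
}
```

Because nothing here requires the caller to produce the next chunk by any deadline, the "no control over when data arrives" case falls out for free.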
The uploader must always be mindful of flow control and memory limits:
It must accept multiple chunks to work on concurrently, for performance.
But it must be able to cap that number, so that memory doesn't fill with buffers faster than they can be uploaded.
We might also want users to grab their buffers from a pool.
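Both ideas above can be sketched with std channels alone: a bounded `sync_channel` caps the number of in-flight chunks (the producer blocks once the cap is hit), and a second channel acts as a buffer pool that the consumer recycles into. Names and sizes here are illustrative, not a real API:

```rust
use std::sync::mpsc;
use std::thread;

// Flow control via a bounded queue: a sync_channel blocks the producer once
// MAX_IN_FLIGHT chunks are queued, so memory can't fill faster than chunks
// are "uploaded". The pool channel returns drained buffers for reuse.
const MAX_IN_FLIGHT: usize = 4;

fn main() {
    let (chunk_tx, chunk_rx) = mpsc::sync_channel::<Vec<u8>>(MAX_IN_FLIGHT);
    let (pool_tx, pool_rx) = mpsc::channel::<Vec<u8>>();

    // Pre-populate the pool so producers grab buffers instead of allocating.
    for _ in 0..MAX_IN_FLIGHT {
        pool_tx.send(Vec::with_capacity(8 * 1024)).unwrap();
    }

    let uploader = thread::spawn(move || {
        let mut total = 0usize;
        for mut buf in chunk_rx.iter() {
            total += buf.len(); // stand-in for the actual upload of this chunk
            buf.clear();
            let _ = pool_tx.send(buf); // recycle the buffer into the pool
        }
        total
    });

    for i in 0..10u8 {
        let mut buf = pool_rx.recv().unwrap(); // blocks if the pool is empty
        buf.extend_from_slice(&[i; 100]);
        chunk_tx.send(buf).unwrap(); // blocks once MAX_IN_FLIGHT are queued
    }
    drop(chunk_tx); // closes the channel; the uploader drains and exits
    println!("{}", uploader.join().unwrap()); // prints 1000
}
```

The pool and the bounded queue enforce the same limit from two directions: the producer can hold at most `MAX_IN_FLIGHT` buffers total, whether they are being filled, queued, or being uploaded.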
aws-c-s3 had to evolve its API several times to account for these use cases. Rust's async APIs may already address many of these design challenges, though, so this might not be such a big deal.
Other thoughts:
Should Uploader encourage/force users to submit chunks along part boundaries?
Allow users to provide per-part checksums as they upload?
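If chunks must land on part boundaries, the two questions above combine naturally into a part-oriented submit call with an optional caller-supplied checksum. This is a sketch only: `PartUploader`, `submit_part`, and `toy_checksum` are hypothetical, and the checksum is a toy stand-in, not the CRC32C/SHA variants S3 actually supports.

```rust
// Hypothetical part-aligned API: the user submits one part at a time and may
// attach a checksum they computed while producing the data, sparing the
// uploader a second pass over the buffer.
struct PartUploader {
    part_size: usize,
    parts: Vec<(u32, u64)>, // (part_number, checksum) recorded per part
}

impl PartUploader {
    fn new(part_size: usize) -> Self {
        PartUploader { part_size, parts: Vec::new() }
    }

    // Rejects chunks larger than a part. (A real API would also need a way
    // to mark the short final part, e.g. an `is_last` flag or a close() call.)
    fn submit_part(
        &mut self,
        part_number: u32,
        data: &[u8],
        checksum: Option<u64>,
    ) -> Result<(), String> {
        if data.len() > self.part_size {
            return Err(format!("chunk of {} bytes exceeds part size", data.len()));
        }
        // If the caller didn't provide a checksum, compute one ourselves.
        let sum = checksum.unwrap_or_else(|| toy_checksum(data));
        self.parts.push((part_number, sum));
        Ok(())
    }
}

// Toy checksum for illustration only.
fn toy_checksum(data: &[u8]) -> u64 {
    data.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64))
}

fn main() {
    let mut up = PartUploader::new(8 * 1024 * 1024);
    let part = vec![7u8; 1024];
    // The user computed the checksum while generating the data, so the
    // uploader doesn't have to re-scan the buffer.
    up.submit_part(1, &part, Some(toy_checksum(&part))).unwrap();
    println!("{}", up.parts.len()); // prints 1
}
```

An "encourage rather than force" design could instead accept arbitrary chunks and re-buffer them into parts internally, at the cost of an extra copy when the caller doesn't align.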