Re-use downloaded pieces in subspace-gateway #3316
Comments
If it is actually small, what is the probability of cache hit vs cache miss? Or in other words, would this actually help or just result in higher CPU/RAM usage?
AFAIK we are talking about sequential requests from the same client rather than optimizing requests in general. @clostao will likely have more information about the actual case.
Then a better approach might be to send a batch request and get a batch response back, with multiple requests to the same piece(s) being de-duplicated. This seems much more efficient and precise than just slapping a cache of an arbitrary size on top.
The purpose of the cache is to avoid re-fetching a piece from the DSN for every requested object within that piece. While grouping requests by piece would optimise retrieval and reduce redundant fetches, there is a very common scenario where a request needs to be duplicated: files in the DSN are stored in IPLD format, meaning a file consists of multiple chunks published on-chain, along with a head node that holds an array of hashes mapping the chunks. With the grouping approach, we would need to fetch the piece containing the head node twice, because until we retrieve the object mapping content from the head node, we cannot determine which chunks make up the file.

That said, I agree that batching requests is a cleaner and worthwhile improvement. However, the cache could still optimise file retrieval, reducing DSN fetches by one in most cases. Regarding the requirements of this cache, the TTL would be minimal (on the order of seconds).
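For concreteness, a simplified sketch of that layout (invented types, not the actual IPLD/auto-drive schema), showing why the chunk mappings are only known after the head node has been fetched and decoded:

```rust
/// Simplified sketch, not the real IPLD types: a file is a head node listing
/// chunk hashes, plus the chunks themselves. Each of these objects has its
/// own object mapping and therefore lives in some piece.
struct FileHeadNode {
    /// Hashes of the chunks that make up the file; only available after the
    /// head node itself has been downloaded and decoded.
    chunk_hashes: Vec<[u8; 32]>,
}

struct FileChunk {
    /// Raw chunk bytes; in this simplified model the chunks are concatenated
    /// in `chunk_hashes` order to rebuild the file.
    data: Vec<u8>,
}
```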
Note that the gateway RPC server currently accepts multiple mappings per request, but doesn't do anything to de-duplicate piece requests. One possible implementation of that de-duplication is sketched below.
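A minimal sketch of such de-duplication, assuming a hypothetical `ObjectMapping` type and a `fetch_piece` stand-in for the real gateway types and DSN retrieval:

```rust
use std::collections::HashMap;

// Hypothetical, simplified types; the real gateway uses the object mapping,
// piece index and piece getter types from the subspace crates.
type PieceIndex = u64;
type Piece = Vec<u8>;

struct ObjectMapping {
    piece_index: PieceIndex,
    offset: u32,
}

/// Download each distinct piece once per batch, then serve every mapping from
/// the already-downloaded pieces. `fetch_piece` stands in for the (expensive)
/// DSN retrieval.
fn fetch_objects(
    mappings: &[ObjectMapping],
    mut fetch_piece: impl FnMut(PieceIndex) -> Piece,
) -> Vec<Vec<u8>> {
    let mut pieces: HashMap<PieceIndex, Piece> = HashMap::new();

    mappings
        .iter()
        .map(|mapping| {
            let piece = pieces
                .entry(mapping.piece_index)
                .or_insert_with(|| fetch_piece(mapping.piece_index));
            // Simplified extraction: real object extraction is more involved
            // (encoded lengths, objects spanning piece boundaries).
            piece[mapping.offset as usize..].to_vec()
        })
        .collect()
}
```

The map here lives only for the duration of one batched request, so memory use is bounded by the pieces that request actually touches.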
I do not see why it would need to be downloaded twice. If you downloaded the piece already, keep it around (within limits) for the duration of the request. You can think of this as a local "cache" that is per-request, but precise and targeted rather than global, possibly limited to a single piece only.
It depends on the number of requests and latency. With any serious usage of the gateway, the piece you retrieved at the beginning will be evicted long before you retrieve the head node and have a chance to reuse it, assuming the cache is actually small.
The download is needed twice because the service that extracts the links from an IPLD node is not the subspace gateway. The workflow would be: the Auto-Files Gateway asks for an object mapping hash, and the subspace gateway downloads the piece containing this object and returns the object mapping content. The Auto-Files Gateway then parses that IPLD node and learns that it needs to fetch X links. Some of these links are very likely to be located in the piece that was already downloaded, but fetching them would mean another request to the subspace-gateway.
Regarding cache size requirements, I see that it would depend on the number of requests per unit of time, but how would latency affect it? If the cache.set is performed when the piece is retrieved from the DSN, the latency of DSN retrieval wouldn't affect the cache size. If we had a TTL-based cache, its size would be
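As a rough illustration of how request rate and TTL bound the working size of a TTL cache (the numbers below are assumptions for the sake of the example, not measurements):

```rust
// Back-of-envelope only: with a TTL cache, the number of live entries is
// roughly the piece download rate times the TTL, independent of DSN latency,
// because entries are inserted when a piece arrives and expire TTL later.
const PIECE_DOWNLOADS_PER_SEC: usize = 50; // assumed load, not a measurement
const TTL_SECS: usize = 5; // "order of seconds", as suggested above
const PIECE_SIZE_BYTES: usize = 1 << 20; // ~1 MiB; the real piece size differs slightly

fn main() {
    let live_entries = PIECE_DOWNLOADS_PER_SEC * TTL_SECS;
    let ram_bytes = live_entries * PIECE_SIZE_BYTES;
    println!("~{live_entries} cached pieces, ~{} MiB of RAM", ram_bytes >> 20);
}
```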
Having deduplication in the batched request is the biggest optimisation we could do right now. To give an example of how these two optimisations (batching object mappings from the same piece vs caching pieces) compare: a file is composed of N object mappings.
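Purely as an illustration of that comparison (assumed counts, not taken from the thread): if the N object mappings of a file span P distinct pieces, and the head-node piece has to be read before the chunk mappings are known, the piece download counts look roughly like this:

```rust
// Illustrative counts only, under the assumptions stated above.

/// No re-use at all: one piece download for the head node, then one per chunk
/// object mapping, even when several mappings share a piece.
fn downloads_without_reuse(n_mappings: usize) -> usize {
    1 + n_mappings
}

/// Per-piece batching (or a short-lived cache): each of the P distinct pieces
/// is downloaded once, plus one extra fetch of the head-node piece, because
/// its links are only known after the first round trip.
fn downloads_with_per_piece_batching(p_distinct_pieces: usize) -> usize {
    p_distinct_pieces + 1
}
```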
Well, an in-memory cache will not help with this either way. What is the rationale for splitting this logic between two applications? If it is expected that users will request files in IPLD format as an entrypoint, then it may make sense to integrate that into the gateway itself, WDYT? Maybe the gateway should not even be in the monorepo in that case; we already expose low-level libraries from which a gateway can be built.
It depends on what data is being requested: if you have multiple requests for random pieces, then the cache will be mostly useless due to its small size and the large number of cache misses.
I don't know exactly what @shamil-gadelshin thinks about this, but for me the separation of these pieces of logic makes sense because I see two different concerns. The reconstruction of object mappings ideally should be the responsibility of this codebase, since the entities of segments, pieces and object mappings are created and managed within this repo. That, together with the premise that object mappings should not be coupled to a specific format (in this case IPLD), leads me to prefer separating them into two different layers.
Okay, I see how it would affect it.
Isn't that what https://github.com/autonomys/subspace/tree/main/shared/subspace-data-retrieval is for?
I see that this crate implements piece fetching and object construction, though other parts like DSN connection handling are not implemented. Anyway, generally what I understand you're suggesting is that instead of having the IPLD-related logic in the Auto-Files Gateway, it would be integrated into the subspace gateway (or its libraries).

The current approach would be faster to implement, since it wouldn't require us to re-implement some IPLD-related tools that we've already built in TypeScript, though it would have some restrictions (or require some workarounds) for the optimisations we can perform. Since the current main objective is to figure out where the bottlenecks are going to be, I'd prefer to continue with the current version even though it means not implementing the optimisation in this issue.
We could eventually split the gateway into a library and a (tiny) binary, or split the HTTP server and DSN setup out into separate crates.

Then if we wanted to implement a generic piece cache on top of the piece provider, it would go in the DSN setup crate. And if we wanted to cache pieces within object reconstruction, we'd add a batch interface there.

I think while we're changing interfaces like this, it will be easiest to have them all in the same monorepo. We can split things out later if we settle on a different design.
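A sketch of what such a batch interface could look like (names and signatures invented for illustration, not the actual subspace-data-retrieval API):

```rust
/// Hypothetical mapping type, standing in for the real object mapping types.
pub struct ObjectMapping {
    pub piece_index: u64,
    pub offset: u32,
}

/// A batch-oriented fetcher lets the implementation group mappings by piece
/// index and download each piece from DSN at most once per call, keeping any
/// piece re-use internal to the object reconstruction layer.
pub trait BatchObjectFetcher {
    type Error;

    async fn fetch_objects(
        &self,
        mappings: Vec<ObjectMapping>,
    ) -> Result<Vec<Vec<u8>>, Self::Error>;
}
```

Callers like the gateway RPC server could then pass all mappings from one request in a single call and let the library decide how to order and re-use piece downloads.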
The rationale for splitting the logic was to move forward with PoC and discover bottlenecks, issues (like this one), and other concerns.
Yes, the exact components/services composition will likely change after testing/benchmarking.
Tiny objects will request the same piece multiple times from subspace-gateway. Each request is a relatively expensive operation because it causes multiple requests to DSN.

Alternatives
The obvious optimization would be to introduce a small in-memory piece cache, but this only works if the cache is big enough.
Another possible optimization is to take a batch of mappings, sort them in piece index order, and re-use downloaded pieces until they are no longer needed.
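A minimal sketch of that second alternative, assuming hypothetical `(piece_index, offset)` mappings and a `fetch_piece` stand-in for the DSN call: sort the batch by piece index and keep only the current piece around, so each distinct piece is downloaded at most once and memory stays bounded.

```rust
type PieceIndex = u64;
type Piece = Vec<u8>;

fn fetch_objects_sorted(
    mut mappings: Vec<(PieceIndex, u32)>,
    mut fetch_piece: impl FnMut(PieceIndex) -> Piece,
) -> Vec<Vec<u8>> {
    // Group work by piece so each piece is fetched once; a real implementation
    // would also restore the caller's original ordering afterwards.
    mappings.sort_by_key(|(piece_index, _)| *piece_index);

    let mut current: Option<(PieceIndex, Piece)> = None;
    mappings
        .into_iter()
        .map(|(piece_index, offset)| {
            // Only hit the DSN when the batch moves on to a new piece.
            if current.as_ref().map(|(index, _)| *index) != Some(piece_index) {
                current = Some((piece_index, fetch_piece(piece_index)));
            }
            let (_, piece) = current.as_ref().expect("set just above");
            // Simplified extraction; real objects can span piece boundaries.
            piece[offset as usize..].to_vec()
        })
        .collect()
}
```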