Discussion: Keep Indexer Data on Disk #1634
We do currently keep the full retention window on disk, but we explicitly avoid storing a blockID -> height mapping because it would produce a heavy random write workload. To support lookups by both height and blockID, we iterate over the full height-based mapping on startup so that we can load the blockID -> height mapping into memory. This motivates setting an upper bound to prevent prolonged load times on startup. There are a couple of tradeoffs here: …
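To make that startup scan concrete, here is a minimal sketch, assuming a hypothetical heightIterator over the on-disk height-keyed records (the type and function names are illustrative, not the actual HyperSDK indexer API):

```go
package indexer

import (
	"encoding/binary"
	"fmt"
)

// blockID mirrors the 32-byte block identifiers used by the HyperSDK; a local
// alias keeps this sketch self-contained.
type blockID [32]byte

// heightIterator abstracts "walk the height-keyed records currently on disk";
// the real indexer would back this with its database iterator.
type heightIterator interface {
	Next() bool                // advance; returns false when exhausted
	Height() uint64            // height key of the current record
	BlockID() (blockID, error) // block ID stored in the current record
}

// rebuildIDIndex scans the on-disk height -> block mapping once at startup and
// materializes the blockID -> height map in memory, so lookups by ID never need
// a second on-disk index (which would mean a random write per accepted block).
func rebuildIDIndex(it heightIterator) (map[blockID]uint64, error) {
	byID := make(map[blockID]uint64)
	for it.Next() {
		id, err := it.BlockID()
		if err != nil {
			return nil, fmt.Errorf("decoding block at height %d: %w", it.Height(), err)
		}
		byID[id] = it.Height()
	}
	return byID, nil
}

// heightKey is the fixed-width big-endian encoding that keeps height keys in
// lexicographic order, so the scan above is a single sequential read.
func heightKey(height uint64) []byte {
	k := make([]byte, 8)
	binary.BigEndian.PutUint64(k, height)
	return k
}
```

Because the height keys are sequential on disk, the scan is one pass over the retention window, which is exactly why the upper bound on the window matters for startup time.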
The big question is where to draw the line between an API served by the HyperSDK inside the node, a sidecar using code provided by the HyperSDK, or an external service built to scale out horizontally?
HyperSDK should provide sufficient tooling for at least 80% of projects by default without needing any additional software, IMHO. Let me know if you disagree. I'll run my benchmarks on NVMe and EBS and will get back to you with the results so we can continue the conversation with data.
I made a benchmark, and the performance is more than adequate. The benchmark runs on 100k blocks, fills them, then queries randomly.

Setup
The transactions are Transfer transactions with a …

Database structure
Blocks are stored as …

Write benchmark
The benchmark calls …

Read benchmark
In the read benchmark, we retrieve:
Overall results:
Problems
Conclusions
For 100k TPS (writing 100 blocks per second), enabling the indexer could be a major slowdown for a validating node. However, for a node dedicated to indexing, even running on an underpowered machine such as a MacBook Air, the limit would be at least 2k blocks per second with 1000 transactions each, around 2 million TPS, which is more than enough.

Possible improvements
Here's the branch with the benchmark: indexer-on-disk
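For anyone who doesn't want to pull the branch, here is roughly the shape of the read benchmark described above, written against a placeholder store interface rather than the actual code from the branch:

```go
package indexer_test

import (
	"encoding/binary"
	"math/rand"
	"testing"
)

// store is a stand-in for the on-disk database used in the benchmark branch;
// only the two calls the benchmark needs are modeled here.
type store interface {
	Put(key, value []byte) error
	Get(key []byte) ([]byte, error)
}

// benchmarkRandomBlockReads mirrors the shape of the read benchmark: pre-fill
// numBlocks height-keyed blocks, then fetch blocks at random heights under the
// benchmark timer.
func benchmarkRandomBlockReads(b *testing.B, db store, numBlocks uint64, blockBytes []byte) {
	key := func(h uint64) []byte {
		k := make([]byte, 8)
		binary.BigEndian.PutUint64(k, h)
		return k
	}
	// Fill phase: write every block once, keyed by height.
	for h := uint64(0); h < numBlocks; h++ {
		if err := db.Put(key(h), blockBytes); err != nil {
			b.Fatal(err)
		}
	}
	r := rand.New(rand.NewSource(0)) // fixed seed so the access pattern is reproducible
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := db.Get(key(r.Uint64() % numBlocks)); err != nil {
			b.Fatal(err)
		}
	}
}
```

A real run would back store with the on-disk database and vary the block size and count to match the setup above.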
From offline conversation: need to test databases of 100+ GB.
Updated benchmark for an 11GB database: Database size on disk: 11GB. Blocks are digested at around 400-500 blocks per second without parallelization enabled. Each block contains 1000 transactions, so that's 400-500k TPS. The read speed has decreased slightly but remains solid—2k+ RPS for whole blocks and 1k+ RPS for individual transactions. Overall, this benchmark processed over 2 million blocks with a rolling window of 1 million blocks.
I propose storing the indexer data on disk. Right now, it's all kept in RAM, then copied to disk, and later restored from disk to RAM during node startup. Speed shouldn't be a concern. Modern SSDs are cheap and abundant, and with the OS page cache on top of ext4, performance should be solid. In production, a simple caching server could be placed in front for added speed.
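As a rough sketch of that read path (the diskStore interface and handler are illustrative, not an existing HyperSDK API), blocks can be served straight from the on-disk store, with the OS page cache absorbing repeated reads of hot blocks and any dedicated caching server layered in front:

```go
package indexer

import (
	"encoding/binary"
	"net/http"
	"strconv"
)

// diskStore is the minimal read interface the handler needs; the real
// implementation would wrap the indexer's on-disk database.
type diskStore interface {
	Get(key []byte) ([]byte, error)
}

// blockHandler serves raw block bytes straight from the on-disk store. There is
// deliberately no application-level cache: repeated reads of recent blocks are
// absorbed by the operating system's page cache, and a dedicated caching proxy
// can sit in front of this endpoint in production.
func blockHandler(db diskStore) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		height, err := strconv.ParseUint(r.URL.Query().Get("height"), 10, 64)
		if err != nil {
			http.Error(w, "invalid height", http.StatusBadRequest)
			return
		}
		key := make([]byte, 8)
		binary.BigEndian.PutUint64(key, height)
		blk, err := db.Get(key)
		if err != nil {
			http.Error(w, "block not found", http.StatusNotFound)
			return
		}
		w.Header().Set("Content-Type", "application/octet-stream")
		_, _ = w.Write(blk)
	})
}
```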
I love having a built-in indexer. Indexers in EVM are a pain, so let's make this a fully functional one—we're already 99% there.
I also suggest removing the maxBlockWindow limit on stored blocks:

const maxBlockWindow uint64 = 1_000_000

Since the data will be on disk, it's no longer necessary. Instead, we can limit the size with an option like --max-indexer-size=4TB.
Indexer nodes shouldn't be validators, and validator nodes shouldn't index. That's how it works in EVM, and I envision the same for HyperSDK.
P.S. The only issue I see is that block history won't be syncable across nodes, but we've never discussed keeping the entire chain history anyway.