We must add benchmarking code to string_set_test to evaluate the performance of Grow.
The benchmark must evaluate only Grow, which is not a simple thing to do. To do that, add, say, 2^15 items without measuring, and then use one additional Add to trigger the Grow event (we grow when we cross 2^k). Only that additional Add must be under measurement. See BM_AddMany for an example of how we control timing with PauseTiming/ResumeTiming.
Once the benchmark exists you can run it with `./string_set_test --bench --benchmark_filter=.*BM_Grow` - this should be done in opt mode.
Now it is possible to improve the implementation of Grow: introduce a state machine that takes a batch of buckets and iteratively reshards all the elements in them (GrowBatch). Grow should iterate over all the old buckets and call GrowBatch on each batch.
I would expect a ~50% CPU reduction for large sets.
DenseSet::Grow suffers from slow memory latency and has good potential for optimization.
Introduce a resharding algorithm that moves items to new buckets after the resize, but does it in chunks of kBatchSize.
Similar to #3863