3d59efb Don't run if the snapshots weren't found. Can't work with a nil pointer anyway.
0d13767 Drafted some tests for new snapshots-backed job splitting.
77f9dff First draft of the splitter rework. Next:
    * update splitter_test and separate the two stages: partials expectations from request-ranges expectations.
    * make sure SplitWork feeds into the right things afterwards:
        * squasher fed from partialsPresent
        * notification sent to the end user with partialsPresent.Merged()
        * reqChunk recomputed based on partialsMissing.MergedChunked()
    * finish computeRequests(subreqSplit)
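The sketch below is a hedged illustration of that split only: blockRange, workUnit, partialsPresent/partialsMissing and computeRequests are hypothetical stand-ins for the identifiers named in the commit message, showing how missing partial ranges could be chunked into subrequests while present ones go to the squasher.

    package main

    import "fmt"

    // blockRange is a half-open [start, end) range of blocks.
    type blockRange struct{ start, end uint64 }

    // workUnit separates the ranges already covered by partial stores
    // (fed to the squasher) from the ranges still missing (turned into
    // subrequests).
    type workUnit struct {
        partialsPresent []blockRange
        partialsMissing []blockRange
    }

    // computeRequests chunks the missing ranges into subrequests of at
    // most subreqSplit blocks each.
    func (w *workUnit) computeRequests(subreqSplit uint64) []blockRange {
        var reqs []blockRange
        for _, r := range w.partialsMissing {
            for start := r.start; start < r.end; start += subreqSplit {
                end := start + subreqSplit
                if end > r.end {
                    end = r.end
                }
                reqs = append(reqs, blockRange{start, end})
            }
        }
        return reqs
    }

    func main() {
        w := &workUnit{
            partialsPresent: []blockRange{{0, 10_000}},
            partialsMissing: []blockRange{{10_000, 35_000}},
        }
        fmt.Println(w.computeRequests(10_000)) // [{10000 20000} {20000 30000} {30000 35000}]
    }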
264299e Fix a risk of corruption for stores by separating the generation of the content to be written from the upload, which is where the latency occurred.
d8dffec No logic change. Moved code around, and some renames.
a30525a Now fetch the stores. Not yet using the new data to do anything.
2ffb44d Now send the proper progress messages, based on partialsPresent too. Reverted to storeSplit in the SplitWork job. Split the batchRequests() function out into its own concern. Save partial stores under better conditions. Roll() now truncates.
b359fe0 Only send requests for initial store snapshots for store modules.
a54f358 Readied the test suite for modifications and for using the snapshots.
6a9cb83 Removed, since reqChunk doesn't exist any more: // TODO(abourget): what is dispatched here would be much better as some of those objects in the WorkUnit instead, like the reqChunk directly. Ideally the Callback over there carries the reqChunk, and was seeded with that reqChunk.
37f4455 Splitter -> workplan.go. The WorkPlan now ALWAYS returns a WorkUnit for all stores, so you can safely iterate on the WorkPlan and be sure to cover all required stores. (Updated backprocess.go and workplan_test.go, where we never expect a nil anymore.)
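A minimal sketch of that invariant, assuming hypothetical workPlan and workUnit types (not the actual substreams types): building the plan inserts a non-nil unit for every required store, so iteration never needs a nil check.

    package main

    import "fmt"

    // workUnit is a stand-in for the per-store unit of work.
    type workUnit struct {
        modName string
        ranges  []string // placeholder for the ranges to process
    }

    // workPlan holds one non-nil workUnit per required store.
    type workPlan map[string]*workUnit

    func buildWorkPlan(storeNames []string) workPlan {
        plan := make(workPlan, len(storeNames))
        for _, name := range storeNames {
            plan[name] = &workUnit{modName: name} // an empty unit even when there is nothing to do
        }
        return plan
    }

    func main() {
        plan := buildWorkPlan([]string{"store_a", "store_b"})
        for name, unit := range plan {
            fmt.Println(name, unit.modName, len(unit.ranges)) // unit is never nil here
        }
    }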
70036bb Test and ensure bounds checks for bump intervals in store saves.
0bcc95b WIP: the partials that are computed are no longer controlled by the orchestrator; it only receives what the backends produced as partials, according to their own storeInterval configurations. They will therefore be able to take more or fewer snapshots depending on the size of stores, or other heuristics.
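A hedged sketch of that inversion of control, using an illustrative partialsTracker type rather than the real orchestrator code: the orchestrator simply records whatever ranges the backends report, whatever interval each backend chose, and merges contiguous ones.

    package main

    import (
        "fmt"
        "sort"
    )

    type blockRange struct{ start, end uint64 }

    // partialsTracker only records the partials that backends report
    // having produced; it never prescribes their boundaries.
    type partialsTracker struct{ ranges []blockRange }

    func (t *partialsTracker) report(r blockRange) { t.ranges = append(t.ranges, r) }

    // merged collapses contiguous reported partials into larger ranges.
    func (t *partialsTracker) merged() []blockRange {
        sort.Slice(t.ranges, func(i, j int) bool { return t.ranges[i].start < t.ranges[j].start })
        var out []blockRange
        for _, r := range t.ranges {
            if n := len(out); n > 0 && out[n-1].end == r.start {
                out[n-1].end = r.end
                continue
            }
            out = append(out, r)
        }
        return out
    }

    func main() {
        t := &partialsTracker{}
        t.report(blockRange{0, 1_000})      // a backend taking fine-grained snapshots
        t.report(blockRange{1_000, 2_000})
        t.report(blockRange{2_000, 10_000}) // another backend chose a coarser interval
        fmt.Println(t.merged())             // [{0 10000}]
    }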
f6e9b90 adding append state method with extern calls and implementation
248f89a adding a cacheEnabled attribute to the pipeline, and if statements to use or bypass the cache wherever the output cache is loaded, read, saved, or updated
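A minimal sketch of that gating, assuming hypothetical pipeline and outputCache types and a getOutput helper (none of these are the real substreams API): every cache interaction is guarded by the cacheEnabled flag.

    package main

    import "fmt"

    type outputCache struct{ data map[string][]byte }

    type pipeline struct {
        cacheEnabled bool
        cache        *outputCache
    }

    // getOutput consults and updates the output cache only when caching
    // is enabled; otherwise it always recomputes.
    func (p *pipeline) getOutput(key string, compute func() []byte) []byte {
        if p.cacheEnabled {
            if out, ok := p.cache.data[key]; ok {
                return out
            }
        }
        out := compute()
        if p.cacheEnabled {
            p.cache.data[key] = out // save/update is gated the same way
        }
        return out
    }

    func main() {
        p := &pipeline{cacheEnabled: true, cache: &outputCache{data: map[string][]byte{}}}
        fmt.Printf("%s\n", p.getOutput("block:100", func() []byte { return []byte("computed") }))
        fmt.Printf("%s\n", p.getOutput("block:100", func() []byte { return []byte("never called") })) // served from cache
    }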
c06d3b6 adding a check for the splitwork to make sure we have some work to do
33ba09b multi-threaded squasher store access, again
329599b nextExpectedBoundary refactor. We keep the explicit passing of block numbers for both LOADING and DELETING, so as not to hack our way around setting nextExpectedBoundary only to get a proper filename (if it were to pluck the end boundary from its nextExpectedBoundary). For WriteState(), there are two situations: in the first, the Squasher receives instructions from the subrequest as to what should be written, and has its own tracker of nextExpectedStartBlock, so rather than being driven by the underlying nextExpectedBoundary, it is the one driving the ranges. The only one left is the real-time case in saveStoresSnapshots(), which will therefore instruct WriteState() with the right block number. This concentrates all of the boundary checking INSIDE that function. And that's it.
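An illustrative sketch of that contract under assumed names (store, writeState; not the actual API): callers always pass the end boundary explicitly, whether they are the subrequest-driven Squasher or the real-time snapshot path, and the boundary checking lives inside the write function.

    package main

    import (
        "errors"
        "fmt"
    )

    type store struct {
        name                 string
        nextExpectedBoundary uint64
    }

    // writeState receives the end boundary explicitly from its caller,
    // so the boundary checking is concentrated in one place.
    func (s *store) writeState(endBoundary uint64) (string, error) {
        if endBoundary < s.nextExpectedBoundary {
            return "", errors.New("boundary is behind the next expected boundary")
        }
        s.nextExpectedBoundary = endBoundary
        return fmt.Sprintf("%s-%010d.kv", s.name, endBoundary), nil
    }

    func main() {
        s := &store{name: "pairs", nextExpectedBoundary: 10_000}
        fmt.Println(s.writeState(20_000)) // pairs-0000020000.kv <nil>
        fmt.Println(s.writeState(5_000))  // rejected: boundary is behind the next expected boundary
    }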
67bab9b only updating pipeline store map for backProcessedStores
249935f orchestrator: debugging job pool ordering. (queue channels are now unbuffered)
37ec9e6 orchestrator: first attempt at getting correct ordering
a3bf0ec remove the EOF err when cache streaming is done
89cefe5 remove the block dep on ModuleExecutor run func
a1920d4 remove this todo: because the only place StoresReady is call is after we received all sub request results and all callback and squash process are completed at that point // FIXME(abourget): Before checking the state of all those squashables, // we need to make sure all those Scheduler::Callback and Squash() calls // have finished, and that those merge() operations have completed.. // otherwise here we can't loop the squashables and expect to have // merged stores. // LET'S CHECK THAT LATER