Using the `feature/parquet` branch, I get really high memory usage from running my configuration. This mainly happens when running the EOS blockchain, specifically when there is already data in the target S3 bucket. If the bucket is created by the process, it runs fine (using around 3Gi of RAM). But if the container gets killed and restarts, the memory consumption starts to climb steadily, as shown in the image below.
The bundling of a segment is all done in memory, so that is expected right now. I haven't implemented scratch space to put data on disk and limit memory for now.
The current workaround, if memory usage is too big, is to reduce the number of blocks per bundle and perform the merging of "parquet bundles" out of process, as sketched below.
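For illustration, here is a minimal sketch of that out-of-process merge step, assuming the bundles are plain Parquet files and using `pyarrow`; the paths, glob pattern, and batch size are hypothetical, not the project's actual tooling. Streaming record batches keeps memory bounded instead of loading every bundle at once:

```python
# Sketch of the suggested workaround: produce smaller bundles in-process,
# then merge them into one larger "parquet bundle" in a separate process.
# All names here (paths, glob pattern) are illustrative assumptions.
import glob

import pyarrow.parquet as pq


def merge_bundles(bundle_glob: str, out_path: str, batch_size: int = 64_000) -> None:
    """Merge many small parquet bundle files into one, streaming record
    batches so memory stays bounded rather than loading all bundles at once."""
    paths = sorted(glob.glob(bundle_glob))
    writer = None
    try:
        for path in paths:
            pf = pq.ParquetFile(path)
            for batch in pf.iter_batches(batch_size=batch_size):
                if writer is None:
                    # Reuse the schema of the first bundle for the merged file.
                    writer = pq.ParquetWriter(out_path, batch.schema)
                writer.write_batch(batch)
    finally:
        if writer is not None:
            writer.close()


merge_bundles("bundles/blocks-*.parquet", "merged/blocks.parquet")
```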
Thank you for the quick answer. After further investigation, the problem was related to the cursor file being lost on container restart, since I was not using a persistent volume. I am not sure why this caused what looked like a memory leak; I would have expected the process to simply start back at the start block.
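For anyone hitting the same thing: the general pattern is that the cursor file must live on a persistent volume, and the process falls back to the configured start block when no cursor survives a restart. A minimal sketch of that pattern, assuming a simple file-based cursor (the project's actual cursor format and APIs may differ):

```python
# Minimal sketch of file-based cursor persistence; CURSOR_PATH and
# START_BLOCK are hypothetical names, not the project's real config.
import os

CURSOR_PATH = "/data/cursor"  # must sit on a persistent volume to survive restarts
START_BLOCK = 0               # hypothetical configured start block


def load_cursor() -> int:
    """Return the last committed block, or START_BLOCK if no cursor survives."""
    try:
        with open(CURSOR_PATH) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return START_BLOCK


def save_cursor(block_num: int) -> None:
    """Persist the cursor atomically so a crash mid-write cannot corrupt it."""
    tmp = CURSOR_PATH + ".tmp"
    with open(tmp, "w") as f:
        f.write(str(block_num))
    os.replace(tmp, CURSOR_PATH)
```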