Struggling with unmanaged memory #361
Comments
Hey @EmanueleAlbero, By default, Streamiz (and Kafka Streams JAVA as well) uses one RocksDb instance per store per partition; for a windowed store it's at least 3 RocksDb instances per store per partition. Each RocksDb instance keeps its own unmanaged memory: a block cache, write buffers (memtables), and index/filter blocks.
So the more partitions you have, the more unmanaged memory you need, especially if you have stream-stream join operations. In JAVA, you can configure a RocksDb config setter to override the default behavior. In Streamiz, you can do more or less the same thing, except for caching the index and filter blocks in the block cache to avoid a lot of unmanaged memory consumption. It could be a good enhancement to bound the unmanaged memory. Let me fill the gap in the next release.
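For reference, in Kafka Streams (Java) such a config setter can look like the sketch below. It follows the pattern from the Kafka Streams memory-management docs: share one bounded block cache across stores, cache the index and filter blocks inside it so they count against its limit, and cap the memtables. The cache and buffer sizes here are illustrative assumptions, not recommended values.

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;

// Sketch: bound RocksDB unmanaged memory per instance (sizes are illustrative).
public class BoundedMemoryConfigSetter implements RocksDBConfigSetter {
    // One cache shared by all store instances keeps total block-cache usage bounded.
    private static final Cache SHARED_CACHE = new LRUCache(64 * 1024 * 1024L);

    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        BlockBasedTableConfig tableConfig = (BlockBasedTableConfig) options.tableFormatConfig();
        tableConfig.setBlockCache(SHARED_CACHE);
        // Count index & filter blocks against the block-cache limit instead of
        // letting them grow unbounded outside of it.
        tableConfig.setCacheIndexAndFilterBlocks(true);
        tableConfig.setPinTopLevelIndexAndFilter(true);
        options.setTableFormatConfig(tableConfig);
        // Limit memtable memory: at most 2 write buffers of 16 MB each.
        options.setMaxWriteBufferNumber(2);
        options.setWriteBufferSize(16 * 1024 * 1024L);
    }

    @Override
    public void close(String storeName, Options options) {
        // SHARED_CACHE is static and reused across stores, so it is not closed here.
    }
}
```

The setter is then registered via the `rocksdb.config.setter` config. As noted above, the index/filter-in-block-cache part is the piece Streamiz did not yet expose at the time of this thread.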
Hey @LGouellec thanks, very informative!
Hey @EmanueleAlbero , 1- RocksDb uses indexes and filters to fetch data quickly. These indexes and filters are stored in memory, but let me try to reproduce the behavior to rule out a memory leak. 2- Which metrics package do you use? Btw, I'm currently conducting a satisfaction survey to understand how I can serve you better, and I would love to get your feedback on the product. Thank you for your time and feedback!
Hi @LGouellec, here is the result of a test with the metrics completely disabled. I've participated in the survey, and I want to thank you once again for the amazing job you are doing.
Hey @EmanueleAlbero , So it seems that the OpenTelemetry exporter has a memory leak. I'll fix it.
Hi @LGouellec, we are experiencing a similar memory leak when using the Prometheus exporter instead of OpenTelemetry. We are using Streamiz 1.6.
Hey @hedmavx , You mean that if you disable the Prometheus exporter, you no longer have a memory leak?
@hedmavx , Can you reproduce the memory leak with the Prometheus exporter and provide a thread dump, please? Best regards,
I have found the memory leak in the OpenTelemetry reporter. I'll try to fix it asap. Best regards,
Hi, sadly we can't get a thread dump in the environment where we are running the application. Best regards
Description
Hi, I'll start by saying that I don't know whether this is an actual issue or just a misconfiguration problem.
However, I have a topology with 2 KStreams and an outer join between these 2 KStreams.
KStream1 receives data every 100ms.
KStream2 receives data roughly every 3-4s.
The join uses a RocksDb store with default settings.
Everything works fine, but I can see the unmanaged memory growing indefinitely.
This is a picture from dotMemory of the application after several hours of work (on the very same partition).
To add more context, I'm also applying a 20 min grace period, a 10 min retention time, and 10 min for WindowStoreChangelogAdditionalRetentionMs.
Running the application in both Linux and Windows environments shows similar behavior.
Is there something I can check/verify in the configuration to avoid this issue?
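Based on the maintainer's explanation earlier in the thread (one RocksDb instance per store per partition, and at least 3 per windowed store per partition), a quick back-of-the-envelope estimate shows why a stream-stream outer join like this one consumes so much unmanaged memory: the join materializes two window stores, so the instance count multiplies fast. The partition count and per-instance sizes below are illustrative assumptions, not Streamiz defaults.

```java
// Rough estimate of unmanaged memory for a stream-stream join topology.
public class MemoryEstimate {

    // Total RocksDB instances: partitions x window stores x instances per windowed store.
    static long rocksDbInstances(int partitions, int windowStores, int instancesPerStore) {
        return (long) partitions * windowStores * instancesPerStore;
    }

    // Per-instance unmanaged memory: block cache + (memtable count x write-buffer size).
    static long estimateBytes(long instances, long blockCacheBytes,
                              int writeBuffers, long writeBufferBytes) {
        return instances * (blockCacheBytes + (long) writeBuffers * writeBufferBytes);
    }

    public static void main(String[] args) {
        // Assumptions: 8 partitions, 2 window stores (one per side of the outer join),
        // at least 3 RocksDB instances per windowed store per partition.
        long instances = rocksDbInstances(8, 2, 3); // 48 instances
        long mb = 1024 * 1024L;
        // Assumed per-instance sizes: 50 MB block cache, 3 x 16 MB memtables.
        long total = estimateBytes(instances, 50 * mb, 3, 16 * mb);
        System.out.println(instances + " RocksDB instances, ~" + (total / mb) + " MB unmanaged");
        // -> 48 RocksDB instances, ~4704 MB unmanaged
    }
}
```

Even with modest per-instance settings, the total grows linearly with the partition count, which matches the observation that the memory profile worsens as partitions are added.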