Replies: 1 comment 1 reply
-
If you want to avoid thread blocking, then use the async API consistently.
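A minimal sketch of what consistently-async usage looks like with Lettuce (the connection URI and key name are illustrative; the chained callback replaces any blocking `get()`/`await()` call):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class AsyncUsageSketch {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisAsyncCommands<String, String> async = connection.async();

        // GET returns a RedisFuture (a CompletionStage). Chain the follow-up work
        // instead of blocking the calling thread on the result.
        RedisFuture<String> future = async.get("some-key");
        future.thenAccept(value -> System.out.println("value = " + value))
              .exceptionally(t -> { t.printStackTrace(); return null; });

        // Blocking here only so this standalone demo does not exit before the reply arrives.
        future.toCompletableFuture().join();
        connection.close();
        client.shutdown();
    }
}
```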
-
During our load tests, we noticed that when our EC2 instances are busy (CPU utilization close to 100%), the Lettuce latency metrics based on CommandLatencyEvent show slowness. For example, a simple GET command can take 100 ms on the client side, while on the Redis server side it only takes about 10 ms.
When the EC2 instances are not busy, all the Redis commands are fast on the client side.
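For context, a sketch of how these latency metrics can be consumed, assuming the stock CommandLatencyEvent publishing on the client's event bus (the emit interval and connection URI are illustrative):

```java
import java.time.Duration;

import io.lettuce.core.RedisClient;
import io.lettuce.core.event.DefaultEventPublisherOptions;
import io.lettuce.core.event.metrics.CommandLatencyEvent;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class LatencyMetricsSketch {
    public static void main(String[] args) {
        // Publish aggregated command latencies periodically (interval is illustrative).
        ClientResources resources = DefaultClientResources.builder()
                .commandLatencyPublisherOptions(DefaultEventPublisherOptions.builder()
                        .eventEmitInterval(Duration.ofSeconds(30))
                        .build())
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost:6379");

        // CommandLatencyEvent instances arrive on the client's event bus.
        client.getResources().eventBus().get()
                .filter(CommandLatencyEvent.class::isInstance)
                .cast(CommandLatencyEvent.class)
                .subscribe(event -> System.out.println(event.getLatencies()));
    }
}
```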
We have captured thread profiles (using the New Relic thread profiler) for both the busy and non-busy scenarios. When the instance is busy, the profile shows a non-zero percentage of time spent in ForkJoinPool.commonPool. We have many other asynchronous services, such as the async Apache HTTP client thread pool, that also use ForkJoinPool.commonPool, and our EC2 instance is pretty small. See attached file.
Do you have any suggestions on how to improve our performance? Is there a way to use a non-default thread pool in the Lettuce client configuration? Any suggestion that could improve performance is greatly appreciated. (We are using Lettuce 6.2.)
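For reference, a minimal sketch of the kind of non-default thread-pool configuration being asked about, assuming the standard 6.x ClientResources builder (the pool sizes are illustrative, and sizing alone does not fix CPU saturation on the instance):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class CustomPoolSizesSketch {
    public static void main(String[] args) {
        // Size Lettuce's own Netty I/O and computation pools explicitly rather than
        // relying on the defaults derived from the number of available processors.
        ClientResources resources = DefaultClientResources.builder()
                .ioThreadPoolSize(4)          // illustrative value
                .computationThreadPoolSize(4) // illustrative value
                .build();

        RedisClient client = RedisClient.create(resources, "redis://localhost:6379");
        // ... use the client ...
        client.shutdown();
        resources.shutdown();
    }
}
```

As an aside, CompletionStage callbacks registered with the `*Async` methods (e.g. `thenAcceptAsync`) default to ForkJoinPool.commonPool unless an explicit Executor is passed, which may matter if that pool is shared with other services on a small instance.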