As per the example provided at https://github.com/aws-samples/amazon-cloudwatch-container-insights/blob/master/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml, Fluent Bit requests `500m` of CPU inside EKS. According to the Fluent Bit documentation and the statistics provided by AWS, Fluent Bit is much less CPU- and memory-intensive than Fluentd, so why does this example request more CPU than the Fluentd counterparts from the past? I would also like to know how to decide how much CPU/memory should be allocated to the DaemonSet pod, because with the above configuration it is almost impossible for me to use a `t3.medium` or `t3.large` instance: most of its CPU is consumed by the DaemonSet pod, leaving little to no room for real workload pods.
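For anyone tuning this, the knob in question is the `resources` block on the fluent-bit container in that DaemonSet manifest. Below is a minimal sketch of what lowering the request might look like; the container name, image tag, and the specific values are assumptions for illustration, not AWS guidance, so verify them against the linked fluent-bit.yaml and your own observed usage.

```yaml
# Sketch: fluent-bit DaemonSet container with lowered resource requests.
# Values are illustrative assumptions; size them from observed usage.
containers:
  - name: fluent-bit
    image: amazon/aws-for-fluent-bit:stable
    resources:
      limits:
        cpu: 200m        # ceiling before the container is throttled
        memory: 200Mi
      requests:
        cpu: 100m        # what the scheduler reserves on each node
        memory: 100Mi
```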
I have a similar issue. I have a tiny cluster used for very light work; it normally runs about 2 to 5 nodes with the autoscaler, and fluent-bit consumes more CPU than any other workload in the cluster. When the cluster is on 2 nodes without a lot of load, it consumes about 1 vCPU of the EC2 instances. I'm not sure why, or how to determine why.
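One way to see what the pods actually consume, as opposed to what they request, is `kubectl top` (requires metrics-server). The namespace and label below assume the Container Insights defaults; adjust them if your setup differs.

```sh
# Actual per-pod usage of the fluent-bit DaemonSet (needs metrics-server).
# Namespace/label assume the Container Insights defaults.
kubectl top pod -n amazon-cloudwatch -l k8s-app=fluent-bit

# Compare against the requests/limits currently configured on the DaemonSet.
kubectl get daemonset fluent-bit -n amazon-cloudwatch \
  -o jsonpath='{.spec.template.spec.containers[0].resources}'
```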