failed to write data into buffer by buffer overflow action=:throw_exception #4506
Describe the bug
To Reproduce
Create Fluentd as a Kubernetes pod and use the fluentd.conf below from the configuration tab, which sends logs to AWS OpenSearch.

Expected behavior
Logs should be sent continuously to AWS OpenSearch without buffer-related errors.

Your Environment
- Fluentd version: 1.16.5
- Operating system: Amazon Linux 2
- Kernel version: 5.10.210-201.852.amzn2.x86_64

Your Configuration

# Input plugin to tail container logs
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type multi_format
    <pattern>
      format json
      time_key @timestamp
      time_format %Y-%m-%dT%H:%M:%S.%NZ
    </pattern>
    <pattern>
      format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
      time_format %Y-%m-%dT%H:%M:%S.%N%:z
    </pattern>
  </parse>
</source>

# Kubernetes metadata filter
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Output plugin to send logs to OpenSearch
<match kubernetes.**>
  @type opensearch
  include_tag_key true
  host "opensearch domain"
  port "443"
  scheme https
  ssl_verify true
  ssl_version TLSv1_2
  index_name services_log
  include_timestamp true
  tag_key @log_name
  time_key @timestamp
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  buffer_chunk_limit 2M
  buffer_queue_limit 32
  flush_interval 5s
  max_retry_wait 30
  disable_retry_limit
  num_threads 8
</match>

Your Error Log

2024-05-24 04:41:09 +0000 [warn]: #0 failed to write data into buffer by buffer overflow action=:throw_exception
2024-05-24 04:41:09 +0000 [warn]: #0 emit transaction failed: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/fluentd/vendor/bundle/ruby/3.2.0/gems/fluentd-1.16.5/lib/fluent/plugin/buffer.rb:330:in `write'" tag="kubernetes.var.log.containers.service.log"

Additional context
No response
This is not a bug. The volume of logs and the transfer rate need to be balanced. Is it possible that the pace of transfer is not keeping up with the pace of log generation?
In your config, the total buffer limit will be `buffer_chunk_limit` * `buffer_queue_limit` (2M * 32 = 64M). `buffer space has too many data` means the buffer has reached this limit and new data cannot be written. You can raise this limit by increasing `buffer_chunk_limit` or `buffer_queue_limit`.

(These are old-style settings from the Fluentd v0 series. Although you can still use these options in the v1 series, you should use the v1 format if possible. In any case, it has nothing to do with this issue.)
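For reference, a rough sketch of the same buffer settings in the v1 format, where the top-level `buffer_*` options move into a `<buffer>` section (option mapping is my interpretation; verify against the plugin docs before deploying):

```
<match kubernetes.**>
  @type opensearch
  # ... other output options unchanged ...
  <buffer>
    @type memory
    chunk_limit_size 2M       # was buffer_chunk_limit
    queue_limit_length 32     # was buffer_queue_limit
    flush_interval 5s
    flush_thread_count 8      # was num_threads
    retry_max_interval 30     # was max_retry_wait
    retry_forever true        # was disable_retry_limit
  </buffer>
</match>
```

In v1 it is generally simpler to cap the buffer with a single `total_limit_size` rather than the chunk-size * queue-length product.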
If there's a temporary spike in data generation and the transfer can't keep up for a short while, raising the limit should mitigate the issue.
If the transfers are constantly failing to keep up, raising the limit will not be effective.
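To see why a larger buffer only helps with temporary bursts, here is a small illustrative Python sketch (the rates are hypothetical numbers, not Fluentd internals): with a sustained deficit, a bigger buffer merely delays the overflow.

```python
def seconds_until_overflow(buffer_limit_mb: float,
                           ingest_mb_per_s: float,
                           flush_mb_per_s: float):
    """Time until a buffer of the given size fills, or None if it never does."""
    net = ingest_mb_per_s - flush_mb_per_s  # sustained deficit (MB/s)
    if net <= 0:
        return None  # flushing keeps up; the buffer never overflows
    return buffer_limit_mb / net

# With the 64 MB buffer from the config (2M chunks * 32 queue length):
print(seconds_until_overflow(64, 10, 8))    # 2 MB/s deficit -> 32.0 s
print(seconds_until_overflow(128, 10, 8))   # doubling the limit -> 64.0 s
print(seconds_until_overflow(64, 10, 12))   # flush keeps up -> None
```

Doubling the limit doubles the time to overflow but does not prevent it; only raising throughput (e.g., more flush threads or a faster backend) or reducing log volume fixes a constant deficit.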