Our scenario: we use modify_user_record.lua to add some log dimensions. Some of the values are dynamic, so we need hot reload to pick up the new value and attach the correct dimension accordingly. The filter looks roughly like the sketch below.
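(This is a minimal illustrative sketch, not our exact script: the Groupid dimension name and the file the dynamic value is read from are placeholder assumptions. The point is that the value is captured when the script is loaded, so a hot reload is what refreshes it.)

-- modify_user_record.lua (illustrative sketch)
-- Hypothetical: the dynamic group id is read from a file when the script loads.
local function read_group_id()
    local f = io.open("/etc/aml/group_id", "r")  -- placeholder path
    if f == nil then
        return "unknown"
    end
    local id = f:read("*l")
    f:close()
    return id
end

-- Captured once per (re)load: a hot reload re-executes the script,
-- which is how the dimension value gets refreshed.
local group_id = read_group_id()

function modify_record(tag, timestamp, record)
    record["Groupid"] = group_id
    -- return code 1 = record was modified, keep the original timestamp
    return 1, timestamp, record
end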
My question is: what is the flush strategy around a reload, and can records be polluted across a reload? For example, some logs belong to {Groupid: group1}, but after a reload the group changes to group2 (the dimension is added via the Lua file). Is it possible that group1 logs are flushed with a delay and mistakenly tagged as group2? And will the group1 logs buffered before the reload be dropped or kept?
If the existing logic or configuration settings can avoid the problems above, that would be very helpful.
Looking forward to your reply. Thanks!
[SERVICE]
    Flush            1
    Daemon           Off
    Log_Level        warning
    HTTP_Server      On
    HTTP_Listen      0.0.0.0
    HTTP_Port        2020
    Hot_Reload       On
    Parsers_File     parsers.conf

# input for kubelet logs
[INPUT]
    Name             tail
    Tag              mdsd.amlmanagedcomputelogs
    Path             /var/log/kubelet/*.log
    Path_Key         source_path
    Parser           extract_kubelet_log
    DB               /tmp/flb_kube.db
    DB.Sync          Normal
    Read_From_Head   true
    Mem_Buf_Limit    10MB
    Skip_Long_Lines  On
    Refresh_Interval 2
    Rotate_Wait      10

# filter for reconstructing all infra_container_logs
[FILTER]
    Name             lua
    Match            mdsd.amlmanagedcomputelogs
    script           /fluent-bit/etc/modify_infra_record.lua
    call             modify_record

# filter for reconstructing all user_container_logs according to the requirements of shoebox
[FILTER]
    Name             lua
    Match            mdsd.*azuremonitorlogs
    script           /fluent-bit/etc/modify_user_record.lua
    call             modify_record

# filter for extracting event logs from the kubelet logs;
# rewrite_tag duplicates matched records and re-emits them under a new tag
# refer: https://docs.fluentbit.io/manual/pipeline/filters/rewrite-tag
[FILTER]
    Name             rewrite_tag
    Match            mdsd.amlmanagedcomputelogs
    Rule             $log ^(Event\(v1\.ObjectReference) container_event true

# filter for reconstructing kubelet event logs
[FILTER]
    Name             lua
    Match            container_event
    script           /fluent-bit/etc/modify_event_record.lua
    call             modify_record

[OUTPUT]
    Name             forward
    Match            mdsd.*
    Host             127.0.0.1
    Port             5101

[OUTPUT]
    Name             http
    Match            container_event
    Host             127.0.0.1
    Port             8911
    URI              /v1/event
    Format           json
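For reference, we trigger the reload through the built-in HTTP server enabled above (this assumes a Fluent Bit version new enough to support Hot_Reload; if I read the docs correctly, sending SIGHUP to the process should behave the same way):

# trigger a hot reload via the monitoring endpoint
curl -X POST http://127.0.0.1:2020/api/v2/reload

# check how many reloads have happened so far
curl http://127.0.0.1:2020/api/v2/reload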