Support containerd log format #412
Comments
You can add the env variable. |
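Presumably this refers to the container tail parser type variable that comes up later in the thread; a minimal sketch of how it might be set on the DaemonSet, assuming the FLUENT_CONTAINER_TAIL_PARSER_TYPE variable and a CRI-style regex (the exact expression is illustrative, not the original poster's):

```
# Hypothetical DaemonSet env entry; the regexp value is one of the
# workarounds discussed in this thread, not an officially documented one.
env:
  - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
    value: '/^(?<time>[^ ]+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/'
```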
The above regex worked for me, thanks! Could we make it work for both containerd and docker without setting the type? |
Hi. It may look like it works, but having dealt with OpenShift a lot lately: you're missing something. Eventually, you'll see log messages being split into several records. I've had to patch this myself. We could indeed set the parser type, but we also need to add the following:
Note that I'm setting a few additional options there. |
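A minimal sketch of the kind of filter being described, assuming the fluent-plugin-concat plugin is installed and that the tail source captures `log`, `logtag` and `stream` fields from the CRI format; this is an illustration, not the poster's exact configuration:

```
# Reassemble partial CRI log lines (logtag "P") into a single record.
# Requires fluent-plugin-concat; key names assume a CRI-style parse.
<filter kubernetes.**>
  @type concat
  key log
  use_partial_cri_logtag true
  partial_cri_logtag_key logtag
  partial_cri_stream_key stream
  separator ""
</filter>
```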
I have enabled Rancher logging with fluentd for containerd, but I am still getting the issue. Below is the env variable I have pasted into the daemonset env, and the output:
|
@arthurdarcet @faust64 How is the regex string supposed to work in that env variable? Which values are allowed? |
Maybe this should just be addressed with a flag. The issue has been present for such a long time, and it impacts other vendors that choose to spin value-added products around this. Word to the wise: docker is not the only front-end to containers, and container evolution continues. Addressing this properly now, rather than through sloppy workarounds with regexes or manipulation, would be a good thing. Better to get in front of the issue than lag behind. DB |
We can put an additional plugin into the plugins directory, e.g. https://github.com/fluent/fluentd-kubernetes-daemonset/tree/master/docker-image/v1.11/debian-elasticsearch7/plugins |
That would work, as I am willing to write and contribute a custom parser for this and save others the same issues. Or, rephrased: that is perhaps the best option, an additional plugin for this specific use case. |
@arren-ru : you are right, my mistake. Either way, that's not something you can currently configure only using environment variables. |
@faust64 I solved this by overriding kubernetes.conf with a ConfigMap mounted in place of the original configuration, with modified content. This gives a basic working solution:
|
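As an illustration of that approach, a sketch of the container-log source such an overriding kubernetes.conf might carry, assuming the CRI text format shown elsewhere in this thread (the paths, tag and time format are assumptions, not the poster's exact ConfigMap):

```
# Hypothetical replacement for the container-log source in kubernetes.conf.
<source>
  @type tail
  @id in_tail_container_logs
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type regexp
    # CRI format: <time> <stream> <logtag> <message>
    expression /^(?<time>[^ ]+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
    # Assumes RFC3339Nano UTC timestamps ending in a literal "Z".
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>
```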
The kind of solutions presented here will cause JSON logs to be parsed as a string, and no fields defined in the JSON itself will be recognized as Elasticsearch fields, correct? |
Not sure I understood you, but CRI logs are represented as a plain string line, unlike docker logs, so if you want to parse the JSON further you may want to add a pipelined parser or filter. |
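A sketch of such a pipelined step, using fluentd's built-in parser filter to try JSON on the already-extracted `log` field (the tag pattern and key name are assumptions based on the configs discussed above):

```
# Attempt to parse the "log" field as JSON; non-JSON records are kept
# as-is rather than being sent to the error stream.
<filter kubernetes.**>
  @type parser
  key_name log
  reserve_data true
  remove_key_name_field true
  emit_invalid_record_to_error false
  <parse>
    @type json
  </parse>
</filter>
```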
I had an issue where my log file was filled with backslashes. I am using containerd instead of docker. I solved it by putting in the following configuration:
|
Did not work for me on 1.20.1 hosted on VMs. Still the same errors full of backslashes. |
I am using containerd as the CRI for Kubernetes and used the FLUENT_CONTAINER_TAIL_PARSER_TYPE env var. Is there any solution to this problem, or can we change the time format via an env var? |
OK, I worked out how to fix this one. First, we know that we need to change the log parsing format, as containerd does not use the JSON format but a plain text format.
When we do this, it still shows an error with the time format.
To fix that error, we update the following value inside the source.
Now deploy the daemonset and it will work. |
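Presumably the value in question is the parser's time format; a sketch of what the updated parse section might look like, assuming containerd's RFC3339Nano UTC timestamps (the exact format string is an assumption):

```
  <parse>
    @type regexp
    expression /^(?<time>[^ ]+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
    # containerd writes timestamps like 2021-01-01T00:00:00.000000000Z,
    # so the format has to account for the nanoseconds and the trailing "Z".
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
```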
I've published new images with the parser included: #521, 2736b68. Reference: https://github.com/fluent/fluentd-kubernetes-daemonset#use-cri-parser-for-containerdcri-o-logs |
We are facing this issue with slashes. We would like to know whether there will be a newer version of the daemonset after the issue is fixed, or whether we need to use the workarounds permanently. Thanks. |
Is this issue fixed with BDRK-3386? |
From what I can see, there's still no way to concatenate partial logs coming from containerd or cri-o, nor to pass a regular expression, when containerd and cri-o require something like reconstructing logs split into multiple lines (partials).
The filter above relies on some additional configuration, and I'm not sure how to make those filter blocks and regexps conditional. |
I was stuck on this question all day until I saw your answer! Love this answer and its author! |
As per the discussion and this change, make sure to turn off greedy parsing for the timestamp.
With greedy parsing, there's a chance of runaway logging (log errors caused by scraping log errors). Context: |
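For example, the timestamp capture can be limited to the first space-delimited token instead of a greedy match (the surrounding expression is assumed from the regexes above):

```
# Greedy:     (?<time>.+)    can swallow part of the message on malformed lines
# Non-greedy: (?<time>[^ ]+) stops at the first space after the timestamp
expression /^(?<time>[^ ]+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
```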
Cool, it works well. |
It works for me! You are so gorgeous, @vipinjn24! |
I have a separate file, outside kubernetes.conf, for this. Just use the env variable. |
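Presumably this means pointing the parser-type variable at the CRI parser bundled in the images linked above (#521); a sketch, assuming those images:

```
# Hypothetical DaemonSet env entry selecting the bundled CRI parser.
env:
  - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
    value: cri
```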
Hmm let me see this one. |
After reading the whole thread and experimenting with the different settings posted here, I managed to get fluentd working with OKD4.
I set these two env vars and it works without overwriting any config files in the container. |
For the record, as it's now the 7th answer suggesting this ... with something like that alone, you will still run into the split partial-log records described earlier. |
Hi, I don't know why the logs are not being parsed as JSON first.
|
Never mind, I succeeded with this config:
|
The `containerd` runtime generates logs as a non-JSON string. When switched to the `containerd` runtime, `fluentd` will fail to parse any non-JSON log message and produce a large number of parse error messages in its own container logs. Here is an open issue in the `fluentd` repo: fluent/fluentd-kubernetes-daemonset#412

**docker** runtime (a valid JSON string):
`{"log":"2023-05-02 20:17:16 +0000 [info]: #0 [filter_kube_metadata_host] stats - namespace_cache_size: 0, pod_cache_size: 0\n","stream":"stdout","time":"2023-05-02T20:17:16.666667387Z"}`

**containerd** runtime (just a string):
`2023-05-02T20:17:28.143532061Z stdout F 2023-05-02 20:17:28 +0000 [info]: #0 [filter_kube_metadata_host] stats - namespace_cache_size: 0, pod_cache_size: 0`

Here is an example of a short entry from a `fluentd` container log:

```
2023-05-02 19:51:40 +0000 [warn]: #0 [in_tail_fluentd_logs] pattern not matched: "2023-05-02T19:51:17.411234908Z stdout F \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\""
```
We are getting an issue in the CRI parser of Fluent Bit after an EKS upgrade to 1.24. With the parser below, the `log:` prefix is missing when it forwards logs to Splunk.
|
Hi, I'm running k3s using containerd instead of docker.
The log format is different to docker's.
AFAIK it would just involve changing the @type json to a regex for the container logs, see k3s-io/k3s#356 (comment)
Would anyone be up for doing this? Maybe with some kind of env var to switch on the containerd support, e.g. CONTAINER_RUNTIME=docker as the default, with containerd as an alternative.