Logzio appender stops sending log events #45
@martijnblankestijn Thank you for letting us know. |
The application is running on … How do I check the queue? I have seen that it is a binary file in the /tmp file system, and I can see that it changes (looking at the timestamp of the file). And would it be sensible to switch to the in-memory queue, as we do not really need the file-based mechanism? |
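For reference, switching between the on-disk queue and the in-memory queue is normally just a property on the appender. Below is a minimal logback.groovy sketch, assuming the appender's documented inMemoryQueue and logzioUrl options; the token and appender name are placeholders, not taken from this thread.

import io.logz.logback.LogzioLogbackAppender
import static ch.qos.logback.classic.Level.INFO

appender("Logzio", LogzioLogbackAppender) {
    token = "<your-logzio-token>"            // placeholder, use your real shipping token
    logzioUrl = "https://listener.logz.io:8071"
    inMemoryQueue = true                     // keep the queue in memory instead of under /tmp
}

root(INFO, ["Logzio"])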
@martijnblankestijn |
Just happened again; the last log message sent to logz.io was from 'July 6th 2019, 05:36:53.238'. Looking at the /tmp directory, I found the following files: … The time of looking was 08:42, so the queue file page-1.dat was probably still being changed.
This got me a hit on page-1.dat, so yes, the appender is still writing to disk. |
@martijnblankestijn |
To reproduce it, just run our application for a couple of days and then it happens. I activated the debug logging; you can find typical output in my first post, it just stops sending and draining. If there is more debug logging I can turn on, just let me know. What do you mean by restarting the logger? How can this be done? |
@martijnblankestijn I will try to reproduce it on our end. It's not that easy, so it might take me some time. If you're running in a Kubernetes set-up, you can use our k8s daemonset or Docker collector for now. |
@martijnblankestijn Sorry for the late response. I can't figure out how to reproduce it on our end. |
Hi Ido,
No problem. I'm just back from holiday, so I can totally understand.
We still see it happening after two or three days.
We downgraded to a previous version: same result.
We tried the in-memory variant: same result.
We have not tried the Kubernetes variant yet.
|
@martijnblankestijn Hi, checking in to see whether you managed to resolve it or still need our help? |
@idohalevi The issue hasn't been fixed |
@Doron-Bargo @yyyogev can you assist? |
We're seeing the same issue with a similar stack using Micronaut. I've tried both in-memory and disk queues. In both cases, all replicas running in Kubernetes work fine for days and then all of a sudden stop shipping logs. Kubernetes: v1.16.6. logback.groovy: …
For what it's worth in our case, a container seems to ship logs successfully for about 6 days and then stop. If I restart the container it starts logging again. |
@idohalevi @Doron-Bargo @yyyogev Any progress or suggestion on how to proceed from here? |
Does it happen consistently after 6 days? |
@yyyogev Yes, so far. Could be less about time and more about how long it takes before the queue fills up. The log rate is pretty consistent. |
Any update on this? |
Not yet. We are working on it, but it's very hard to reproduce. We have a machine running a few instances for more than 2 weeks now and it's fine. |
We're running on K8s as well and using the on-disk queue. We tried to switch to in-memory but got the same issue. |
@pachecopaulo did you use a sidecar container approach with filebeat in the same pod? |
@pajaroblanco I didn't. We just have another app running with Filebeat |
For us, moving from the alpine-slim variant to the standard adoptopenjdk image fixed the issue. Old: "adoptopenjdk/openjdk11-openj9:jdk-11.0.1.13-alpine-slim". After making that change, logs have been shipping consistently from the logback appender. |
Can anyone else who had this problem try the above fix and let us know if it works? |
Hmm, sorry, I left the project, so I no longer have the option to verify it.
Kind regards,
Martijn Blankestijn
|
This one was painful to debug, but I think I got to the bottom of it after 4 hours of debugging. In my case it stopped because fileSystemFullPercentThreshold is pre-set to 98%. My laptop has 35 GB of disk space left out of 2 TB, which means the disk is more than 98% used, and that's why it stopped logging. If you set it to -1 it will not check for disk space, which is OK for a laptop; however, I don't recommend doing that if running inside a container. I hope it helps. |
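For anyone hitting the same symptom, the threshold is set on the appender itself. A minimal logback.groovy sketch, assuming the fileSystemFullPercentThreshold option named in the comment above; the token is a placeholder.

import io.logz.logback.LogzioLogbackAppender
import static ch.qos.logback.classic.Level.INFO

appender("Logzio", LogzioLogbackAppender) {
    token = "<your-logzio-token>"            // placeholder
    logzioUrl = "https://listener.logz.io:8071"
    // Default is 98: shipping stops once the disk holding the queue is more than 98% full.
    // -1 disables the check entirely; fine for a laptop, not recommended inside a container.
    fileSystemFullPercentThreshold = -1
}

root(INFO, ["Logzio"])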
Everything goes well with shipping the log events to logz.io, until after a day or three the appender stops sending events to logzio.
This has happened on multiple occasions.
We enabled debug logging of the logzio appender and sender. A snippet of the output in the log file is captured below.
Looking at the log file, it tries to drain the queue with one logging event in it, but never sends it. After that it does not attempt to drain the queue again.
And the log line that should be sent is identical to other log lines it already sent earlier in the run of the application, so we have no reason to suspect that it has anything to do with the content of that log line.
The version we use:
compile "io.logz.logback:logzio-logback-appender:1.0.22"