
redis client connections #433

Open
sdarwin opened this issue Jul 11, 2024 · 3 comments

sdarwin commented Jul 11, 2024

Hi,

We have django-health-check installed on our Django website.

INSTALLED_APPS += [
    # ...
    "health_check",
    "health_check.db",
    "health_check.contrib.celery",
    # ...
]

There are db and celery checks.
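For context, the checks are served over HTTP via the package's URL include; a minimal sketch of the wiring (the "ht/" prefix here is just our choice, not a requirement):

# urls.py (sketch): the endpoint the external monitors poll
from django.urls import include, path

urlpatterns = [
    path("ht/", include("health_check.urls")),
]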

Last week I set up external monitoring that contacts the health check every 5 minutes. The checks were all passing.

After 5 days there was an outage. It appears that every health check opens a connection to the Redis memorystore and does not close it. There were 44,000 open connections to Redis, which can crash the app.

What is unknown is whether this bug is specific to django-health-check and doesn't affect the rest of the website, or whether it has uncovered a problem in the website itself that could show up later with many visitors.

Usually databases have connection pooling that limits the number of connections. What about Celery and Redis?
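For reference, these are the pooling knobs I'm aware of on the Celery side; a hedged sketch only (the setting names are real Celery options, but the values, app name, and broker URL are illustrative assumptions):

# celery.py (sketch): cap the pools Celery keeps open to Redis
from celery import Celery

app = Celery("mysite", broker="redis://10.0.0.1:6379/0")  # placeholder broker URL
app.conf.broker_pool_limit = 10        # max broker connections kept in the pool
app.conf.redis_max_connections = 20    # cap on the Redis result-backend pool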

Do you believe this would be a "general website issue", and not caused by django-health-check?

I disabled the frequent health checks. The client connections stabilized again and stopped increasing.

@frankwiles (Member)

Hey @sdarwin!

I would imagine we (both REVSYS and other django-health-check users) would have run into this with the celery check if it were a bug in this library. I spun through the code looking for any spot where it might be opening an ancillary connection to check on anything, and I'm not spotting anything.

The redis check does open a connection, but in a context block that should drop the connection when it's done.
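Roughly the pattern in question, as a minimal sketch rather than the library's exact code (assumes redis-py 4.x, where the client is a context manager; the URL is a placeholder):

# Sketch of a context-managed Redis check: the client is closed when the block exits
from redis import from_url

def check_redis(redis_url="redis://10.0.0.1:6379/0"):  # placeholder URL
    with from_url(redis_url) as conn:
        return conn.ping()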

Also, even if the timing and days you mentioned were just estimates, that would have opened roughly 1,440 connections in that time frame (one every 5 minutes for 5 days), not 44k, so I think something else is going on. I have seen this issue before with Celery itself, but I don't think it's health-check related. An easy way to test would be to remove the celery check for a while and see if your Redis connection count continues to rise.


sdarwin commented Jul 11, 2024

See the screenshot: July 5 through July 10, connections steadily increasing.
Then a maintenance event cleared all connections. They resumed climbing until monitoring was disabled this morning, at which point the count levels off and flatlines.

5 minutes wasn't exactly right. I created nagios checks (every 5 minutes) AND prometheus, which seems to default to a 10-second scrape interval! That was the problem; at 10 seconds, the math accounts for the 44k.
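Back-of-the-envelope, assuming one connection per scrape over roughly the 5-day window:

# rough arithmetic behind the 44k figure (one connection per scrape is an assumption)
days = 5
scrape_interval_s = 10
print(days * 24 * 60 * 60 // scrape_interval_s)  # 43200, close to the 44k observed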

I will keep it this way until next week, and then attempt to reintroduce nagios (5 minutes) without prometheus, since prometheus must be a factor.

[Screenshot from 2024-07-11 14-26-04: Redis client connection count over time]


sdarwin commented Jul 31, 2024

Hi Frank,
Switching from prometheus (15 seconds) to nagios (5 minutes) reduced the rate of new connections as expected. However, they still occurred: after a day, there were around 600 open connections.

In this case, Redis is "GCP Memorystore", which probably shouldn't matter.

The open connections also appear on the Django side.

Here's an idea to replicate the issue. If you send me the URL of a health check on any other Django website, either by private email or in this issue, I will point prometheus at that other website. :-) At a 15-second rate, after a day there might be thousands of connections. To observe the open connections from the Python side, log into a k8s pod:

apt-get update
apt-get install net-tools
# count sockets open to Redis (port 6379)
netstat -anptu | grep 6379 | wc -l
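Or, as a Redis-side alternative, a small redis-py sketch (the URL is a placeholder for the Memorystore address):

# count connected clients as Redis itself reports them
import redis

r = redis.Redis.from_url("redis://10.0.0.1:6379/0")  # placeholder address
print(r.info("clients")["connected_clients"])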

k8s deployments generate new pods, which clears the connections, so during the test don't deploy code and confirm that the pods stay long-lived. Such an experiment could show whether the bug is widespread or not.
