Give redis-box a restart policy #47
I haven't had this problem. I usually leave the redis box alive and scale my crawl cluster down to 0 or 1 nodes. Then, when I want to get going again, I use this redis flush command to make sure the crawl is clear.
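The exact flush command isn't preserved above; a minimal sketch of how it might look, assuming the redis-box pod carries an `app=redis-box` label and serves plain `redis-cli` (both assumptions, adjust to the actual manifests):

```bash
# Find the redis-box pod (the label is an assumption; adjust to your manifests)
kubectl get pods -l app=redis-box

# Flush all keys so the next crawl starts from a clean queue
kubectl exec -it <redis-box-pod-name> -- redis-cli FLUSHALL
```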
I was describing a scenario using Kubernetes on GCP, and from the commands you are using I'm guessing you're running locally with docker-compose.
Perhaps my approach of just scaling the cluster down to a single node was semantically wrong, and I should have instead reduced the parallelism of the openwpm-crawl job. But even so, from my understanding the redis-box should always be running until the cluster is deleted.
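For reference, lowering a Job's parallelism instead of shrinking the node pool could look roughly like this (the job name `openwpm-crawl` is taken from the comment above; the actual resource name in the manifests may differ):

```bash
# Reduce the number of crawl worker pods without touching the node pool
kubectl patch job openwpm-crawl --type=merge -p '{"spec": {"parallelism": 1}}'
```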
Sorry, in that particular case I was, but I do most of my crawls on GCP.
No, I regularly scale down my Kubernetes cluster as you described. It's normal for me to want to run a series of crawls, and setting up all the infrastructure over and over again is a pain. So as a cost compromise, I scale resources up and down as you describe, so that I'm not needlessly leaving lots of nodes running for an extended period, but I save myself from repeating the same setup every time.
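A sketch of that kind of scaling on GKE, purely for illustration; the cluster and node-pool names are placeholders:

```bash
# Shrink the node pool to a single node between crawls to save cost
gcloud container clusters resize my-crawl-cluster --node-pool default-pool --num-nodes 1

# Scale back up before kicking off the next crawl
gcloud container clusters resize my-crawl-cluster --node-pool default-pool --num-nodes 10
```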
Hmm, interesting. Maybe I was just too impatient and the redis box would have popped up again, but the cluster claimed it was healthy and I was unable to do
Well, the redis box is separate. If you run
Hmm, this is the error message I got:
Yep. Happy to chat here or on Matrix.
When I wanted to scale the cluster down to a single node to get the jobs redistributed, the node that the redis-box pod was running on got killed, and the pod just didn't spin back up but accepted its deletion, leaving me unable to access the hosted redis. Maybe that pod should have a restart policy.
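One caveat worth noting: a pod-level `restartPolicy` only restarts containers on the node the pod was originally scheduled to, so it won't help when the node itself is removed. For the pod to be rescheduled onto a surviving node, it needs to be owned by a controller such as a Deployment. A minimal sketch of that approach, assuming redis-box is currently a bare pod; the name, labels, and image below are placeholders, not the repo's actual manifests:

```bash
# Sketch: run redis-box under a Deployment so it gets rescheduled if its node goes away.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-box
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-box
  template:
    metadata:
      labels:
        app: redis-box
    spec:
      restartPolicy: Always   # the only policy Deployments allow; restarts the container in place
      containers:
        - name: redis
          image: redis:5      # placeholder image/tag
          ports:
            - containerPort: 6379
EOF
```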