The operator has a very nice feature, node_readiness_label (docs), for tracking whether the node the master runs on is under an upgrade procedure. However, the current implementation only works for some cloud providers. I already mentioned it here.
If the operator can move the master to another replica more accurately and smoothly than either letting Kubernetes force-drain the node (when the provider eventually gives up on the PDB and drains the node anyway) or removing the master replica's PDB and pushing your luck (k8s can kill the master first, then kill the new master, and so on - you can't predict the rotation order), I believe it could and should be addressed.
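For reference, this is roughly how the feature is consumed today (a minimal sketch of the operator configuration; the lifecycle-status:ready key/value is only an illustration, and the exact layout may differ per deployment):

```yaml
# Operator configuration (excerpt): nodes are treated as "ready" only while
# they carry this label; when the label disappears, the operator migrates the
# master off the node before it is drained.
node_readiness_label: "lifecycle-status:ready"
```

This only works when the cloud provider actually maintains such a label on its nodes, which is the root of the problem described below.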
Citing myself from previous issue:
I wanted to implement node_readiness_label on my clusters, but during testing I found out my managed k8s provider (DigitalOcean) doesn't set any labels for Ready nodes. Instead, it sets an annotation on nodes which are being drained. So, there are a few possible use cases:
1. label added for a Ready node, removed when the node is about to be recycled (addressed by node_readiness_label)
2. label removed for a Ready node, added when the node is about to be recycled
3. annotation added for a Ready node, removed when the node is about to be recycled
4. annotation removed for a Ready node, added when the node is about to be recycled (my case)
Only the 1st case is covered by the node_readiness_label feature. It should be addressed somehow.
The simplest solution is to watch for node spec.unschedulable==true, or for the taint effect: NoSchedule; key: node.kubernetes.io/unschedulable appearing. It has a drawback: if an engineer runs kubectl cordon on a node for some other purpose, the master will be rotated anyway.
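A minimal sketch of what that would mean in practice, assuming standard Kubernetes behaviour (cordoning or draining a node sets spec.unschedulable, and Kubernetes adds the matching taint):

```yaml
# Abridged node object after `kubectl cordon <node>` or a provider-initiated
# drain; the operator could treat either of these appearing as the signal to
# move the master off the node.
spec:
  unschedulable: true
  taints:
  - key: node.kubernetes.io/unschedulable
    effect: NoSchedule
```

A manually cordoned node looks exactly the same, which is the drawback mentioned above.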