
Allow removing some volumes so that bounded PV check can pass when scaling down #6976

Closed
jewelzqiu wants to merge 1 commit

Conversation

jewelzqiu

What type of PR is this?

/kind feature

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #6417

Special notes for your reviewer:

Previous PR: #6418
Specifying --skip-nodes-with-local-storage=false won't help, because that flag only affects the drainability check.
After the drainability check, the pod enters the bounded PV check, which fails because the local PV is bound to the specific node being scaled down. (A rough sketch of the idea is shown below.)
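To illustrate the idea, here is a minimal Go sketch (assuming the k8s.io/api types). It is not the code in this PR; the helper name withoutPVCVolumes and the example pod are made up purely for illustration. It strips PVC-backed volumes and their mounts from a deep copy of a pod, so that the node affinity of a bound local PV no longer pins the pod to the node under consideration:

```go
// Hypothetical sketch, not the code in this PR: strip PVC-backed volumes (and
// their mounts) from a deep copy of the pod before asking "could this pod run
// on another node?", so that the node affinity of a bound local PV no longer
// pins the pod to the node being considered for removal.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// withoutPVCVolumes returns a deep copy of the pod with every volume that is
// backed by a PersistentVolumeClaim removed, along with the matching volume
// mounts in the regular containers. Init containers and ephemeral volumes are
// ignored here for brevity.
func withoutPVCVolumes(pod *corev1.Pod) *corev1.Pod {
	copied := pod.DeepCopy()

	removed := map[string]bool{}
	kept := copied.Spec.Volumes[:0]
	for _, v := range copied.Spec.Volumes {
		if v.PersistentVolumeClaim != nil {
			removed[v.Name] = true
			continue
		}
		kept = append(kept, v)
	}
	copied.Spec.Volumes = kept

	for i := range copied.Spec.Containers {
		c := &copied.Spec.Containers[i]
		mounts := c.VolumeMounts[:0]
		for _, m := range c.VolumeMounts {
			if !removed[m.Name] {
				mounts = append(mounts, m)
			}
		}
		c.VolumeMounts = mounts
	}
	return copied
}

func main() {
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "data",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{ClaimName: "local-pvc"},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "app",
				VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
			}},
		},
	}
	fmt.Println(len(withoutPVCVolumes(pod).Spec.Volumes)) // prints 0
}
```

Working on a deep copy keeps the check free of side effects, which matters because the same pod object is shared by other parts of the autoscaler; whether the stripping should happen inside the check itself or on a copy made elsewhere is discussed in the review below.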

Does this PR introduce a user-facing change?

Allow removing some volumes so that bounded PV check can pass when scaling down

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jun 26, 2024
@k8s-ci-robot
Contributor

Hi @jewelzqiu. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jun 26, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jewelzqiu
Once this PR has been reviewed and has the lgtm label, please assign feiskyer for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jewelzqiu
Author

@x13n Hi, I have resubmitted the PR because specifying --skip-nodes-with-local-storage=false won't help in the scenario from the original issue (the bounded PV check will still fail).

@jewelzqiu
Author

@wackxu Please take a look at this PR.

@x13n
Member

x13n commented Aug 30, 2024

Can you clarify which check is failing on scale down for bounded PV?

@x13n
Member

x13n commented Aug 30, 2024

Ok, I went through the linked issue; it looks like the problem is with the logic that checks whether other nodes in the cluster would be able to run the pod. They can't, because the local volume has affinity to a specific node. The change currently proposed in this PR might work, though I think modifying the pod object in place like this is error-prone: this is supposed to be a check, not a mutating operation on pod objects. A better place to remove these volumes would be during the simulation of node removal; we are already making a copy of the object there: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/simulator/cluster.go#L229
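To make the suggestion concrete, here is a rough sketch of that placement (not actual autoscaler code); the names simulateNodeRemoval and schedulesElsewhere are stand-ins invented for this illustration, and the matching container volume mounts would also need the same cleanup:

```go
// Rough sketch of the placement suggested above, not actual autoscaler code:
// the volumes are dropped on the pod copy that the node-removal simulation
// already creates, so the shared pod object is never mutated. The names
// simulateNodeRemoval and schedulesElsewhere are stand-ins for illustration.
package sketch

import corev1 "k8s.io/api/core/v1"

// schedulesElsewhere stands in for the real simulator call that checks
// whether a pod fits on one of the remaining nodes.
func schedulesElsewhere(pod *corev1.Pod) bool { return true }

// simulateNodeRemoval checks every pod that would be evicted from the node:
// work on a copy, drop its PVC-backed volumes, and only then run the
// schedulability check, so a bound local PV's node affinity cannot pin the
// copy to the node being removed. The matching container volume mounts are
// left untouched here for brevity.
func simulateNodeRemoval(podsToEvict []*corev1.Pod) bool {
	for _, pod := range podsToEvict {
		podCopy := pod.DeepCopy()
		kept := podCopy.Spec.Volumes[:0]
		for _, v := range podCopy.Spec.Volumes {
			if v.PersistentVolumeClaim == nil {
				kept = append(kept, v)
			}
		}
		podCopy.Spec.Volumes = kept
		if !schedulesElsewhere(podCopy) {
			return false
		}
	}
	return true
}
```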

@x13n
Member

x13n commented Sep 24, 2024

I'm closing this due to lack of activity. If anyone wants to pick up this work, I'd actually suggest making it work for both scale up and scale down paths - to both unblock scale down of pods with local PV and prevent such pods from getting stuck pending.

/close

@k8s-ci-robot
Contributor

@x13n: Closed this PR.

In response to this:

I'm closing this due to lack of activity. If anyone wants to pick up this work, I'd actually suggest making it work for both scale up and scale down paths - to both unblock scale down of pods with local PV and prevent such pods from getting stuck pending.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
