Deprecate and remove kustomize from kubectl #4706
/sig cli
Hello @soltysh 👋, Enhancements team here. Just checking in as we approach enhancements freeze on 02:00 UTC Friday 14th June 2024 / 19:00 PDT Thursday 13th June 2024. This enhancement is targeting for stage
Here's where this enhancement currently stands:
For this KEP, we would need to update the following:
The status of this enhancement is marked as
If you anticipate missing enhancements freeze, you can file an exception request in advance. Thank you!
Hello @soltysh 👋, 1.31 Enhancements team here. Now that PR #4712 has been merged, all the KEP requirements are in place and merged into k/enhancements, so this enhancement is all good for the upcoming enhancements freeze. 🚀 The status of this enhancement is marked as
Hello @soltysh 👋, 1.31 Docs Lead here.
Hey @Princesso, we'll probably want to put together a blog post around the 1.31 release to better advertise this deprecation along with the future plan for removal, so that more users are aware of it. I'll follow up with appropriate PRs.
I'm surprised and disappointed to see this proposed. I don't love the current state, but it's the commitment we made to users... we should not just break them without very, very good reasons. I think the KEP enormously underestimates the impact. There are thousands of publicly visible uses and likely even more non-public ones. Dropping support / breaking those uses does reputational damage to Kubernetes for being unstable in new versions. The justification / motivation in the KEP is vague.
Without having read the KEP, just the headline... I'm pretty strongly in the "no, that would break users" camp. I'm all for throwing warnings - use colors and flashing terminal codes, heck - make it play the Star Wars alarm siren on the PC speaker if you can. As a recent victim of tools breaking underneath me, let's PLEASE take this seriously. It's just about the worst thing we can do to people. Look, I hate past me more than anyone, but I have to live with his idiotic, short-sighted decisions. https://youtu.be/EjR1Ht__9KE?si=8cymBHCdN-UbPx4U - FF to 12:22
Hi @soltysh, 👋 from the v1.31 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement! To opt in, let us know and open a Feature Blog placeholder PR against the website repository by 3rd July, 2024. For more information about writing a blog see the blog contribution guidelines. Note: In your placeholder PR, use XX characters for the blog date in the front matter and file name. We will work with you on updating the PR with the publication date once we have a final number of feature blogs for this release.
@liggitt @thockin thanks for your valuable input; I think it's important that we start having these conversations. I'll probably open this topic again with sig-cli or even with sig-arch, so that we can discuss the potential path forward. Like I said when talking with Jordan on Slack, nothing is set in stone, but at the same time we shouldn't stay stuck in a place that we all seem to agree is not the best one.
Has it been considered to implement a compatibility mode after kubectl moves from the deprecation to the removal state? Such a compatibility mode could shell out to the kustomize binary and emulate what the native kubectl kustomize integration was doing. This compatibility/emulation mode could be in a deprecation phase from day one while printing HUGE warning messages about what is happening. This way, kubectl could remove its compile-time dependency on kustomize and leave the compatibility mode in place for much longer. Of course, users would be required to install kustomize alongside kubectl, but that might be an acceptable tradeoff compared to all scripts breaking immediately.
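The compatibility mode proposed above could look roughly like the following sketch: warn loudly, then delegate to a separately installed `kustomize` binary. The function names, warning text, and fallback behavior are all illustrative assumptions for discussion, not kubectl's actual implementation (which is in Go, not Python).

```python
import shutil
import subprocess
import sys

# Hypothetical shim: instead of linking kustomize at compile time,
# `kubectl kustomize <dir>` would print a deprecation warning and
# shell out to the standalone `kustomize` binary.
DEPRECATION_WARNING = (
    "WARNING: kubectl's built-in kustomize support is deprecated; "
    "delegating to the standalone `kustomize` binary."
)

def build_delegate_argv(directory):
    """Return the command line the compatibility shim would run."""
    # Prefer an installed kustomize; fall back to the bare name so the
    # eventual exec produces a clear "not found" error for the user.
    path = shutil.which("kustomize") or "kustomize"
    return [path, "build", directory]

def kustomize_compat(directory="."):
    """Warn, then delegate to the external kustomize binary."""
    print(DEPRECATION_WARNING, file=sys.stderr)
    try:
        # Stream the delegated command's output as the built-in would have.
        return subprocess.run(build_delegate_argv(directory)).returncode
    except FileNotFoundError:
        print(
            "kustomize binary not found on PATH; install it alongside "
            "kubectl to keep `kubectl kustomize` working.",
            file=sys.stderr,
        )
        return 1

# Example: the command the shim would execute for an overlay directory.
print(build_delegate_argv("overlays/prod"))
```

The key property is that the shim's only coupling to kustomize is the binary's CLI, so kustomize's Go dependencies drop out of kubectl entirely while the user-facing command keeps working for as long as the shim is kept around.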
Hi! I'm currently subproject lead for kustomize. I have some comments on the KEP:
I feel we could align the kustomize release cycle with the one Kubernetes is using. As I recall, the current release cycle is irregular simply because we never discussed whether aligning it was necessary.
I agree with @liggitt's opinion. As I recall, I haven't noticed any related issues, and I try to clean up dependencies whenever I can.
I completely agree with your opinion, so I can get behind the idea of removing kustomize from kubectl to improve the maintainability of both projects.
Hi @soltysh, by this comment, I am assuming that this enhancement does not need any updates to the Docs. Please correct me if I am wrong. If it does indeed need documentation updates, please follow the steps here to open a PR against dev-1.31 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday June 27, 2024 18:00 PDT. NB: Doc updates are different from blog posts. |
That is correct. |
Hey again @soltysh 👋, 1.31 Enhancements team here. Just checking in as we approach code freeze at 02:00 UTC Wednesday 24th July 2024 / 19:00 PDT Tuesday 23rd July 2024. Here's where this enhancement currently stands:
Regarding this enhancement, it appears that there are currently no pull requests in the k/k repository related to it. For this KEP, we would need to do the following:
If you anticipate missing code freeze, you can file an exception request in advance. The status of this enhancement is marked as
Enhancement Description

- (k/enhancements) update PR(s): KEP-4706: Deprecate and remove kustomize from kubectl #4712
- (k/k) update PR(s):
- (k/website) update PR(s): WIP: kubectl kustomize deprecation website#46868

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.