[WIP] Update VM based templates with private IP and update Tiltfile #5248
Conversation
Skipping CI for Draft Pull Request.
Force-pushed from 7f45e81 to 8718474
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #5248      +/-   ##
==========================================
- Coverage   53.00%   52.53%   -0.47%
==========================================
  Files         272      272
  Lines       29429    29433       +4
==========================================
- Hits        15598    15463     -135
- Misses      13027    13167     +140
+ Partials      804      803       -1

☔ View full report in Codecov by Sentry.
Force-pushed from 8eed73c to 40a2e9d
Force-pushed from 40a2e9d to 65119d8
Force-pushed from 65119d8 to 0fde5aa
This seems like a reasonable approach. Should we have somewhere to document which CIDR ranges are "reserved" by which flavors? If we add a new flavor, we'd want to make sure it doesn't overlap, and it would be better not to have that rely just on maintainers' memories.
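One lightweight way to enforce this rather than relying on maintainers' memories would be a small CI check that asserts the reserved ranges do not overlap. A minimal sketch, with hypothetical flavor names and CIDR values standing in for whatever the templates actually reserve:

```python
# Hypothetical per-flavor CIDR reservations; in a real check these
# would be read from the flavor templates, not hard-coded here.
from ipaddress import ip_network
from itertools import combinations

FLAVOR_CIDRS = {
    "default": "10.0.0.0/16",
    "private": "10.1.0.0/16",
    "windows": "10.2.0.0/16",
}

def find_overlaps(cidrs):
    """Return pairs of flavor names whose reserved CIDR ranges overlap."""
    nets = {name: ip_network(cidr) for name, cidr in cidrs.items()}
    return [
        (a, b)
        for (a, net_a), (b, net_b) in combinations(nets.items(), 2)
        if net_a.overlaps(net_b)
    ]

if __name__ == "__main__":
    overlaps = find_overlaps(FLAVOR_CIDRS)
    assert not overlaps, f"overlapping flavor CIDRs: {overlaps}"
```

A check like this would fail loudly in CI the moment a new flavor reuses an already-reserved range, while doubling as the documentation of which ranges are taken.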
local_resource(
    name = "delete-all-workload-clusters",
-   cmd = kubectl_cmd + " delete clusters --all --wait=false",
+   cmd = ["sh", "-ec", delete_all_workload_clusters],
Do we need to run this command with `sh` explicitly? It was just `kubectl` directly before, which seems more portable (although I doubt anyone is running `make tilt-up` on Windows).
I think we do need `sh` here. We are updating `delete_all_workload_clusters` to have a bunch of delete-vnet-peering commands added in this function: https://github.com/kubernetes-sigs/cluster-api-provider-azure/pull/5248/files#diff-c2ee8653e1d6b85f0aadf87cd438a9250806c052877248442be4d434cbc52425R522-R532
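For context on why the wrapper matters: a list-form `cmd` is executed as an argv array, so shell syntax such as `;` between commands is only interpreted when the script is handed to an explicit `sh -ec`. A rough illustration of the same distinction using Python's `subprocess` with placeholder `echo` commands (not the actual Tiltfile ones):

```python
import subprocess

# A string joining several commands with ';' only works when a shell
# parses it; as a single argv element, ';' has no special meaning.
script = "echo first; echo second"

# Executed via a shell: ';' separates two commands.
out = subprocess.run(
    ["sh", "-ec", script], capture_output=True, text=True, check=True
).stdout
assert out == "first\nsecond\n"

# Executed directly (argv style): the whole string is one literal
# argument to echo, so ';' appears verbatim in the output.
out = subprocess.run(
    ["echo", script], capture_output=True, text=True, check=True
).stdout
assert out == "echo first; echo second\n"
```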
# Copy the kubeadm configmap to the calico-system namespace.
# This is a workaround needed for the calico-node-windows daemonset to be able to run in the calico-system namespace.
if "windows" in flavor_name:
    flavor_cmd += "; until " + kubectl_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig get configmap kubeadm-config --namespace=kube-system > /dev/null 2>&1; do sleep 5; done"
    flavor_cmd += "; " + kubectl_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig create namespace calico-system --dry-run=client -o yaml | " + kubectl_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig apply -f -; " + kubectl_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig get configmap kubeadm-config --namespace=kube-system -o yaml | sed 's/namespace: kube-system/namespace: calico-system/' | " + kubectl_cmd + " --kubeconfig ./${CLUSTER_NAME}.kubeconfig apply -f -"
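The `until ...; do sleep 5; done` fragment above is a standard poll-until-ready loop. The same pattern, sketched in Python with a hypothetical `is_ready` callable standing in for the `kubectl get configmap` probe:

```python
import time

def wait_until(check, interval=5.0, timeout=300.0):
    """Poll check() until it returns True, mirroring the shell
    'until <cmd>; do sleep 5; done' loop (plus a safety timeout)."""
    deadline = time.monotonic() + timeout
    while not check():
        if time.monotonic() > deadline:
            raise TimeoutError("condition not met before timeout")
        time.sleep(interval)

# Hypothetical probe: pretend the configmap appears on the third poll.
attempts = {"n": 0}
def is_ready():
    attempts["n"] += 1
    return attempts["n"] >= 3

wait_until(is_ready, interval=0.01)
assert attempts["n"] == 3
```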
# TODO: no one is clearing MGMT_CLUSTER_NAME when using KIND, so this is always going to be true. Improve this logic. |
Could we create a GitHub issue to improve this? It's easy to forget about `// TODO` comments.
Yes, good call out, will update the epic with it.
Created #5289
Force-pushed from b5e35e0 to dfa6e38
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
Let me break this PR into multiple smaller PRs updating each of the templates. I will ship this PR with changes to the default template and address the comments.
Force-pushed from 6cb4ead to d4fcec6
Where would be a good place to put this?
@nawazkh: The following tests failed; say `/retest` to rerun all failed tests, or `/retest-required` to rerun all mandatory failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Sorry for the churn @mboersma, I need to close out this PR and open a new one. I will address your comments on there.
/close
@nawazkh: Closed this PR in response to the `/close` command above.
Updated https://github.com/kubernetes-sigs/cluster-api-provider-azure/pull/5288/files#diff-e7ce6ca4dcc711afa1a2e1afdb50d9c9902749a29f4af50cde1a12f2e51f4858 with the CIDR matrix.
What type of PR is this?
/kind feature

What this PR does / why we need it:

Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged):
Fixes #5261

Special notes for your reviewer:

TODOs:

Release note: