etcd operator working group #7917
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: ahrtr. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Force-pushed from 4d94d31 to 8d79f06.
We need to finalize:
Force-pushed from d90e81c to 39a4a74.
Force-pushed from 39a4a74 to 4176e7d.
Added some suggestions.
Suggested replacing "&" with "and" for more formal language.
Force-pushed from 4176e7d to d0a44a7.
LGTM, awesome to see renewed interest in etcd-operator. For me this falls under making "etcd simple to operate", as proposed by the SIG etcd vision, making it a worthy investment.
Force-pushed from d0a44a7 to d9d0301.
/hold I am going to send an email "WG-Creation-Request: WG etcd-operator" to [email protected] in the next 1–2 weeks.
Overall this looks good, thanks for drafting @ahrtr. A few thoughts below.
Force-pushed from bf8114e to df89c33.
sigs.yaml (outdated)
tz: PT (Pacific Time)
frequency: bi-weekly
url: https://zoom.us/my/cncfetcdproject
archive_url: provide-a-google-doc
@jberkus could you please create a shared Google Doc in the Kubernetes workspace, and grant edit permission to both etcd-dev and cluster lifecycle?
what's the name/format of the doc? Is this for meeting notes, or something else?
I think we just need a Google Doc for meeting notes.
Bit of a holdup with Google Drive permissions. I'll get this created within the next couple of days.
See request here: #7937
Thanks
/hold cancel
Force-pushed from df89c33 to 2fbcf88.
archive_url: provide-a-google-doc
recordings_url: TBD
contact:
  slack: wg-etcd-operator
@jberkus can we have a new slack channel or reuse sig-etcd?
Up to SIG etcd to decide whether it would be too noisy. A new channel seems better to me.
IIUC, all these not-yet-existing bits like the recording URL and Slack channel can be left TBD or empty and filled in later.
Applied "TBD", thx
we can create a new slack channel if we want.
yes, please.
so we'll get the channel once Steering approves the new WG.
The channel name will be #wg-etcd-operator
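Putting the thread's decisions together, the relevant sigs.yaml stanza might end up looking roughly like the sketch below. This is only an illustration of where the discussed values land, not the actual file contents: the description, day, and time are placeholders, and the archive URL stays TBD until the Google Doc exists.

```yaml
meetings:
  - description: WG etcd Operator Meeting   # placeholder wording
    day: TBD                                # not yet decided in the thread
    time: TBD
    tz: PT (Pacific Time)
    frequency: bi-weekly
    url: https://zoom.us/my/cncfetcdproject
    archive_url: TBD                        # Google Doc for meeting notes, once created
    recordings_url: TBD
contact:
  slack: wg-etcd-operator                   # channel created after Steering approves the WG
```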
Force-pushed from 154d95f to 41b3383.
- Bootstrap a project "etcd-operator" owned by SIG etcd which resides in the etcd-io or kubernetes-sigs GitHub orgs.
- Review existing etcd operators to see if any could be forked or referenced to advance the project.
- Discuss and design the core reconciliation workflow, and potentially provide a proof of concept (PoC).
- Figure out how to get resources for subsequent dev/test, e.g. AWS S3.
Not sure if this was already discussed: would a possible integration with Cluster API in SIG Cluster Lifecycle also be in scope? It can be as little as "it works as an add-on" or as a provider of sorts. It'd be great to keep it in mind for native support of external etcd with kubeadm.
It'd be great to just keep it in mind for native support of external etcd w/ kubeadm
Thx for the comment. It's already been discussed. It might be a potential long-term goal, but it's not in scope for now; the intention is to keep the project as simple as possible initially. This is also clarified in the "Out of scope" section (see below):
### Out of scope
- Manage etcd clusters running within non-Kubernetes environments.
- Manage etcd clusters which are used as the storage backend of a host (non-nested) kube-apiserver.
I think CAPI would be doing neither of those. CAPI would be using it roughly like any other application.
We would host etcd on one Kubernetes cluster and the apiserver of another cluster would use it as backend. So no chicken-egg problems I think
Thx for the info.
Seems like the use case is similar to the Hosted Control Plane.
The etcd cluster, which is managed by the etcd-operator, will run as Pods within a Kubernetes environment, treated like any other typical application. I think we are open to any use cases on top of this base.
Regarding integrating with Cluster API to support the use case of external etcd with kubeadm, we need the Cluster API maintainers' help to make it work, and we are open to including it in the roadmap (please see the "In Scope" section) of the subproject.
I think it's the responsibility of the ControlPlane provider to deploy etcd, not the CAPI manager itself. That's how we do it in the k0smotron ControlPlane provider; other implementations do more or less the same.
Hi! The Aenix etcd-operator team is here. We have multiple regular contributors to the project and weekly meetings established. We would like to participate in this project and are open to donating our etcd-operator code here (if applicable). We started this initiative to write a fully community-driven etcd-operator a few months ago and informed sig-etcd about our intention. I am surprised that nobody mentioned us in this issue. Related issues:
How can we ensure this process will not run without our team? /cc @sircthulhu @aobort @sergeyshevch @lllamnyp @Kirill-Garbar @Uburro
@kvaps: GitHub didn't allow me to request PR reviews from the following users: lllamnyp, Kirill-Garbar, Uburro, sircthulhu, aobort. Note that only Kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 41b3383 to 900c7fb.
Signed-off-by: Benjamin Wang <[email protected]>
Force-pushed from 900c7fb to fc6e7df.
As mentioned in #7917 (comment), anyone is welcome to participate; we definitely need community help on this project. Regarding the decision to pick an existing project or start from scratch, it's up to the sig-etcd and sig-cluster-lifecycle leads.
@ahrtr thank you. Is there any planned schedule or meeting regarding this project?
We need to wait for the Kubernetes Steering Committee to approve the WG creation request. Once we get approval, we will have regular WG meetings, e.g. bi-weekly.
Initial PR to create wg-etcd-operator.
/committee steering
Adding stakeholder SIGs
/sig etcd
/sig cluster-lifecycle
cc @jmhbnz @jberkus @serathius @wenjiaswe @fabriziopandini @hakman @neolit123 @justinsb @vincepri