💡 How to access ETCD from mgmt cluster? #75
Comments
cc @zawachte
The docs say we can use the apiserver as a proxy to access nodes!? Anyone have more details on this? Per the Kubernetes docs, the apiserver proxy "is a bastion built into the apiserver".
I'm not familiar with the apiserver proxy, so I can't shed any light on it. The docs are quite thin: they mention it can be used to access nodes but don't show you how.
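For what it's worth, the proxy paths the docs allude to look like this (standard Kubernetes API paths, shown here for illustration; whether they help for etcd is unclear, since the apiserver proxy works at the HTTP level while etcd speaks gRPC over TLS):

```sh
# Proxy an HTTP request to a service port through the apiserver:
kubectl get --raw "/api/v1/namespaces/kube-system/services/etcd:2379/proxy/"

# Proxy an HTTP request to a node's kubelet through the apiserver:
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/healthz"
```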
I think we should be able to manually create an Endpoints resource and a Service resource for etcd, which should allow us to do the same tunneling (port-forward) that the kubeadm provider does. This is similar to how prometheus-operator sets up Prometheus to scrape kubelet/cadvisor metrics.
Ah, so something like a selector-less Service plus a hand-created Endpoints object pointing at the node?
Given you can point Endpoints subsets at nodes, this might be what the docs meant when they say you can use API proxies to communicate with nodes. Well, you can, indirectly, via a Service, if you create one by hand like that. The docs seem a bit unclear if that's what they meant!
Tested the solution (Service + Endpoints) @zawachte provided on my k3s environment. The YAML is like below.
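Roughly like this (a minimal sketch; the `etcd` name, `kube-system` namespace, and `10.0.0.11` node IP are placeholders, and each control-plane node's IP would need an entry):

```yaml
# Selector-less Service: Kubernetes will not manage Endpoints for it,
# so we point the Endpoints at the control-plane node(s) by hand.
apiVersion: v1
kind: Service
metadata:
  name: etcd
  namespace: kube-system
spec:
  ports:
    - name: client
      port: 2379
      targetPort: 2379
---
apiVersion: v1
kind: Endpoints
metadata:
  name: etcd          # must match the Service name
  namespace: kube-system
subsets:
  - addresses:
      - ip: 10.0.0.11  # placeholder: control-plane node IP
    ports:
      - name: client
        port: 2379
```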
Yes, a pod was able to access the host etcd by hitting the Service's ClusterIP. However, this Service is not allowed for port-forward; port-forwarding fails with an error. I searched a bit, and someone gave an explanation and solution: https://stackoverflow.com/questions/56870808/access-external-database-resource-wrapped-in-service-without-selector. In short: port-forwarding a Service requires a backing pod, and that is still true now. The solution so far is to create a pod to act as a proxy.
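For reference, the failing attempt is just the usual Service port-forward (assuming the manifests above):

```sh
# Fails: the Service has no selector, so there is no pod for
# kubectl to attach the forwarded connection to.
kubectl -n kube-system port-forward svc/etcd 2379:2379
```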
Tested deploying a pod as an etcd proxy and using port-forwarding to connect: it works! The proxy YAML is as below.
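Roughly like this (a sketch; the `alpine/socat` image and the control-plane label are assumptions about the environment, and socat is just one way to relay TCP):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: etcd-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: etcd-proxy
  template:
    metadata:
      labels:
        app: etcd-proxy
    spec:
      # Run only on control-plane nodes, where etcd listens on the host.
      nodeSelector:
        node-role.kubernetes.io/control-plane: "true"
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: etcd-proxy
          image: alpine/socat
          env:
            # Downward API: the IP of the node this pod landed on.
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          # Relay every connection on pod port 2379 to the host's 2379.
          args:
            - tcp-listen:2379,fork,reuseaddr
            - tcp-connect:$(HOST_IP):2379
          ports:
            - containerPort: 2379
```

Since etcd speaks gRPC over TLS, the proxy only has to relay raw TCP bytes; TLS is still terminated between the client and etcd itself.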
We can have the k3s bootstrap provider install this DaemonSet. What do you think? If it looks good, I will proceed to work on a PR.
I am good to move forward with this approach for now, though I think this should be the job of the control-plane provider, not bootstrap. We only support etcd+k3s today, but we need to add this in a way where we have an etcd mode that installs this DaemonSet and manages etcd, while also leaving the door open for us to develop other modes later.
There are several things that need to be done. I'll try to break the work into several PRs so that it is easier to review and test.
k3s etcd is not a static pod; it runs inside the k3s host process, which means we cannot use Kubernetes port-forward to access the etcd endpoint. I want to start this thread to discuss how we can access the k3s etcd endpoint from the CAPI mgmt cluster.
Below are some ideas in my mind:

Option 1: direct connection. This is the most straightforward way. A CAPI Machine has a `MachineAddresses` field which exposes the VM's address info, so the mgmt cluster can open a connection to a machine address on port 2379 to reach the etcd client endpoints. The big drawback: it has more network requirements, since the network must be routable between the mgmt cluster and the target cluster VMs. Besides, it exposes more attack surface, which means users may need to take more effort to keep the system secure.
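As a sketch of what option 1 would look like (the cert paths assume k3s defaults, and the client certs would hypothetically have to be copied off a control-plane node first):

```sh
# Direct connection from the mgmt side to a machine address.
# k3s keeps its etcd client certs on the node under
# /var/lib/rancher/k3s/server/tls/etcd/ (server-ca.crt, client.crt, client.key).
etcdctl --endpoints=https://<machine-address>:2379 \
  --cacert=server-ca.crt \
  --cert=client.crt \
  --key=client.key \
  endpoint health
```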
Option 2: port-forward via a proxy DaemonSet. We deploy a DaemonSet to the target cluster's control-plane nodes that does one simple job: forward traffic on port 2379 to its host's port 2379. The mgmt cluster can then use the normal Kubernetes port-forward to reach a DaemonSet pod, which forwards on to the host etcd endpoint, as sketched below. Theoretically this should work, but I haven't tested it yet. The drawback is also obvious: it needs pods deployed on the target cluster, which is not very user-friendly.
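The mgmt-side access would then be the familiar port-forward dance, something like this (the pod name is a placeholder; the provider itself would go through client-go's port-forward machinery rather than kubectl):

```sh
# Forward local 2379 to the proxy pod, then talk to etcd via localhost.
kubectl --kubeconfig target-cluster.kubeconfig -n kube-system \
  port-forward pod/etcd-proxy-abcde 2379:2379
```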
Although the k3s control-plane provider can still work without etcd access, it's not safe; we need to fix this issue to make the project production-ready. Please comment below to share your ideas ⚡