This repository has been archived by the owner on Jul 25, 2022. It is now read-only.

gardenctl v2 (overhaul it, make it safer, and more extensible) #499

Open
vlerenc opened this issue Dec 19, 2020 · 25 comments
Labels
component/gardenctl Gardener CLI kind/epic Large multi-story topic lifecycle/rotten Nobody worked on this for 12 months (final aging stage)

Comments

@vlerenc
Contributor

vlerenc commented Dec 19, 2020

Motivation

gardenctl v1 was written well before Gardener extensibility and kubectl plugins. It also handles kubeconfigs laxly: it uses admin kubeconfigs and doesn't replace them with OIDC kubeconfigs where possible. Therefore, we'd like to suggest a v2 overhaul of gardenctl. Furthermore, there was usability feedback regarding targeting that we want to address with v2 as well.

Proposal

  • Use kubectl plugins, as this speaks to the community and already offers an extension concept that we need for IaaS-specific subcommands.
    • One (target) or multiple (garden, seed, project, shoot) plugins that could/would deal with managing the kubeconfigs, i.e. what is called targeting in gardenctl v1.
    • Multiple plugins for the different general gardenctl commands, e.g. logs, shell, etc.
    • More plugins that are infrastructure specific, e.g. ssh, resources, orphans, the various CLIs, etc.
    • Optional, not reflected here: Teams can create proprietary plugins that can bootstrap their gardenctl plugin configuration from GitHub, Vault or wherever they hold the garden configurations.
  • The gardenctl config should be as minimal as possible:
      - name: dev
        kubeconfig: ~/.kube/...
      - name: qual
        kubeconfig: ~/.kube/...
      - name: prod
        kubeconfig: ~/.kube/...
      ...
    
    gardenctl should cache information such as domains and identities in a local gardenctl config folder. That will be useful for smart and context-aware targeting (see below).
  • Easy setup/installation via brew/krew.
  • Targeting would use shell hooks (e.g. $PROMPT_COMMAND) to inject the new kubeconfig into the parent process (the shell), so that one can directly work with the cluster with standard tools (such as, again, kubectl and others); see the first sketch after this list.
  • Targeting should cater to these use cases, so that most users can and want to switch over to v2, which will be safer:
    • Hierarchical targeting of a seed or shoot with garden, then seed or project, then shoot "steps" (classical gardenctl approach)
    • Direct targeting of a seed or shoot with garden/project/seed, e.g. gardenctl shoot -g prod -p core funny-cluster, which people can then put behind their own shell aliases, so that switching over to v2 becomes simple
    • Domain targeting of a shoot with either its API server URL or its Gardener Dashboard URL shall be possible for quick targeting after either piece of information was made available to the operator (to cater to another batch of operators that have built themselves similar tooling)
    • Fuzzy targeting of a seed or shoot (including Levenshtein distance), selecting from all available seeds or shoots, shrinking down the list while typing (to cater to another batch of operators that have built themselves similar tooling)
    • History targeting, either by picking the previous kubeconfig, the Nth last kubeconfig, or selecting from a list of recently targeted clusters
  • Targeting should avoid security hazards by the following means:
    • Use safe OIDC kubeconfigs for garden and seed clusters.
    • If this is not possible/available, have the shoot kubeconfigs encrypted and decrypted on-the-fly with personal credentials, e.g. the user's GPG key. This can be done by providing a binary/command that gets injected into the kubeconfig (like it is the case for the oidc-plugin, see https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins), which gets invoked and can either directly access our infrastructure (slow) or decrypt a previously gardenctl-encrypted token on-the-fly; see the second sketch after this list.
    • To avoid having the admin kubeconfigs on disk (even if encrypted and placed into a RAM disk that gets cleaned up), let's implement a controller in the seed that generates personalised kubeconfigs with admin service accounts in the shoot that get removed again automatically after 8h or whatever. This way, operators would have auditable and personalised kubeconfigs that last only for a given time.
  • The cleanup for the ssh and shell commands shall not happen within the CLI (in case it panics, loses connectivity, etc.), but be executed by another controller in the seed that always takes care (safely) of the cleanup.
  • Ideally, the user's node credentials for ssh are created by the above seed controller on-the-fly as well, then fetched by the node and injected into the sshd configuration, and removed again automatically after 8h or whatever. This way, operators would have auditable and personalised ssh credentials that last only for a given time.
  • It should be possible to open gardenctl up to end users as well, e.g.:
    • Operators and end users:
      • Very useful: shell to schedule a regular or privileged pod in a cluster (on any node or a particular node), possibly not necessary if solutions like https://github.com/kvaps/kubectl-node-shell could replace it (however, that would require supplying a configurable image like our ops-toolbelt image)
      • Most useful: aliyun|aws|az|gcloud|openstack to invoke the CLIs, so we should continue to support these
      • Most annoying if not available, so very useful to have, though less often required: ssh into a node, but it should be reimplemented using the SDKs directly instead of invoking the CLIs
      • Useful: diag (merge with orphan) to run a self-diagnostic on a shoot (much like the robot does on /diag, see below), but it should work for operators (full diagnosis, as they have access to the seed clusters, which the command should silently obtain if possible) and end users (who only have access to the shoot cluster)
    • Operators only:
      • Very useful: logs fetching the logs for a given time window and component (not individual pod) from loki, which helps with rescheduled or deleted pod logs tremendously and which we therefore should continue to support
      • Useful: info to get landscape information, though this information should also be available in the garden monitoring
      • Nice to have: ls to list gardens, seeds, projects, shoots, and issues
      • Nice to have: download/terraform to download/execute everything that's necessary for the infrastructure bring-up if the extension is using the default TF support in Gardener
  • We decided to abandon the Python-based robot shoot probes and instead have the probes in gardenctl implemented in Go using the Kubernetes client and possibly native SDKs; the latter should be extensible as robot shoot probes are. Like robot shoot probes, the probes should not only list resources (as is the case today in gardenctl with the diag command, which is not helpful), but check for actual issues and assess the situation with "severity", "description", "consequence", and "recommendation" (see robot shoot probes, e.g. web hooks or PDBs).
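To make the shell-hook idea above more concrete, here is a minimal bash sketch. It assumes gardenctl writes the path of the currently targeted kubeconfig to a well-known file; the function name and file location are illustrative assumptions, not the actual implementation.

__gardenctl_hook() {
  # Hypothetical file that gardenctl would update on every (re)targeting.
  local target="$HOME/.garden/current-kubeconfig"
  [ -f "$target" ] && export KUBECONFIG="$(cat "$target")"
}
# Run the hook before every prompt, preserving any existing PROMPT_COMMAND.
PROMPT_COMMAND="__gardenctl_hook${PROMPT_COMMAND:+;$PROMPT_COMMAND}"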
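Similarly, a sketch of the on-the-fly decryption idea: the kubeconfig's exec section would point to a small helper like the one below, which decrypts a previously gardenctl-encrypted token with the user's GPG key and prints the ExecCredential object that client-go expects from a credential plugin. The helper itself, the token path, and the use of gpg are assumptions for illustration only.

#!/usr/bin/env sh
# Hypothetical helper referenced from the kubeconfig's users[].user.exec section.
TOKEN="$(gpg --quiet --decrypt "$HOME/.garden/cache/funny-cluster.token.gpg")"
cat <<EOF
{
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "kind": "ExecCredential",
  "status": { "token": "$TOKEN" }
}
EOF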

Note: Ideally, this description is moved to a docs PR, something like a GEP, to facilitate collaboration and detailing it out.

Targeting

Hierarchical, direct, domain, fuzzy, and history targeting should be possible. Here are some examples of how this could look. Some expressions are lengthy, but if they offer all options, they can be put behind the personal shell aliases (or functions) that operators already use today, which should help with broad adoption; broad adoption is in our interest to remove the security hazards of v1 with v2. The targeting should be smart and context-aware, e.g. if the currently active kubeconfig is for a seed or shoot in the prod landscape, that's your targeted/context garden. If you then target a project or another seed or shoot, it should automatically happen within this garden cluster. It is not yet clear how fluent this should be, e.g. if a cluster is not found in one garden, are then really all gardens included in a fuzzy search?

Hierarchical Targeting

kubectl garden prod    # targets the prod garden, if it matches, otherwise goes fuzzy across all gardens (?)
kubectl seed aws-001   # then targets a particular seed in that garden, if it matches, otherwise goes fuzzy across all gardens (?)
kubectl project core   # then targets a particular project in that garden, if it matches, otherwise goes fuzzy across all gardens (?)
kubectl shoot funny    # then targets a particular shoot in that garden, if it matches, otherwise goes fuzzy across all gardens (?)

Note: Targeting a project targets the backing namespace in the corresponding garden cluster.

Direct Targeting

kubectl garden prod                     # targets the prod garden, if it matches, otherwise goes fuzzy across all gardens  (?)
kubectl seed -g prod aws-001            # targets a particular seed in that garden, if it matches, otherwise goes fuzzy across this particular garden, if it matches, otherwise goes fuzzy across all gardens (?)
kubectl project -g prod core            # targets a particular project in that garden, if it matches, otherwise goes fuzzy across this particular garden, if it matches, otherwise goes fuzzy across all gardens (?)
kubectl shoot -g canary -p core funny   # targets a particular shoot in that project of that garden, if it matches, otherwise goes fuzzy across this particular project, if it matches, otherwise goes fuzzy across this particular garden, if it matches, otherwise goes fuzzy across all gardens (?)

Note: Targeting a project targets the backing namespace in the corresponding garden cluster.

Domain Targeting

Domain targeting extracts the domain from an API server or Gardener Dashboard URL and matches it against the domain secrets in the garden namespaces of all configured gardens (pre-fetched, of course), or accesses the cluster-identity configmap in the kube-system namespace of the shoot if the shoot uses a custom, i.e. unknown, domain (slower, therefore only the second option); see the sketch below the examples.

kubectl [seed|shoot] https://api.funny.core.shoot.prod.com                                  # target seed or shoot by API server URL
kubectl [seed|shoot] https://dashboard.garden.prod.com/namespace/garden-core/shoots/funny   # target seed or shoot by Gardener Dashboard URL
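For the custom-domain fallback, the lookup could boil down to something like the following sketch. It assumes the shoot kubeconfig is already at hand and that the configmap's data key is cluster-identity; this mirrors the cluster-identity approach mentioned later in this thread.

# Read the shoot's cluster identity ...
kubectl --kubeconfig "$SHOOT_KUBECONFIG" -n kube-system \
  get configmap cluster-identity -o jsonpath='{.data.cluster-identity}'
# ... and match it against the identities cached for the configured gardens.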

Fuzzy Targeting

If a cluster is targeted via one of the above means and the selection is unambiguous, access is fastest. If, however, fuzzy search is directly invoked or a selection is ambiguous, a cluster metadata cache is accessed and refreshed in the background (possibly even while typing). If the cache is empty, it first needs to be built up/refreshed, which usually takes a few seconds. However, only the list of seeds and shoots is retrieved and nothing else, especially no sensitive data like kubeconfigs (only metadata); see the sketch below the examples.

kubectl [seed|shoot]          # show all available seed or shoot clusters and the history of previously targeted clusters for selection
kubectl [seed|shoot] clustr   # show all available seed or shoot clusters that match the cluster name, here `clustr` (incomplete or partly misspelled), if there is no exact match or there are multiple options
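The cache refresh itself only needs metadata from the garden clusters; a rough equivalent in plain kubectl (assuming a garden kubeconfig in $GARDEN_KUBECONFIG) would be:

# List seed and shoot names/namespaces only; no kubeconfigs or other secrets are fetched.
kubectl --kubeconfig "$GARDEN_KUBECONFIG" get seeds -o custom-columns=NAME:.metadata.name
kubectl --kubeconfig "$GARDEN_KUBECONFIG" get shoots -A \
  -o custom-columns=PROJECT:.metadata.namespace,NAME:.metadata.name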

History Targeting

Requires something like a gardenctl cache of the last targeted clusters by type (seed or shoot).

kubectl [seed|shoot]          # show the history of previously targeted seed or shoot clusters and all available seed or shoot clusters for selection
kubectl [seed|shoot] -        # swap the current and the previous seed or shoot cluster in the history, just like `cd -` lets you switch back and forth between two folders
kubectl [seed|shoot] HEAD     # target the last targeted cluster
kubectl [seed|shoot] HEAD^1   # target the cluster targeted before the last one
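Since operators will wrap the most common cases in personal aliases anyway (see the adoption argument above), history targeting composes naturally with that; the alias names below are purely illustrative.

alias ksb='kubectl shoot -'        # "shoot back": return to the previously targeted shoot
alias ks1='kubectl shoot HEAD^1'   # target the shoot targeted before the last one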
@vlerenc vlerenc added kind/epic Large multi-story topic component/gardenctl Gardener CLI labels Dec 19, 2020
@gardener-attic gardener-attic deleted a comment from gardener-robot Dec 19, 2020
@vlerenc vlerenc changed the title Overhaul gardenctl, make it safer and more extensible gardenctl v2 (overhaul it, make it safer, and more extensible) Dec 19, 2020
@timebertt
Contributor

Thanks for summarizing, @vlerenc!

Here are some additional thoughts from my side. They don't have to be discussed here, just wanted to put them on the record somewhere for future discussions.

  • Most useful: aliyun|aws|az|gcloud|openstack to invoke the CLIs, so we should continue to support these

I think we have multiple options for realizing this on the executable/plugin level:

  • one binary/plugin per cloudprovider, each using a central library from gardenctl for the smart detection features, access management and so on
  • no full executable binaries, but rather a clean Go interface that cloudprovider plugins can implement as a slim Go plugin or some form of gRPC service. gardenctl would then again manage some symlinks/scripts for making the kubectl plugins available and call out to the implementations of the "extension API"

Probably the decision here is about "simple with overhead vs. complex with clean abstraction".

Also, I'm still struggling with how the ssh feature could be integrated:

  • one option would be to implement provider-specifics as yet another set of functions in the cloudprovider plugin interface
  • the other option would be to have ssh as a subcommand of each provider plugin, with gardenctl doing some smart command rewrites according to the targeted cluster. Though, this ssh subcommand could potentially clash with other native CLI subcommands

Probably the same questions apply to something like orphan, maybe also diag if it includes provider-specific checks. There, the best UX would also be to have a top-level plugin (k orphan) which can call out to provider-specific implementations.

One (target) or multiple (garden, seed, project, shoot) plugins that could/would deal with managing the kubeconfigs

I thought about this a bit again, and I personally would actually like k garden, k shoot, etc. and use them in my normal workflow. I think it would be great UX and also a nice signal to the community if we integrate with kubectl so tightly.
So I vote for multiple plugins for the targeting mechanism. Probably we can use some form of symlinks/wrapper scripts that are managed by gardenctl itself (no need for extra binaries).

Also, I would like to see the controlplane target, which targets a seed with a specific Shoot namespace selected.

Hierarchical Targeting

One thing I would also like to see as a targeting mechanism is jumping between clusters/targets in the hierarchy.
E.g. I'm targeting a shoot as a first investigation step, then I want to check the control plane and would therefore issue one single command in order to target the shoot's namespace in the hosting seed, e.g. k controlplane or k seed with some additional flag/smart preselection. Afterwards, I would like to jump back to the shoot. This could of course be realized via history targeting, but could also be some additional flag or smart preselection for k shoot.
This mechanism could also be applied to other target kinds like projects, i.e. jumping with k project to the containing project of a Shoot.

@andrei-panov
Contributor

Hi,
I'm a big fan of the command line, but sometimes there is too much to type even with autocompletion.
During my contribution to gardenctl I thought about starting gardenctl locally in a kind of API server mode and exposing the available functionality via REST, to make it possible to interact via some kind of web-based UI.
What I saw during the DoD hands-on session is that valuable time is mostly spent navigating over different components for observing.
At the moment I don't have the answer whether the Gardener Dashboard should be extended to cover the functionality of gardenctl, or whether gardenctl should be extended in a way that makes it a feature-rich, locally-executed Dashboard.
Definitely, CLI and web UI are two different interfaces, but both can have the same foundation, and we can code it in such a way.

@vlerenc
Contributor Author

vlerenc commented Dec 21, 2020

The best UX would be to have a top-level plugin (k orphan or k diag) which can call out to provider-specific implementations.

@timebertt Yes, from experience with /diag in the bot, I can only agree that we should have only one command that executes a multitude of checks, some of which are IaaS-specific. They should all adhere to the same contract, so that the findings can be uniformly aggregated, assessed, and visualised (json, yaml, table, markdown).

I thought about this a bit again, and I personally would actually like k garden, k shoot, etc. and use them in my normal workflow. I think it would be great UX and also a nice signal to the community if we integrate with kubectl so tightly.
So I vote for multiple plugins for the targeting mechanism. Probably we can use some form of symlinks/wrapper scripts that are managed by gardenctl itself (no need for extra binaries).

Also my favorite, see examples.

Also, I would like to see the controlplane target, which targets a seed with a specific Shoot namespace selected.

That I do not understand. I would still target the shoot and use --control-plane or whatever as an argument to specify that I want to get to the control plane of that shoot cluster, but it's still the shoot I am interested in and that its control plane is on a seed is an implementation detail.

One thing I would also like to see as a targeting mechanism is jumping between clusters/targets in the hierarchy.
E.g. I'm targeting a shoot as a first investigation step, then I want to check the control plane and would therefore issue one single command in order to target the shoot's namespace in the hosting seed, e.g. k controlplane or k seed with some additional flag/smart preselection. Afterwards, I would like to jump back to the shoot. This could of course be realized via history targeting, but could also be some additional flag or smart preselection for k shoot. This mechanism could also be applied to other target kinds like projects, i.e. jumping with k project to the containing project of a Shoot.

Hmm… “seed” mixes things for me. There are two reasons to visit a seed:

  1. Either I want to target a seed and am interested in the seed itself; then I want to be in the role of the overall seed admin and am interested in kube-system/garden/controller-registrations, etc. That's what the seed subcommand would be for in my eyes.
  2. Or, I want to reach the control plane of a shoot and then I want to end up in the control plane namespace of a seed, which is an implementation detail of the shoot, hence seed would be somewhat unintuitive for me.

But I get your point. How about:
kubectl shoot --control-plane
kubectl shoot --cluster, which is the default, i.e. equivalent to kubectl shoot ...

But I am not too happy about this either. Some "switch" functionality would be nice.

Usually though, I am not switching. In most cases I open two panes, one on the control plane and one on the cluster (I have the "coordinates", e.g. the dashboard URL then still in my clipboard).

@timebertt
Contributor

But I get your point. How about:
kubectl shoot --control-plane

Yes, something like this would work for me. And is probably even cleaner semantically...

There are two reasons to visit a seed:

I would add one more reason, which also partly motivates the request for switching between Control Plane and shoot.
Oftentimes I'm analysing some seed-global problem or some issue affecting multiple shoots on one seed, and then I target a Seed, randomly look into some namespaces, and if I find something interesting, I jump to the shoot.
Currently I achieve this by getting the kubecfg secret and directly sourcing it. Though, my tooling then lacks some functionality to jump back...

Usually though, I am not switching. In most cases I open two panes, one on the control plane and one on the cluster (I have the "coordinates", e.g. the dashboard URL then still in my clipboard).

Yes, that's also often the case for me, which brings me to another question, which we haven't covered until now:
Do we want to support some session mechanism?

Use case a: I'm targeting some control plane and now want to analyze control plane and the shoot cluster side-by-side in multiple panes.
If I have the coordinates still in my clipboard, this is easy. Still, I oftentimes don't find myself having the coordinates at hand when doing such things.

Continuing the session by default is dangerous, so we would need some command to continue the last session on demand, meaning targeting the last cluster again. Then operators can directly jump wherever they want from there on.
This can probably be solved by the history targeting mechanisms (k seed - and similar).

Use case b: I'm targeting some control plane and now want to analyze another control plane on the same seed side-by-side in multiple panes.
What I currently do for this is "cloning" my kubeconfig, which allows me to use kubens to have two panes targeting the same cluster but different namespaces.

This can probably be supported either by some session mechanism in gardenctl directly or something like the kubeconfig cloning.
I don't have a good picture for this right now, but I tend to prefer the cloning approach, which has lower thinking/typing overhead and might feel more "natural" when navigating clusters (no need to issue commands for starting/continuing a session, ...)
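For reference, the "cloning" mentioned above essentially boils down to something like the following today (a sketch; the temp-dir handling is illustrative, not the proposed implementation):

# Copy the active kubeconfig into a session-scoped temp location, so that kubens/namespace
# changes in this pane do not leak into other panes using the original file.
CLONE="$(mktemp -d)/kubeconfig.yaml"
cp "$KUBECONFIG" "$CLONE"
export KUBECONFIG="$CLONE"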

@vlerenc
Contributor Author

vlerenc commented Dec 22, 2020

Continuing the session by default is dangerous, so we would need some command to continue the last session on demand, meaning targeting the last cluster again. Then operators can directly jump wherever they want from there on.
This can probably be solved by the history targeting mechanisms (k seed - and similar).

Yes, I know the problem and also hope the history feature might help. Something similar to git was what I was thinking of, but I haven't detailed it out above and chickened out with - only; maybe kubectl shoot HEAD or kubectl shoot HEAD^1 for the previous one. You could combine that with additional parameters like --control-plane. To remain in the example, you target one cluster directly, e.g. with kubectl shoot https://dashboard.garden.prod.com/namespace/garden-core/shoots/funny, and then in another pane you do kubectl shoot --control-plane HEAD to reach its control plane.

In the end, all of us will anyhow create aliases for the most useful cases, but if the CLI supports this generally in a clean way, it would be good.

@vlerenc
Contributor Author

vlerenc commented Dec 22, 2020

Use case B is (maybe) harder and generally another problem. It's also bugging me and my workflows. Generally, I would very much like to have a solution that provides the targeting, but side-effect free. If I target a cluster in one pane and the same in another and then use kubens, I hate its side effects. Is it possible to automatically rectify the situation, something like clone-on-demand?

@timebertt
Contributor

Well, I can imagine different flows for this:

  • k clone to clone the current kubeconfig
  • k shoot|seed|... --clone to target a new cluster but get a cloned/"isolated" kubeconfig
  • option in the plugin config allowing to turn on "clone-on-target" by default

Or even all of them, so everyone can choose the workflow they like.

Personally, I would really like to see support for this, as I heavily use this cloning approach and I guess, other folks will find this useful, too.

@vlerenc
Contributor Author

vlerenc commented Dec 22, 2020

@timebertt Hmm... isn't a general clone difficult? Let's say, your current kubeconfig is one in a local folder backing a git repo or whatever. Where would you physically clone the kubeconfig to? That would only work in combination with your other work, i.e. a session-based temp folder of sorts, right?

The second option, k shoot|seed|... --clone has too much cognitive overhead for me personally. I don't want to concern myself with these things - I want no side effects.

Therefore, I am clearly for option #3. Every time you target something or retarget it, it's another "instance" of the kubeconfig.

The trouble is, this is only because of the namespace-gets-smeared-into-the-kubeconfig problem. :-(

@timebertt
Contributor

Let's say, your current kubeconfig is one in a local folder backing a git repo or whatever. Where would you physically clone the kubeconfig to? That would only work in combination with your other work, i.e. a session-based temp folder of sorts, right?

Yeah, I would never clone it to the current working directory. That's apparently quite dangerous.

Yes, I would employ a mechanism similar to what I currently do with the terminal-session-specific temp dir. We will anyway need some local cache management for the smart targeting, credentials encryption and so on, so this shouldn't be too much overhead.

@vlerenc
Contributor Author

vlerenc commented Dec 22, 2020

@timebertt Hmm... not sure. I like to have a robust solution with only minimal assumptions and integration into "my personal shell environment". The $PROMPT_COMMAND modification I need for the hooks is necessary, but otherwise I would like to keep it to a minimum.

We will anyway need some local cache management for the smart targeting, credentials encryption and so on, so this shouldn't be too much overhead.

What is the "local cache management for the smart targeting"? I am torn between my wish to not have side effects and having a shared/global history, so that I can refer to targetted clusters in new/other shells. Shells have the same issue with their history. What's our take here?

E.g. looking here at shell features such as:

# Avoid duplicates
HISTCONTROL=ignoredups:erasedups
shopt -s histappend

credentials encryption

Hmm... I thought there is no urgent need for a/the tmp folder approach anymore, because of our other measures (OIDC, transient access, local encryption).

Anyways, I am not totally against it. If it can be done nicely, OK. I am just saying that a slim solution would be much appreciated where people can work with the plugins without much ceremony to get it set up first.

@timebertt
Contributor

I was just saying that we will anyway need some local directory structure for caching the topology-detection results and also for storing the kubeconfigs with the encrypted credentials somewhere.
gardenctl should of course manage all of this under the hood without any need for the user to set it up (only requirement should be the hook setup).

And if that structure and mechanisms are already in place, we can also add a temp directory to that, where we can store cloned kubeconfigs.
gardenctl can clean it up every now and then on some invocations. It's not strictly required anymore, as you correctly pointed out.
Though, I would still try to avoid keeping unneeded kubeconfigs on disk for too long and spamming the user's disk, and just regularly clean the cache (also the "non-cloned" kubeconfigs).

@vlerenc
Contributor Author

vlerenc commented Dec 23, 2020

Sure, thanks @timebertt.

Though, I would still try to avoid keeping unneeded kubeconfigs on disk for too long and spamming the user's disk, and just regularly clean the cache (also the "non-cloned" kubeconfigs).

👍

@vlerenc
Contributor Author

vlerenc commented Jan 19, 2021

@mvladev You were mentioning that we have another/better option to access the shoots than temporary service accounts (which, incidentally, is also the way the web terminals get access)? Could you please share the link here?

@tedteng
Contributor

tedteng commented Jan 20, 2021

  • Useful: diag (merge with orphan) to run a self-diagnostic on a shoot (much like the robot does on /diag, see below), but it should work for operators (full diagnosis, as they have access to the seed clusters, which the command should silently obtain if possible) and end users (who only have access to the shoot cluster)

I was thinking: how about implementing an operations CRD somewhere and exposing functions that support API calls (e.g. HTTP/REST/GraphQL)? The shoot/seed cluster credentials would then be processed and communicated internally inside the cluster. The API would expose the seed/shoot functions used by /diag and orphan.

It would generate output for GitHub comments when called from /diag, or raw data output used by gardenctl or plugins, which then display it in the terminal.

One (target) or multiple (garden, seed, project, shoot) plugins that could/would deal with managing the kubeconfigs

  • Most useful: aliyun|aws|az|gcloud|openstack to invoke the CLIs, so we should continue to support these

I have some thoughts regarding the plugins (e.g. garden, seed, project, shoot). To deal with KUBECONFIG, maybe we can follow the same principle as in https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/, where all environment variables are also passed as-is to the executable:

export KUBECONFIG=~/.kube/config
kubectl foo config
# prints: /home/<user>/.kube/config
KUBECONFIG=/etc/kube/config kubectl foo config
# prints: /etc/kube/config

That means the plugins would not generate the KUBECONFIG file directly.

The core targeting logic would still remain in the cmd/target.go package used by gardenctl target. That means only one piece of targeting logic needs maintenance, instead of one inside each plugin individually.

To handle targeting in the plugins, e.g. kubectl shoot --seed or kubectl shoot -u url, the plugin code can import the gardenctl/cmd/target package and pass the variables along, letting it take care of the generated KUBECONFIG (in memory or local, case by case).

Which leads me to a final thought, similar to the API idea above: what about making the gardenctl/cmd/target package a CRD hosted somewhere internally accessible, which also complies with security policy (global VPN, corp network)? It would expose a function whose output is the kubeconfig, to be decoded locally or in memory, instead of downloading the garden kubeconfig to local disk on init.

The authentication and authorization would then be set up in the API, integrated with SSO or other third-party tools.

By default, the API would be used to fetch the garden, seed, and shoot kubeconfigs when targeting. The current gardenctl target method would still be available in the binary in case of issues when the API is down.

@vlerenc
Contributor Author

vlerenc commented Mar 4, 2021

Regarding the targeting topic, @danielfoehrKn suggested using https://github.com/danielfoehrKn/kubeswitch, i.e. splitting off the kubeconfig retrieval from the gardenctl commands.

@petersutter
Contributor

kubeswitch depends on having all the required kubeconfigs locally / fetching them in advance. With OIDC kubeconfigs this "may" not be that much of an issue; however, I would rather not have all the kubeconfigs locally on my system (be they in vault or not) and only fetch them on demand if I really need them.
However, I generally like the idea of splitting the kubeconfig retrieval from the other gardenctl commands like ssh etc. This means that the other tool, like kubeswitch, would need to implement things like the switching logic between control plane and shoot, cloning the kubeconfig, etc.
This means operators can still use their own tooling and the contract between gardenctl and the ops-tool is the exported KUBECONFIG, right? And gardenctl of course still needs to have the (virtual) garden kubeconfig configured.

@danielfoehrKn

kubeswitch depends on having all the required kubeconfigs locally / fetch them in advance.

That would not be required. If you like, we can set up a short sync.

@petersutter
Contributor

sounds good

@danielfoehrKn

@petersutter and I had a quick sync regarding reusing the targeting functionality and have a very rough idea of how it could look.
However, we would like to have some additional people involved - maybe we can have a meeting next week or so @vlerenc - WDYT

@neo-liang-sap
Contributor

Hi @danielfoehrKn, could you please forward the meeting request to me? The SRE team would like to be involved in this too, if appropriate :)

@tedteng
Contributor

tedteng commented Mar 5, 2021

@danielfoehrKn Please add me, I am also interested in this topic and kubeswitch as well

@danielfoehrKn

Sure, will do.

@danielfoehrKn

After thinking about it again, we decided, for now, not to invest in integrating gardenctl v2 with kubeswitch to reuse functionality.

Reason

  • kubeswitch only knows the concept of kubecontext names, not the Gardener resources Shoot / Seed / ... This makes it hard to build gardenctl features on top that rely on this notion.
  • Many of kubeswitch's advanced features are only needed for fuzzy-search capabilities over multiple landscapes (e.g. index cache, hot reload). Gardenctl does not need to offer that. The user can use kubeswitch for this, their own tooling, or non-fuzzy means via gardenctl, e.g. pasting the dashboard link.

As a result, Gardenctl needs to implement its own basic kubecontext-switching functionality. This should not be much effort and can also be copied from / inspired by kubeswitch (this is not much code actually).
This enables the following use cases described under Targeting in this issue:

  • Direct targeting
  • Hierarchical targeting
  • Domain targeting
  • NOT: fuzzy targeting (if you want that, install kubeswitch).

The contract between Gardenctl and another tool such as kubeswitch would be the current kubeconfig pointing to a Shoot/Garden/Seed cluster.
Then Gardenctl can for instance

  • provide ssh
  • switch to the control plane of a Shoot

Determining the Garden cluster for a Shoot/Seed relies on the cluster identity config map present in Shoot clusters, Shooted seeds, and the Garden cluster.

To sum it up: We think that Gardenctl and kubeswitch can be used in a complementary fashion but do not need to be integrated necessarily. We propose to narrow the scope of Gardenctl to exclude fuzzy targeting.

@danielfoehrKn

However @tedteng @neo-liang-sap you are more than welcome to approach me if you are interested in a quick introduction to kubeswitch (for fuzzy search over all landscapes, history targeting, aliasing, etc.).

@petersutter
Contributor

petersutter commented Mar 19, 2021

I just created a proposal for the ssh controller task in a separate issue which can be discussed here #508 #510

@tedteng tedteng mentioned this issue May 14, 2021
@gardener-robot gardener-robot added the lifecycle/stale Nobody worked on this for 6 months (will further age) label Sep 27, 2021
@gardener-robot gardener-robot added lifecycle/rotten Nobody worked on this for 12 months (final aging stage) and removed lifecycle/stale Nobody worked on this for 6 months (will further age) labels Mar 27, 2022