English | 简体中文
cloudtty is an easy-to-use operator to run a web terminal and cloud shell in a Kubernetes-native environment. With cloudtty you can open a terminal right in your browser. The community is always open to contributors and anyone who wants to give it a try.
Literally, cloudtty refers to a virtual console, shell, or terminal running on the web and in the cloud. You can use it anywhere with an internet connection, and it will automatically connect to the cloud.
Early user terminals connected to computers were electromechanical teleprinters or teletypewriters (TeleTYpewriter, TTY), which is the likely origin of the name. TTY has continued to be used as the name for text-only consoles, even though today's text-only console is virtual rather than physical.
The ttyd project provides features for sharing terminals over the web. But if you use Kubernetes, you need a more cloud-native way to run the web TTY: as a pod, generated by CRDs. That is exactly what cloudtty covers. You are welcome to try cloudtty 🎉.
- Many enterprises use a cloud platform to manage Kubernetes, but for security reasons you often cannot SSH to the node host to run `kubectl` commands. In this case, you need a cloud shell capability.
- A container running on Kubernetes can be "entered" (via `kubectl exec`) from a browser web page.
- Container logs can be displayed in real time (scrolling) on a browser web page.
After cloudtty is integrated into your own UI, it looks like this:
Step 1: Install the operator and CRDs

a. Install the operator using Helm:

```shell
helm repo add cloudtty https://cloudtty.github.io/cloudtty
helm repo update
helm install cloudtty-operator --version 0.5.0 cloudtty/cloudtty
```

b. Wait until the operator pod is running:

```shell
kubectl wait deployment cloudtty-operator-controller-manager --for=condition=Available=True
```
Step 2: Create a cloudtty instance by applying a CR, and monitor its status:

```shell
kubectl apply -f https://raw.githubusercontent.com/cloudtty/cloudtty/v0.5.0/config/samples/local_cluster_v1alpha1_cloudshell.yaml
```

By default, this creates a cloudtty pod and exposes it via a `NodePort` service. Alternatively, `ClusterIP`, `Ingress`, and `VirtualService` (for Istio) are all supported as `exposureMode`; refer to `config/samples/` for more examples.
Step 3: Observe the CR status to obtain its web access URL:

```shell
kubectl get cloudshell -w
```

You will see output like the following:

```
NAME                 USER   COMMAND   TYPE       URL                 PHASE   AGE
cloudshell-sample    root   bash      NodePort   192.168.4.1:30167   Ready   31s
cloudshell-sample2   root   bash      NodePort   192.168.4.1:30385   Ready   9s
```

When the status of the cloudshell changes to `Ready` and the `URL` field appears, copy and paste the URL into your browser to access the cluster with `kubectl`, as shown below:
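Before wiring the URL into a browser or your own UI, you can sanity-check that the terminal endpoint answers over HTTP. This is a minimal sketch, not part of the official workflow; substitute the `URL` column from your own CR status (the address below is just the sample output above):

```shell
# Replace with the URL column from `kubectl get cloudshell` (sample value shown).
URL="192.168.4.1:30167"

# ttyd serves its web UI over plain HTTP; a 200 response means the
# terminal page is reachable from where you run this.
curl -s -o /dev/null -w "%{http_code}\n" "http://${URL}/"
```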
Most users need more than just the basic `kubectl` tool to manage their clusters. You can customize an image based on the cloudshell base image. Here is an example of adding the `karmadactl` tool.
1. Modify `Dockerfile.example`:

   ```dockerfile
   FROM ghcr.io/cloudtty/cloudshell:v0.5.0
   RUN curl -fsSLO https://github.com/karmada-io/karmada/releases/download/v1.2.0/kubectl-karmada-linux-amd64.tgz \
       && tar -zxf kubectl-karmada-linux-amd64.tgz \
       && chmod +x kubectl-karmada \
       && mv kubectl-karmada /usr/local/bin/kubectl-karmada \
       && which kubectl-karmada
   ENTRYPOINT ttyd
   ```

2. Rebuild the image with the `karmadactl` tool:

   ```shell
   docker build -t <IMAGE> . -f docker/Dockerfile-webtty
   ```
There are two ways to set the customized cloudshell image:
- Set the image directly via the cloudshell CR field `spec.image`:

  ```yaml
  apiVersion: cloudshell.cloudtty.io/v1alpha1
  kind: CloudShell
  metadata:
    name: cloudshell-sample
  spec:
    secretRef:
      name: "my-kubeconfig"
    image: ghcr.io/cloudtty/customize_cloudshell:latest
  ```
- Set the `jobTemplate` image parameters to run the customized cloudshell image when installing cloudtty:

  ```shell
  helm install cloudtty-operator --version 0.5.0 cloudtty/cloudtty \
    --set jobTemplate.image.registry=<REGISTRY> \
    --set jobTemplate.image.repository=<REPOSITORY> \
    --set jobTemplate.image.tag=<TAG>
  ```

  If you have already installed cloudtty, you can also modify the `jobTemplate` ConfigMap to set the cloudshell image.
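As a rough sketch of editing the job template after installation: the ConfigMap name and namespace below are placeholders, not confirmed values from this document; look them up in the namespace where your operator release was installed.

```shell
# List ConfigMaps in the operator's namespace to locate the job template
# (namespace name is a placeholder for your own setup).
kubectl get configmap -n <OPERATOR_NAMESPACE>

# Edit the job template and change its cloudshell image field; newly
# created cloudshell instances will then use the customized image.
kubectl edit configmap <JOBTEMPLATE_CONFIGMAP> -n <OPERATOR_NAMESPACE>
```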
If cloudtty manages a remote cluster (a different cluster from the one the cloudtty operator runs on), you need to provide cloudtty with the kubeconfig of the remote cluster, as follows.

Copy the kubeconfig (`~/.kube/config`) from the remote cluster and store it in a secret:

```shell
kubectl create secret generic my-kubeconfig --from-file=kube.config
```

Be careful to ensure the `/root/.kube/config`:
- contains base64-encoded certs/secrets instead of references to local files.
- can reach the k8s api-server endpoint (via host IP or cluster IP) instead of localhost.
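To satisfy the first point, you can let kubectl produce a self-contained kubeconfig before creating the secret. This is a hedged sketch using standard kubectl flags: `--flatten` inlines certificate and key files as base64 data, and `--minify` keeps only the current context; you still need to verify the `server:` address yourself.

```shell
# Produce a self-contained kubeconfig: --minify keeps only the current
# context, --flatten inlines cert/key files as base64-encoded data.
kubectl config view --minify --flatten > kube.config

# Verify the api-server address is reachable from inside the cluster
# (it must not point at localhost/127.0.0.1).
grep 'server:' kube.config

# Store it in the secret that the cloudshell CR will reference.
kubectl create secret generic my-kubeconfig --from-file=kube.config
```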
- If the cluster is remote, cloudtty needs a kubeconfig to access the cluster with the `kubectl` command-line tool. Provide the kubeconfig stored in a secret and specify its name in the cloudshell CR field `spec.secretRef.name`; the kubeconfig will be automatically mounted into the cloudtty container. Ensure that the server address in it is reachable from the cluster network.
- If cloudtty runs on the same cluster it manages, you don't need to do this: a ServiceAccount with `cluster-admin` role permissions is bound to the pod automatically, and inside the container, kubectl automatically detects the CA certificate and token. If you have security concerns, you can also provide your own kubeconfig to control the permissions of different users.
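For the same-cluster case, the CR can omit `secretRef` entirely and rely on the auto-bound ServiceAccount. A minimal sketch (the metadata name and the `bash` command are illustrative, not from the samples directory):

```yaml
apiVersion: cloudshell.cloudtty.io/v1alpha1
kind: CloudShell
metadata:
  name: cloudshell-local
spec:
  # No secretRef: inside the container, kubectl falls back to the
  # in-cluster ServiceAccount token and CA certificate.
  commandAction: "bash"
```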
The cloudshell base image integrates the kubectl-node-shell plugin, so you can use its command to connect to an arbitrary node of the specified cluster. It runs a privileged pod, so if pod security matters to you, use this feature with care. See the following sample:
```yaml
apiVersion: cloudshell.cloudtty.io/v1alpha1
kind: CloudShell
metadata:
  name: cloudshell-node-shell
spec:
  secretRef:
    name: "<KUBECONFIG>"
  commandAction: "kubectl node-shell <NODE_NAME>"
```
For more samples, refer to kubectl-node-shell.

If a cluster enforces security policies such as `PodSecurity` or `PSP`, this feature may be affected.
Cloudtty provides the following four modes to expose cloudtty services for different usage scenarios:

- `ClusterIP`: Creates a Service of type ClusterIP in the cluster, which is suitable for third-party integrations of the cloudtty server; you can then choose a more flexible way to expose the service yourself.
- `NodePort` (default): The simplest mode; creates a Service of type NodePort in the cluster. You can access the cloudtty service using a node IP address and the allocated port number.
- `Ingress`: Creates a Service of type ClusterIP in the cluster plus an Ingress resource that routes to the service based on routing rules. Use this when an Ingress controller handles traffic in the cluster.
- `VirtualService` (Istio): Creates a Service of type ClusterIP in the cluster plus a `VirtualService` resource. Use this when Istio handles traffic in the cluster.
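As a sketch of selecting a mode, the `exposureMode` field goes in the CR spec (the field name comes from the text above; `Ingress` is shown as an example, and your cluster still needs a running Ingress controller for it to work):

```yaml
apiVersion: cloudshell.cloudtty.io/v1alpha1
kind: CloudShell
metadata:
  name: cloudshell-ingress-sample
spec:
  secretRef:
    name: "my-kubeconfig"
  # One of: ClusterIP, NodePort (default), Ingress, VirtualService
  exposureMode: "Ingress"
```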
`AllowSecretStoreKubeconfig`: Store the kubeconfig file in a secret resource. If this feature gate is enabled, the field `spec.configmapName` is disabled, and you can use the field `spec.secretRef.name` to define where the kubeconfig is. Currently the feature gate is in the alpha phase and disabled by default.

- If you deploy cloudtty with `yaml`, add `--feature-gates=AllowSecretStoreKubeconfig=true` to the operator's run arguments.
- If you deploy cloudtty with `helm`, set the parameter `--set image.featureGates.AllowSecretStoreKubeconfig=true`.
- The operator creates a job and a service with the same name in the proper namespace. If Ingress or VirtualService mode is used, it also creates the routing information.
- When the pod status is `Ready`, the access URL is shown in the cloudshell status.
- When a job ends after the TTL expires, or the job is terminated for some other reason, the cloudshell status changes to `Completed` once the job changes to `Completed`. You can configure cloudshell to delete associated resources when the status is `Completed`.
- When a cloudshell is deleted, the corresponding job and service are automatically deleted (through `ownerReference`). If Ingress or VirtualService mode is used, the corresponding routing information is deleted too.
1. Generate CRDs to `charts/_crds`:

   ```shell
   make generate-yaml
   ```

2. Install CRDs:

   ```shell
   make install
   ```

3. Run the operator:

   ```shell
   make run
   ```
For example, to automatically stream the logs of a container:
```yaml
apiVersion: cloudshell.cloudtty.io/v1alpha1
kind: CloudShell
metadata:
  name: cloudshell-sample
spec:
  secretRef:
    name: "my-kubeconfig"
  runAsUser: "root"
  commandAction: "kubectl -n kube-system logs -f kube-apiserver-cn-stack"
  once: false
```
This project is based on https://github.com/tsl0922/ttyd. Many thanks to tsl0922, yudai, and the community. The frontend UI code originated from the ttyd project, and the ttyd binary inside the container also comes from the ttyd project.
If you have any questions, feel free to reach out to us in the following ways:

- WeChat Group: contact calvin0327 ([email protected]) to join
- Control permissions through RBAC (to generate the `/var/run/secret` file).
- For security, jobs should run in separate namespaces, not in the same namespace as the cloudshell.
- Check that the pod is running and the endpoint status changes to `Ready` before setting the cloudshell phase to `Ready`.
- A TTL should be set on both the job and the shell.
- Job creation templates are currently hardcoded; a more flexible way to modify the job template should be provided.
More will be coming soon. Feel free to open an issue or propose a PR. 🎉🎉🎉
Made with contrib.rocks.