diff --git a/docs/CKA/Labs/01.md b/docs/CKA/Labs/01.md
new file mode 100644
index 00000000..ed675856
--- /dev/null
+++ b/docs/CKA/Labs/01.md
@@ -0,0 +1,4 @@
+# 01 - Fix a problem with kube-api
+
+You have problems with a k8s cluster.
+Connect to the control plane (**ssh k8s1_controlPlane_1**) and fix them.
diff --git a/docs/CKA/Labs/02.md b/docs/CKA/Labs/02.md
new file mode 100644
index 00000000..7487bb76
--- /dev/null
+++ b/docs/CKA/Labs/02.md
@@ -0,0 +1,13 @@
+# 02 - Create HPA based on the CPU load
+
+We have an enterprise application in namespace **prod-jobs**.
+The application fetches tasks for processing from a queue (Kafka).
+We need to create a **Horizontal Pod Autoscaler** based on **CPU** load.
+When the CPU load rises to **100%**, the number of pods should increase to **6**.
+The **minimum** number of pods should be **2**.
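+
+A minimal sketch of the imperative approach (assuming the target deployment is named `app`; check the actual name and CPU requests first):
+
+```sh
+kubectl autoscale deployment app -n prod-jobs --cpu-percent=100 --min=2 --max=6
+```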
diff --git a/docs/CKA/Labs/03.md b/docs/CKA/Labs/03.md
new file mode 100644
index 00000000..d188cec0
--- /dev/null
+++ b/docs/CKA/Labs/03.md
@@ -0,0 +1,108 @@
+# 03 - Operations with Nginx ingress. Routing by header
+
+We use the **ingress nginx** controller.
+We have an application (deployment, service, ingress) in namespace **meg**. You can check the app with `curl http://ckad.local:30102/app`.
+
+You need to create a new deployment, service, and ingress in namespace **meg**:
+
+- **deployment**: name `meg-app2`, env `SERVER_NAME` with value `megApp2`; copy all other parameters from the `meg-app` deployment
+
+- **service**: name `meg-app2` -> deployment `meg-app2`
+
+- **ingress**: name `meg-app2` -> service `meg-app2`; serves the same address (`curl http://ckad.local:30102/app`), but only for requests with the header `X-Appversion: v2`
+
+In case of any other header value or unset header, the request should go to the old service.
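+
+A sketch of the nginx ingress canary annotations that implement the header rule (the second ingress is marked as a canary of the existing one and points at the new service):
+
+```yaml
+metadata:
+  annotations:
+    nginx.ingress.kubernetes.io/canary: "true"
+    nginx.ingress.kubernetes.io/canary-by-header: "X-Appversion"
+    nginx.ingress.kubernetes.io/canary-by-header-value: "v2"
+```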
+
+Check `curl -H "X-Appversion: v2" http://ckad.local:30102/app`:
+
+```text
+Server Name: megApp2
+URL: http://ckad.local:30102/
+Client IP: 10.0.158.196
+Method: GET
+Protocol: HTTP/1.1
+Headers:
+X-Forwarded-For: 10.2.25.129
+X-Forwarded-Proto: http
+X-Scheme: http
+X-Request-Id: 78fa94a9ffcab9268864bf006fa67cfa
+X-Real-Ip: 10.2.25.129
+X-Forwarded-Host: ckad.local:30102
+X-Forwarded-Port: 80
+X-Forwarded-Scheme: http
+User-Agent: curl/7.68.0
+Accept: */*
+X-Appversion: v2
+```
+
+Check `curl -H "X-Appversion: v3" http://ckad.local:30102/app`:
+
+```text
+Server Name: megApp
+URL: http://ckad.local:30102/
+Client IP: 10.0.158.196
+Method: GET
+Protocol: HTTP/1.1
+Headers:
+X-Forwarded-For: 10.2.25.129
+X-Request-Id: 139e579f92671116b1e12570e564f569
+X-Real-Ip: 10.2.25.129
+X-Forwarded-Scheme: http
+X-Scheme: http
+User-Agent: curl/7.68.0
+Accept: */*
+X-Appversion: v3
+X-Forwarded-Host: ckad.local:30102
+X-Forwarded-Port: 80
+X-Forwarded-Proto: http
+```
+
+Check `curl http://ckad.local:30102/app`:
+
+```text
+Server Name: megApp
+URL: http://ckad.local:30102/
+Client IP: 10.0.158.196
+Method: GET
+Protocol: HTTP/1.1
+Headers:
+X-Request-Id: 119892e6edf24b4b794a025a9fb5c87e
+X-Real-Ip: 10.2.25.129
+X-Forwarded-For: 10.2.25.129
+X-Forwarded-Host: ckad.local:30102
+X-Forwarded-Port: 80
+X-Scheme: http
+Accept: */*
+X-Forwarded-Proto: http
+X-Forwarded-Scheme: http
+User-Agent: curl/7.68.0
+```
+
+Check the result via tests:
+
+```sh
+check_result
+```
+
+```sh
+ubuntu@worker:~> check_result
+ ✓ 0 Init
+ ✓ 1.1 Check routing by header X-Appversion = v2
+ ✓ 1.2 Check routing by header X-Appversion = v3
+ ✓ 1.3 Check routing without header X-Appversion
+
+4 tests, 0 failures
+ result = 100.00 % ok_points=3 all_points=3
+time_left=327 minutes
+you spend 32 minutes
+```
diff --git a/docs/CKA/Labs/04.md b/docs/CKA/Labs/04.md
new file mode 100644
index 00000000..ab3ecc28
--- /dev/null
+++ b/docs/CKA/Labs/04.md
@@ -0,0 +1,46 @@
+# 04 - Nginx ingress. Canary deployment
+
+## Routing 30% of requests to the new version of the app
+
+We use the **ingress nginx** controller.
+We have an application (deployment, service, ingress) in namespace **meg**. You can check the app with `curl http://ckad.local:30102/app`.
+
+You need to create a new deployment, service, and ingress in namespace **meg**:
+
+- **deployment**: name `meg-app2`, env `SERVER_NAME` with value `megApp2`; copy all other parameters from the `meg-app` deployment
+
+- **service**: name `meg-app2` -> deployment `meg-app2`
+
+- **ingress**: name `meg-app2` -> service `meg-app2`; route 30% of requests to the new version of the app
+
+The remaining requests should go to the old service.
+
+Run `curl http://ckad.local:30102/app` ten times to test it.
+About **30%** of the responses should come from **megApp2**.
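+
+A sketch of the nginx ingress canary annotations for weight-based routing:
+
+```yaml
+metadata:
+  annotations:
+    nginx.ingress.kubernetes.io/canary: "true"
+    nginx.ingress.kubernetes.io/canary-weight: "30"
+```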
+
+Check the result via tests:
+
+```sh
+check_result
+```
+
+```sh
+ubuntu@worker:~> check_result
+ ✓ 0 Init
+ ✓ 1.1 Check routing to version 2
+ ✓ 1.2 Check ingress v2 canary-weight
+
+3 tests, 0 failures
+ result = 100.00 % ok_points=2 all_points=2
+time_left=333 minutes
+you spend 26 minutes
+```
diff --git a/docs/CKA/Labs/05.md b/docs/CKA/Labs/05.md
new file mode 100644
index 00000000..6b3bee74
--- /dev/null
+++ b/docs/CKA/Labs/05.md
@@ -0,0 +1,20 @@
+# 05 - PriorityClass
+
+You have a DaemonSet with a specialized monitoring system.
+The problem is that its pods are preempted by other pods, and you lose monitoring data.
+
+You must create a PriorityClass and apply it to the monitoring-system DaemonSet in the `monitoring` NS.
+
+| **1** | Create a PriorityClass and apply it to the DaemonSet |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - PriorityClass name `monitoring` with value `1000000000` - DaemonSet `monitoring-system` in `monitoring` NS has PriorityClass `monitoring` - all pods in `monitoring` NS have status `Running` |
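+
+A sketch of one possible approach (the `kubectl patch` below is an alternative to editing the DaemonSet manifest by hand):
+
+```sh
+kubectl create priorityclass monitoring --value 1000000000
+kubectl -n monitoring patch daemonset monitoring-system --type merge \
+  -p '{"spec":{"template":{"spec":{"priorityClassName":"monitoring"}}}}'
+```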
diff --git a/docs/CKA/Labs/Solutions/01.md b/docs/CKA/Labs/Solutions/01.md
new file mode 100644
index 00000000..25e781db
--- /dev/null
+++ b/docs/CKA/Labs/Solutions/01.md
@@ -0,0 +1,195 @@
+# 01
+
+```sh
+ssh k8s1_controlPlane_1
+```
+
+```sh
+sudo su
+```
+
+```sh
+service kubelet status
+
+● kubelet.service - kubelet: The Kubernetes Node Agent
+ Loaded: loaded (/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
+ Drop-In: /usr/lib/systemd/system/kubelet.service.d
+ └─10-kubeadm.conf
+ Active: inactive (dead)
+ Docs: https://kubernetes.io/docs/
+
+Feb 03 18:20:40 ip-10-2-4-149 kubelet[5330]: W0203 18:20:40.973916 5330 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "htt>
+Feb 03 18:20:40 ip-10-2-4-149 kubelet[5330]: E0203 18:20:40.973966 5330 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed >
+Feb 03 18:20:41 ip-10-2-4-149 kubelet[5330]: E0203 18:20:41.373569 5330 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info>
+Feb 03 18:20:41 ip-10-2-4-149 kubelet[5330]: W0203 18:20:41.466839 5330 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: >
+Feb 03 18:20:41 ip-10-2-4-149 kubelet[5330]: E0203 18:20:41.466881 5330 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass:>
+Feb 03 18:20:41 ip-10-2-4-149 kubelet[5330]: E0203 18:20:41.613869 5330 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.2.4.149:6443/>
+Feb 03 18:20:42 ip-10-2-4-149 kubelet[5330]: I0203 18:20:42.243248 5330 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki>
+Feb 03 18:20:42 ip-10-2-4-149 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
+Feb 03 18:20:42 ip-10-2-4-149 systemd[1]: kubelet.service: Succeeded.
+Feb 03 18:20:42 ip-10-2-4-149 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
+```
+
+The status is **inactive (dead)**.
+
+Start the kubelet:
+
+```sh
+systemctl enable kubelet
+systemctl start kubelet
+systemctl status kubelet
+```
+
+```
+● kubelet.service - kubelet: The Kubernetes Node Agent
+ Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
+ Drop-In: /usr/lib/systemd/system/kubelet.service.d
+ └─10-kubeadm.conf
+ Active: active (running) since Sat 2024-02-03 18:41:35 UTC; 1s ago
+ Docs: https://kubernetes.io/docs/
+ Main PID: 5761 (kubelet)
+ Tasks: 11 (limit: 4597)
+ Memory: 25.3M
+ CGroup: /system.slice/kubelet.service
+ └─5761 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var
+```
+
+The status is now **active (running)**.
+Check the connection to the cluster:
+
+```sh
+# k get ns
+
+E0203 18:48:25.894982 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.895314 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.896459 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.896778 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.898099 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+The connection to the server 10.2.4.149:6443 was refused - did you specify the right host or port?
+```
+
+The kube-api does not respond.
+
+Check the kube-api container:
+
+```sh
+# crictl ps -a | grep api
+```
+
+kube-api is a static pod that the kubelet starts from the manifest **/etc/kubernetes/manifests/kube-apiserver.yaml**.
+
+Check the kubelet logs:
+
+```sh
+journalctl -u kubelet | grep 'kube-apiserver'
+
+Feb 03 18:20:31 ip-10-2-4-149 kubelet[4732]: E0203 18:20:31.063086 4732 file.go:108] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver.yaml\": /etc/kubernetes/manifests/kube-apiserver.yaml: couldn't parse as pod(no kind \"PoD\" is registered for version \"v1\" in scheme \"pkg/api/legacyscheme/scheme.go:30\"), please check config file"
+```
+
+The error is `no kind "PoD" is registered`: the manifest has a typo in the `kind` field.
+
+Replace **PoD** with **Pod** in `/etc/kubernetes/manifests/kube-apiserver.yaml` and restart the kubelet.
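+
+A non-interactive way to make the same edit (a sketch, assuming the typo is in the `kind` field):
+
+```sh
+sudo sed -i 's/kind: PoD/kind: Pod/' /etc/kubernetes/manifests/kube-apiserver.yaml
+```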
+
+```sh
+service kubelet restart
+```
+
+Check the connection to the cluster:
+
+```sh
+# k get ns
+
+E0203 18:48:25.894982 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.895314 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.896459 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.896778 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+E0203 18:48:25.898099 5900 memcache.go:265] couldn't get current server API group list: Get "https://10.2.4.149:6443/api?timeout=32s": dial tcp 10.2.4.149:6443: connect: connection refused
+The connection to the server 10.2.4.149:6443 was refused - did you specify the right host or port?
+```
+
+Check the kubelet logs again:
+
+```sh
+journalctl -u kubelet | grep 'kube-apiserver'
+```
+
+```log
+Feb 03 19:11:55 ip-10-2-4-149 kubelet[6661]: E0203 19:11:55.449778 6661 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-2-4-149_kube-system(b1982a51593e867c8f49a556991190ef)\"" pod="kube-system/kube-apiserver-ip-10-2-4-149" podUID="b1982a51593e867c8f49a556991190ef"
+```
+
+It still does not work.
+
+Check the container logs:
+
+```sh
+# clear old container logs so the fresh crash log is easy to find
+rm -rf /var/log/containers/*
+service kubelet restart
+ls /var/log/containers/
+```
+
+```sh
+kube-apiserver-ip-10-2-4-149_kube-system_kube-apiserver-a686706a02ecb891bd5f38eb467a231eb3ec82fc3043fca9ae292a8f4248d09a.log
+```
+
+```sh
+# cat /var/log/containers/kube-apiserver-ip-10-2-4-149_kube-system_kube-apiserver-80029fa2ce0099c1537c155f9f9e05ad9f95bfd7b98a10fb9ab1f7afe0ad3a91.log
+
+2024-02-03T19:15:16.153152153Z stderr F Error: unknown flag: --new-option2
+```
+
+The error is an unknown flag: **--new-option2**.
+
+```sh
+vim /etc/kubernetes/manifests/kube-apiserver.yaml
+
+# delete the line with `--new-option2`
+# then restart the kubelet
+```
+
+```sh
+service kubelet restart
+```
+
+Check the connection to kube-api:
+
+```sh
+k get ns
+```
+
+```txt
+NAME STATUS AGE
+default Active 60m
+kube-node-lease Active 60m
+kube-public Active 60m
+kube-system Active 60m
+```
+
+Exit to the work PC:
+
+```sh
+exit
+exit
+```
+
+Check the connection from the work PC:
+
+```sh
+k get ns
+```
+
+```txt
+NAME STATUS AGE
+default Active 60m
+kube-node-lease Active 60m
+kube-public Active 60m
+kube-system Active 60m
+```
+
+Done.
diff --git a/docs/CKA/Labs/Solutions/02.md b/docs/CKA/Labs/Solutions/02.md
new file mode 100644
index 00000000..50d55bda
--- /dev/null
+++ b/docs/CKA/Labs/Solutions/02.md
@@ -0,0 +1,152 @@
+# 02
+
+[documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)
+
+[example](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/)
+
+### Steps
+* Check the CPU load (find min and max usage)
+* Add CPU requests / limits
+* Create the HPA
+* Check the result
+
+
+
+```sh
+watch -n 1 'kubectl top po -n prod-jobs ; kubectl get po -n prod-jobs'
+```
+
+```
+# max usage (usage time)
+
+NAME CPU(cores) MEMORY(bytes)
+app-6f6846bc44-8hfm6 267m 1Mi
+NAME READY STATUS RESTARTS AGE
+app-6f6846bc44-8hfm6 1/1 Running 0 20m
+
+
+```
+
+```
+# min usage (idle time)
+
+NAME CPU(cores) MEMORY(bytes)
+app-6f6846bc44-8hfm6 15m 1Mi
+NAME READY STATUS RESTARTS AGE
+app-6f6846bc44-8hfm6 1/1 Running 0 21m
+
+```
+
+Peak vs idle: 267 / 15 × 100 ≈ 1780 % increase.
+
+```yaml
+# update the deployment: add resources.limits.cpu and resources.requests.cpu
+# k edit deployment app -n prod-jobs
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ annotations:
+ deployment.kubernetes.io/revision: "2"
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"app"},"name":"app","namespace":"prod-jobs"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"app"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"app"}},"spec":{"containers":[{"env":[{"name":"ENABLE_LOAD_CPU","value":"true"},{"name":"CPU_MAXPROC","value":"1"},{"name":"CPU_USAGE_PROFILE","value":"1=800=1=60 1=30=1=60"}],"image":"viktoruj/ping_pong","name":"ping-pong-cp6bg","resources":{}}]}}},"status":{}}
+ creationTimestamp: "2024-02-02T04:38:24Z"
+ generation: 2
+ labels:
+ app: app
+ name: app
+ namespace: prod-jobs
+ resourceVersion: "3100"
+ uid: c4f19a91-8549-4424-83fd-814d79291d3e
+spec:
+ progressDeadlineSeconds: 600
+ replicas: 1
+ revisionHistoryLimit: 10
+ selector:
+ matchLabels:
+ app: app
+ strategy:
+ rollingUpdate:
+ maxSurge: 25%
+ maxUnavailable: 25%
+ type: RollingUpdate
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: app
+ spec:
+ containers:
+ - env:
+ - name: ENABLE_LOAD_CPU
+ value: "true"
+ - name: CPU_MAXPROC
+ value: "1"
+ - name: CPU_USAGE_PROFILE
+ value: 1=800=1=120 1=30=1=30
+ image: viktoruj/ping_pong
+ imagePullPolicy: Always
+ name: ping-pong-cp6bg
+ resources: # add
+ limits: # add
+ cpu: 400m # add
+ requests: # add
+ cpu: 20m # add
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ schedulerName: default-scheduler
+ securityContext: {}
+ terminationGracePeriodSeconds: 30
+```
+
+```sh
+k autoscale deployment app --cpu-percent=500 --min=2 --max=6 -n prod-jobs
+```
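+
+With `requests.cpu: 20m`, the 500% utilization target corresponds to 100m per pod. The declarative equivalent (a sketch of an `autoscaling/v2` manifest):
+
+```yaml
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: app
+  namespace: prod-jobs
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: app
+  minReplicas: 2
+  maxReplicas: 6
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 500
+```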
+
+```
+watch -n 1 'kubectl top po -n prod-jobs ; kubectl get po -n prod-jobs ; kubectl get hpa -n prod-jobs '
+```
+
+```text
+Every 1.0s: kubectl top po -n prod-jobs ; kubectl get po -n prod-jobs ; kubectl get hpa -n prod-jobs worker: Fri Feb 2 06:00:06 2024
+
+NAME CPU(cores) MEMORY(bytes)
+app-569b78dcb4-4cs6z 262m 1Mi
+app-569b78dcb4-6zktc 210m 1Mi
+app-569b78dcb4-zsf9z 14m 1Mi
+NAME READY STATUS RESTARTS AGE
+app-569b78dcb4-4cs6z 1/1 Running 0 40m
+app-569b78dcb4-6zktc 1/1 Running 0 2m40s
+app-569b78dcb4-cnmcj 1/1 Running 0 10s
+app-569b78dcb4-f5rjq 1/1 Running 0 10s
+app-569b78dcb4-rvlrn 1/1 Running 0 10s
+app-569b78dcb4-zsf9z 1/1 Running 0 6m32s
+NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
+app Deployment/app 873%/500% 2 6 3 3m25s
+```
diff --git a/docs/CKA/Labs/Solutions/03.md b/docs/CKA/Labs/Solutions/03.md
new file mode 100644
index 00000000..2914eed5
--- /dev/null
+++ b/docs/CKA/Labs/Solutions/03.md
@@ -0,0 +1,98 @@
+# 03
+
+[video](https://youtu.be/1-qA7RjSx4A)
+
+Reference: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#canary
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ annotations:
+ deployment.kubernetes.io/revision: "1"
+ labels:
+ app: meg-app2
+ name: meg-app2
+ namespace: meg
+spec:
+ progressDeadlineSeconds: 600
+ replicas: 1
+ revisionHistoryLimit: 10
+ selector:
+ matchLabels:
+ app: meg-app2
+ strategy:
+ rollingUpdate:
+ maxSurge: 25%
+ maxUnavailable: 25%
+ type: RollingUpdate
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: meg-app2
+ spec:
+ containers:
+ - env:
+ - name: SERVER_NAME
+ value: megApp2
+ - name: SRV_PORT
+ value: "80"
+ image: viktoruj/ping_pong:alpine
+ imagePullPolicy: IfNotPresent
+ name: app
+ ports:
+ - containerPort: 80
+ protocol: TCP
+ resources: {}
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ schedulerName: default-scheduler
+```
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ labels:
+ app: meg-service2
+ name: meg-service2
+ namespace: meg
+spec:
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: meg-app2
+ type: ClusterIP
+```
+
+```yaml
+# 3_ing.yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: meg-app2
+ namespace: meg
+ annotations:
+ nginx.ingress.kubernetes.io/canary: "true"
+ nginx.ingress.kubernetes.io/canary-by-header: "X-Appversion"
+ nginx.ingress.kubernetes.io/canary-by-header-value: "v2"
+ nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+ ingressClassName: nginx
+ rules:
+ - http:
+ paths:
+ - path: /app
+ pathType: Prefix
+ backend:
+ service:
+ name: meg-service2
+ port:
+ number: 80
+```
diff --git a/docs/CKA/Labs/Solutions/04.md b/docs/CKA/Labs/Solutions/04.md
new file mode 100644
index 00000000..1e75fefc
--- /dev/null
+++ b/docs/CKA/Labs/Solutions/04.md
@@ -0,0 +1,99 @@
+# 04
+
+[video solution](https://youtu.be/IC_0FeQtgwA)
+
+Reference:
+
+https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#canary
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ annotations:
+ deployment.kubernetes.io/revision: "1"
+ labels:
+ app: meg-app2
+ name: meg-app2
+ namespace: meg
+spec:
+ progressDeadlineSeconds: 600
+ replicas: 1
+ revisionHistoryLimit: 10
+ selector:
+ matchLabels:
+ app: meg-app2
+ strategy:
+ rollingUpdate:
+ maxSurge: 25%
+ maxUnavailable: 25%
+ type: RollingUpdate
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: meg-app2
+ spec:
+ containers:
+ - env:
+ - name: SERVER_NAME
+ value: megApp2
+ - name: SRV_PORT
+ value: "80"
+ image: viktoruj/ping_pong:alpine
+ imagePullPolicy: IfNotPresent
+ name: app
+ ports:
+ - containerPort: 80
+ protocol: TCP
+ resources: {}
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ schedulerName: default-scheduler
+```
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ labels:
+ app: meg-service2
+ name: meg-service2
+ namespace: meg
+spec:
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: meg-app2
+ type: ClusterIP
+```
+
+```yaml
+# 3_ing.yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: meg-app2
+ namespace: meg
+ annotations:
+ nginx.ingress.kubernetes.io/canary: "true"
+ nginx.ingress.kubernetes.io/canary-weight: "30"
+ nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+ ingressClassName: nginx
+ rules:
+ - http:
+ paths:
+ - path: /app
+ pathType: Prefix
+ backend:
+ service:
+ name: meg-service2
+ port:
+ number: 80
+```
diff --git a/docs/CKA/Labs/Solutions/05.md b/docs/CKA/Labs/Solutions/05.md
new file mode 100644
index 00000000..b5e39143
--- /dev/null
+++ b/docs/CKA/Labs/Solutions/05.md
@@ -0,0 +1,184 @@
+# 05
+
+[video](https://youtu.be/7MhXfbiMfOM)
+
+https://kubernetes.io/blog/2023/01/12/protect-mission-critical-pods-priorityclass/
+
+Check the problem
+
+```sh
+# k get po -n monitoring
+
+NAME READY STATUS RESTARTS AGE
+monitoring-system-ggwrg 0/1 Pending 0 3m35s
+monitoring-system-v7nb2 1/1 Running 0 3m36s
+monitoring-system-xcpq9 1/1 Running 0 3m41s
+
+```
+
+```sh
+# k get po -n monitoring -o wide
+
+ubuntu@worker:~> k get po -n monitoring -o wide
+NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
+monitoring-system-d74xv 1/1 Running 0 4m39s 10.0.74.1 ip-10-2-3-254
+monitoring-system-sxkbj 0/1 Pending 0 4m40s
+monitoring-system-td6wp 1/1 Running 0 4m49s 10.0.194.67 ip-10-2-0-200
+```
+
+```sh
+$ k describe po monitoring-system-sxkbj -n monitoring
+
+Name: monitoring-system-sxkbj
+Namespace: monitoring
+Priority: 0
+Service Account: default
+Node:
+Labels: app=monitoring-system
+ controller-revision-hash=7b7f5d5689
+ pod-template-generation=1
+Annotations:
+Status: Pending
+IP:
+IPs:
+Controlled By: DaemonSet/monitoring-system
+Containers:
+ app:
+ Image: viktoruj/ping_pong
+ Port:
+ Host Port:
+ Limits:
+ memory: 2500Mi
+ Requests:
+ memory: 2500Mi
+ Environment:
+ ENABLE_LOAD_MEMORY: true
+ MEMORY_USAGE_PROFILE: 500=60 1024=30 2048=30
+ ENABLE_LOG_LOAD_MEMORY: true
+ Mounts:
+ /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8g4r8 (ro)
+Conditions:
+ Type Status
+ PodScheduled False
+Volumes:
+ kube-api-access-8g4r8:
+ Type: Projected (a volume that contains injected data from multiple sources)
+ TokenExpirationSeconds: 3607
+ ConfigMapName: kube-root-ca.crt
+ ConfigMapOptional:
+ DownwardAPI: true
+QoS Class: Burstable
+Node-Selectors:
+Tolerations: node-role.kubernetes.io/control-plane:NoSchedule
+ node.kubernetes.io/disk-pressure:NoSchedule op=Exists
+ node.kubernetes.io/memory-pressure:NoSchedule op=Exists
+ node.kubernetes.io/not-ready:NoExecute op=Exists
+ node.kubernetes.io/pid-pressure:NoSchedule op=Exists
+ node.kubernetes.io/unreachable:NoExecute op=Exists
+ node.kubernetes.io/unschedulable:NoSchedule op=Exists
+Events:
+ Type Reason Age From Message
+ ---- ------ ---- ---- -------
+ Warning FailedScheduling 2m2s (x3 over 7m58s) default-scheduler 0/3 nodes are available: 1 Insufficient memory, 2 node is filtered out by the prefilter result. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
+
+```
+
+One of the pods can't start because there is not enough memory on the node.
+
+Check the existing PriorityClasses:
+
+```sh
+# k get priorityclasses.scheduling.k8s.io
+
+NAME VALUE GLOBAL-DEFAULT AGE
+system-cluster-critical 2000000000 false 9m30s
+system-node-critical 2000001000 false 9m30s
+```
+
+Create the `monitoring` PriorityClass:
+
+```sh
+k create priorityclass monitoring --value 1000000000
+```
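+
+The declarative equivalent (a sketch):
+
+```yaml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+  name: monitoring
+value: 1000000000
+```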
+
+Edit the monitoring-system DaemonSet:
+
+```yaml
+# k edit DaemonSet monitoring-system -n monitoring
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ creationTimestamp: null
+ labels:
+ app: monitoring-system
+ name: monitoring-system
+ namespace: monitoring
+spec:
+ selector:
+ matchLabels:
+ app: monitoring-system
+ template:
+ metadata:
+ labels:
+ app: monitoring-system
+ spec:
+ tolerations:
+ - key: node-role.kubernetes.io/control-plane
+ effect: "NoSchedule"
+ priorityClassName: monitoring # add it
+ containers:
+ - image: viktoruj/ping_pong
+ name: app
+ resources:
+ limits:
+ memory: 2500Mi
+ requests:
+ memory: 2500Mi
+
+ env:
+ - name: ENABLE_LOAD_MEMORY
+ value: "true"
+ - name: ENABLE_LOG_LOAD_MEMORY
+ value: "true"
+ - name: MEMORY_USAGE_PROFILE
+ value: "500=60 1024=60 2048=60"
+```
+
+Check the result:
+
+```sh
+$ k get po -n monitoring
+
+NAME READY STATUS RESTARTS AGE
+monitoring-system-2ss5z 1/1 Running 0 3s
+monitoring-system-kdf26 1/1 Running 0 7s
+monitoring-system-lc845 1/1 Running 0 6s
+```
+
+Now all monitoring-system pods have status `Running`.
+To check your result, run `check_result`.
+
+```sh
+# check_result
+
+ubuntu@worker:~> check_result
+ ✓ 0 Init
+ ✓ 1.1 PriorityClass
+ ✓ 1.2 DaemonSet PriorityClass
+ ✓ 1.3 monitoring-system pods ready
+
+4 tests, 0 failures
+ result = 100.00 % ok_points=3 all_points=3
+time_left=306 minutes
+you spend 53 minutes
+```
diff --git a/docs/CKA/Mock exams/01.md b/docs/CKA/Mock exams/01.md
new file mode 100644
index 00000000..3e86d912
--- /dev/null
+++ b/docs/CKA/Mock exams/01.md
@@ -0,0 +1,248 @@
+# 01 - Tasks
+
+## Allowed resources
+
+### **Kubernetes Documentation**
+
+https://kubernetes.io/docs/ and their subdomains
+
+https://kubernetes.io/blog/ and their subdomains
+
+This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)
+
+![preview](../../../static/img/cka-01-preview.png)
+
+- run `time_left` on the work PC to **check time**
+- run `check_result` on the work PC to **check the result**
+
+## Questions
+
+### 01
+
+| **1** | **Deploy a pod named `nginx-pod` using the `nginx:alpine` image** |
+| :-----------------: | :---------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `nginx-pod` - Image: `nginx:alpine` |
+
+---
+
+### 02
+
+| **2** | **Deploy a messaging pod using the redis:alpine image with the labels set to tier=msg** |
+| :-----------------: | :-------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod Name: `messaging` - Image: `redis:alpine` - Labels: `tier=msg` |
+---
+
+### 03
+
+| **3** | **Create a namespace named apx-x9984574** |
+| :-----------------: | :-------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Namespace: `apx-x9984574` |
+
+---
+
+### 04
+
+| **4** | **Get the list of nodes in JSON format and store it in a file at `/var/work/tests/artifacts/4/nodes.json`** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - list of nodes `/var/work/tests/artifacts/4/nodes.json` |
+
+---
+
+### 05
+
+| **5** | **Create a service messaging-service to expose the messaging application within the cluster on port 6379** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Use imperative commands. - Service: `messaging-service` - Port: `6379` - Type: `ClusterIP` - Use the right labels |
+
+---
+
+### 06
+
+| **6** | **Create a deployment named `hr-web-app` using the image nginx:alpine with 2 replicas** |
+| :-----------------: | :-------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `hr-web-app` - Image: `nginx:alpine` - Replicas: `2` |
+
+---
+
+### 07
+
+| **7** | **Create a static pod named static-busybox with label pod-type=static-pod on the controlplane node that uses the busybox image and the command sleep 60000.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `static-busybox` - Image: `busybox` - label: `pod-type=static-pod` - command: sleep 60000 |
+
+---
+
+### 08
+
+| **8** | **Create a POD in the finance namespace named temp-bus with the image redis:alpine.** |
+| :-----------------: | :------------------------------------------------------------------------------------ |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Namespace: `finance` - Name: `temp-bus` - Image: `redis:alpine` |
+---
+
+### 09
+
+| **9** | **Use a JSONPath query to retrieve the osImage of all nodes and store it in the file `/var/work/tests/artifacts/9/os.json`, one node per line.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | file `/var/work/tests/artifacts/9/os.json` |
+---
+
+### 10
+
+| **10** | **Create a pod called multi-pod with two containers** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 5% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod Name: `multi-pod` - Container 1, name: *alpha*, image: *nginx* , variable *name=alpha* - Container 2: name: *beta*, image: *busybox*, command: *sleep 4800*, variable *name=beta* |
+---
+
+### 11
+
+| **11** | **Expose the hr-web-app as service** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - The web application listens on port 80 - Name: `hr-web-app-service` - Type: `NodePort` - Endpoints: `2` - Port: `80` - NodePort: `30082` |
+---
+
+### 12
+
+| **12** | **Create a Persistent Volume with the given specification. Run pod with pv.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Volume name: `pv-analytics` - pvc name: `pvc-analytics` - Storage: `100Mi` - Access mode: `ReadWriteOnce` - Host path: `/pv/analytics` - pod name: `analytics` - image: `busybox` - node: `nodeSelector` - node_name: `node_2` - command: `"sleep 60000"` - mountPath: `/pv/analytics` |
+---
+
+### 13
+
+| **13** | **Take a backup of the etcd cluster and save it to /var/work/tests/artifacts/13/etcd-backup.db** |
+| :-----------------: | :----------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - etcd backup on control-plane node `/var/work/tests/artifacts/13/etcd-backup.db` |
+---
+
+### 14
+
+| **14** | **Create a Pod called redis-storage with image: redis:alpine with a Volume of type emptyDir that lasts for the life of the Pod** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod name: `redis-storage` - container name: `redis-storage` - image: `redis:alpine` - volumes.name: `data` - volumes.type: `emptyDir` - volumes.sizeLimit: `500Mi` - volumeMounts.mountPath: `/data/redis` - volumeMounts.name: `data` |
+---
+
+### 15
+
+| **15** | **Create a new pod called super-user-pod with image busybox:1.28. Allow the pod to be able to set system_time.** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod name: `super-user-pod` - container name: `super-user-pod` - Container Image: `busybox:1.28` - command: sleep for 4800 seconds. - capability: `SYS_TIME` |
+---
+
+### 16
+
+| **16** | **Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update.** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Deployment: `nginx-deploy` - Image: `nginx:1.16` - Task: upgrade the deployment to version 1.17 with image `nginx:1.17` - Task: record the changes for the image upgrade |
+---
+
+### 17
+
+| **17** | **Create a new user called john. Grant him access to the cluster. John should have permission to create, list and get pods in the development namespace.** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - create ns `development` - create private key and csr - CSR: `john-developer` with Status:Approved - Role Name: `developer`, namespace: `development`, Resource: `pods` , verbs: `create,list,get` - rolebinding: name=`developer-role-binding` , role=`developer`, user=`john` , namespace=`development` - Access: User 'john' has appropriate permissions |
+---
+
+### 18
+
+| **18** | **Create a new service account with the name pvviewer. Grant this Service account access to list all PersistentVolumes in the cluster by creating an appropriate cluster role called pvviewer-role and ClusterRoleBinding called pvviewer-role-binding. Next, create a pod called pvviewer with the image: redis and serviceAccount: pvviewer in the default namespace.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 5% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - ServiceAccount: `pvviewer` - ClusterRole: `pvviewer-role`, resources - `persistentvolumes`, verbs - `list,get` - clusterrolebinding: `pvviewer-role-binding` - Pod: `pvviewer` - image: `viktoruj/cks-lab:latest` - command: `sleep 60000` |
+---
+
+### 19
+
+| **19** | **Create a Pod called non-root-pod, image: redis:alpine** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - pod name: `non-root-pod` - image: `redis:alpine` - runAsUser: `1000` - fsGroup: `2000` |
+---
+
+### 20
+
+| **20** | **Create secret, configmap. Create a pod with mount secret and configmap.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - namespace: prod-apps - secret: name=prod-secret, ns=prod-apps, variables var1=aaa, var2=bbb - configmap: configmap_name=prod-config,ns=prod-apps,file_name_for_configmap=config.yaml, file_content= "test config" - pod: name=prod-app, ns=prod-apps, - container1: name=app1, image=viktoruj/cks-lab:latest , command="sleep 60000", volume_name=config, volume_type=configmap, mount_path="/app/configs", ENV=from secret "prod-secret" - container2: name=app2, image=viktoruj/cks-lab:latest , command="sleep 60000", volume_name=secret, volume_type=secret, mount_path="/app/secrets" |
+---
+
+### 21
+
+| **21** | **Resolve dns svc and pod. Create a nginx pod called nginx-resolver using image nginx, expose it internally with a service called nginx-resolver-service.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod: `nginx-resolver` - image: `nginx` - Service: `nginx-resolver-service` - lookup pod name : `test-nslookup` - lookup pod image : `busybox:1.28` - service file: `/var/work/tests/artifacts/21/nginx.svc` - pod file: `/var/work/tests/artifacts/21/nginx.pod` |
+---
+
+### 22
+
+| **22** | **Update Kubernetes cluster.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 7% |
+| Cluster | cluster2 (`kubectl config use-context cluster2-admin@cluster2`) |
+| Acceptance criteria | - The cluster is running Kubernetes 1.29.0; update it to 1.29.1. - Use the apt package manager and kubeadm for this. - Use ssh to connect to the instances. |
+---
+
+### 23
+
+| **23** | **Network policy.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - create a default deny ingress policy in the `prod-db` NS - create a policy that allows connections from the `prod` namespace to `prod-db` - create a policy that allows connections from the `stage` namespace for pods with label `role=db-connect` - create a policy that allows connections from `any` namespace for pods with label `role=db-external-connect` |
+---
+
+### 24
+
+| **24** | **Create DaemonSet to run pods on all nodes (control-plane too)** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - namespace: `app-system` - ds: `name=important-app`, image=`nginx` - run on all nodes (control-plane too) |
+---
+
+### 25
+
+| **25** | **Create deployment and spread the pods on all nodes(control-plane too). Add PodDisruptionBudget** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - namespace: `app2-system` - deployment: `name=important-app2` , image=`nginx`, replicas=`3` - PodAntiAffinity: `nodename` - PodDisruptionBudget: `name=important-app2` min available pod = 1 |
diff --git a/docs/CKA/Mock exams/02.md b/docs/CKA/Mock exams/02.md
new file mode 100644
index 00000000..91e1a40e
--- /dev/null
+++ b/docs/CKA/Mock exams/02.md
@@ -0,0 +1,207 @@
+# 02 - Tasks
+
+## Allowed resources
+
+### **Kubernetes Documentation**
+
+https://kubernetes.io/docs/ and their subdomains
+
+https://kubernetes.io/blog/ and their subdomains
+
+This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)
+
+![preview](../../../static/img/cka-02-preview.png)
+
+- run `time_left` on the work PC to **check time**
+- run `check_result` on the work PC to **check the result**
+
+## Questions
+
+### 01
+
+| **1** | Find the pod in the `dev-1` namespace with label `team=finance` and the maximum memory usage. Add the label `usage=max` to it. |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - pod from `dev-1` NS - with label `team=finance` - has max memory usage - has label `usage=max` |
+---
+
+### 02
+
+| **2** | Deploy a `util` pod using the `busybox:1.36` image in the `dev` namespace. Use the `sleep 3600` command to keep it running. |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod Name: `util` - Namespace: `dev` - Image: `busybox:1.36` - Commands: `sleep 3600` |
+---
+
+### 03
+
+| **3** | Create a `namespace` named `team-elephant` |
+| :-----------------: | :------------------------------------------------------------------------------ |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Namespace `team-elephant` is present in the namespaces list. |
+---
+
+### 04
+
+| **4** | Create pod `alpine` with image `alpine:3.15` and command `sleep 6000`. Make sure the pod is running on the node with label `disk=ssd`. |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod is running on the node with label disk=ssd. |
+---
+
+### 05
+
+| **5** | Create deployment `web-app` with image `viktoruj/ping_pong:latest` and `2` replicas. Container port should be configured on port `8080` and named `http-web`. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Deployment name: `web-app` - Image: `viktoruj/ping_pong:latest` - Replicas: `2` - Pods are running and container port `8080` is named `http-web`. |
+---
+
+### 06
+
+| **6** | Create a service `web-app-svc` in namespace `dev-2` to expose the `web-app` deployment on port `8080` on cluster nodes. |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Use imperative commands to create the manifest. - Namespace: `dev-2` - Service: `web-app-svc` - Port: `8080` - Type: `NodePort` - Use the right labels to select the target port. |
+---
+
+### 07
+
+| **7** | Create a pod `web-srv` based on image `viktoruj/ping_pong:latest` in the default namespace. The container in the pod should be named `app1`. |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod name: `web-srv` - Image: `viktoruj/ping_pong:latest` - Container name: `app1` |
+---
+
+### 08
+
+| **8** | 2 pods named `redis-node-xxxx` are running in the namespace `db-redis`. You need to scale the number of replicas down to `1`. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `redis-node-xxxx` - Number of pods running: 1 - Number of pods is scaled down to 1. |
+---
+
+### 09
+
+| **9** | Write a CLI command that shows pods from the `dev-2` namespace in `json` format. The script is located at `/var/work/artifact/9.sh`. |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - script shows pods from `dev-2` namespace in json format - script is located `/var/work/artifact/9.sh` |
+---
+
+### 10
+
+| **10** | **Create a Persistent Volume with the given specification. Run pod with pv.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Volume name: `pv-analytics` - pvc name: `pvc-analytics` - Storage: `100Mi` - Access mode: `ReadWriteOnce` - Host path: `/pv/analytics` - pod name: `analytics` - image: `busybox` - node: `nodeSelector` - node_name: `node_2` - command: `"sleep 60000"` - mountPath: `/pv/analytics` |
+---
+
+### 11
+
+| **11** | **Update Kubernetes cluster.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 7% |
+| Cluster | cluster2 (`kubectl config use-context cluster2-admin@cluster2`) |
+| Acceptance criteria | - The cluster is running Kubernetes 1.28.0; update it to 1.28.4. - Use the apt package manager and kubeadm for this. - Use ssh to connect to the instances. |
+---
+
+### 12
+
+| **12** | **Create new ingress resource to the service. Make it available at the path `/cat`** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Namespace: `cat` - Service: `cat` - Annotation: `nginx.ingress.kubernetes.io/rewrite-target: /` - Path: `/cat` - Check: `curl cka.local:30102/cat` |
+---
+
+### 13
+
+| **13** | In the namespace `team-elephant` create a new ServiceAccount `pod-sa`. Assign the account permissions to `list` and `get` `pods` using a Role `pod-sa-role` and a RoleBinding `pod-sa-roleBinding`. |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Namespace `team-elephant` - ServiceAccount `pod-sa` - Role `pod-sa-role`: resource `pods`, verbs `list` and `get` - RoleBinding `pod-sa-roleBinding` - create pod `pod-sa` with image = `viktoruj/cks-lab`, command = `sleep 60000`, ServiceAccount `pod-sa` |
+---
+
+### 14
+
+| **14** | Create a **DaemonSet** named **team-elephant-ds** with the requested parameters |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 5% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - DaemonSet: `team-elephant-ds` - Namespace: `team-elephant` - Image: `viktoruj/ping_pong` - Labels: `team=team-elephant`, `env=dev` - requests CPU: `50m` - requests Memory: `50Mi` - Pods are running on **all nodes**, including **control plane**. |
+---
+
+### 15
+
+| **15** | You have a legacy app in the `legacy` namespace. The application contains 2 containers. The first container writes log files to `/log/logs1.txt`, the second to `/log/logs2.txt`. You need to add another container, `log`, that will collect the logs from these containers and send them to stdout. |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - volume: name `logs`, type `emptyDir`, sizeLimit `500Mi` - containers `app1`, `app2`, `log` mount it at `/log` - log container: name `log`, Image: `viktoruj/cks-lab`, command `tail -f -n 100 /log/logs1.txt -f /log/logs2.txt` - check logs from app1 container: `k exec checker -n legacy -- sh -c 'curl legacy-app:8081/test_app1'` ; `k logs -l app=legacy-app -n legacy -c log` - check logs from app2 container: `k exec checker -n legacy -- sh -c 'curl legacy-app:8082/test_app2'` ; `k logs -l app=legacy-app -n legacy -c log` |
+---
+
+### 16
+
+| **16** | Write a CLI command that shows the latest events in the whole cluster, ordered by creation time (`metadata.creationTimestamp`). |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Events ordered by creation time - script is located `/var/work/artifact/16.sh` |
+---
+
+### 17
+
+| **17** | Write a CLI command that shows the names of all namespaced API resources in the Kubernetes cluster. |
+| :-----------------: | :--------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - show names of all namespaced api resources - script is located `/var/work/artifact/17.sh` |
+---
+
+### 18
+
+| **18** | cluster3 seems to have an issue: one of the nodes has not joined. It might also be outdated, so make sure it is running the same Kubernetes version as the control plane. You should fix the issue. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster3 (`kubectl config use-context cluster3-admin@cluster3`) |
+| Acceptance criteria | - k8s3_node_node_2 is running the same Kubernetes version as the control plane, is rejoined to the cluster, and is in Ready status. |
+---
+
+### 19
+
+| **19** | Create static pod `stat-pod` in the `default` namespace. Expose it via service `stat-pod-svc`. |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod name: `stat-pod` - Image: `viktoruj/ping_pong:latest` - requests CPU: `100m` - requests Memory: `128Mi` - app port `8080` - Service name: `stat-pod-svc` - Service type: `NodePort` - NodePort: `30084` - Pod is accessible from the control plane node. |
+---
+
+### 20
+
+| **20** | Back up etcd and save it on the control plane node at `/var/work/tests/artifacts/20/etcd-backup.db`. Restore etcd from `/var/work/tests/artifacts/20/etcd-backup_old.db` on the control plane node. |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster4 (`kubectl config use-context cluster4-admin@cluster4`) |
+| Acceptance criteria | - backup etcd to `/var/work/tests/artifacts/20/etcd-backup.db` - restore etcd from `/var/work/tests/artifacts/20/etcd-backup_old.db` - pods are ready in `kube-system` namespace |
+---
+
+### 21
+
+| **21** | **Network policy.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster5 (`kubectl config use-context cluster5-admin@cluster5`) |
+| Acceptance criteria | - create a default deny ingress policy in the `prod-db` namespace - create a policy that allows connections from the `prod` namespace to `prod-db` - create a policy that allows connections from the `stage` namespace for pods labeled `role=db-connect` - create a policy that allows connections from `any` namespace for pods labeled `role=db-external-connect` |
+---
diff --git a/docs/CKA/Mock exams/Solutions/01.md b/docs/CKA/Mock exams/Solutions/01.md
new file mode 100644
index 00000000..dff10f8e
--- /dev/null
+++ b/docs/CKA/Mock exams/Solutions/01.md
@@ -0,0 +1,829 @@
+# 01
+
+Solutions for CKA Mock exam #01
+
+[Video Solution](https://youtu.be/IZsqAPpbBxM?feature=shared)
+
+## 01
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run nginx-pod --image nginx:alpine
+```
+
+## 02
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run messaging --image redis:alpine -l tier=msg
+```
+
+## 03
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns apx-x9984574
+```
+
+## 04
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+mkdir /var/work/tests/artifacts/4/ -p
+k get no -o json > /var/work/tests/artifacts/4/nodes.json
+```
+
+## 05
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k expose pod messaging --port 6379 --name messaging-service
+```
+
+## 06
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create deployment hr-web-app --image nginx:alpine --replicas 2
+```
+
+## 07
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get no
+k run static-busybox --image busybox -o yaml --dry-run=client -l pod-type=static-pod --command sleep 60000 >7.yaml
+scp 7.yaml {control_plane}:/tmp/
+```
+
+*ssh to control_plane node*
+
+```sh
+sudo cp /tmp/7.yaml /etc/kubernetes/manifests/
+exit
+
+k get po -l pod-type=static-pod
+```
+
+## 08
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns finance
+k run temp-bus -n finance --image redis:alpine
+```
+
+## 09
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+mkdir -p /var/work/tests/artifacts/9
+k get no -o jsonpath='{range .items[*]}{.status.nodeInfo.osImage}{"\n"}{end}' >/var/work/tests/artifacts/9/os.json
+```
+
+## 10
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run multi-pod --image nginx --env name=alpha -o yaml --dry-run=client > 10.yaml
+```
+
+```yaml
+# vim 10.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: multi-pod
+ name: multi-pod
+spec:
+  containers:
+  - env:
+    - name: name
+      value: alpha
+    image: nginx
+    name: alpha
+    resources: {}
+  - env:
+    - name: name
+      value: beta
+    image: busybox
+    name: beta
+    command: ["sleep","4800"]
+    resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 10.yaml
+```
+
+## 11
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k expose deployment hr-web-app --port 80 --type NodePort --name hr-web-app-service
+k edit svc hr-web-app-service # change NodePort number to 30082
+```
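+
+If you prefer a non-interactive step, a strategic merge patch can pin the nodePort in one shot (a sketch; the service name and ports are the ones from this task):
+
+```sh
+# strategic merge patch merges service ports on the "port" key, so targetPort/protocol stay intact
+k patch svc hr-web-app-service -p '{"spec":{"ports":[{"port":80,"nodePort":30082}]}}'
+```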
+
+## 12
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get no -l node_name=node_2
+# ssh to worker node
+sudo mkdir /pv/analytics -p
+sudo chmod 777 -R /pv/analytics
+exit
+```
+
+```yaml
+# vim 12.yaml
+
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv-analytics
+ labels:
+ type: local
+spec:
+ storageClassName: manual
+ capacity:
+ storage: 100Mi
+ accessModes:
+ - ReadWriteOnce
+ hostPath:
+ path: "/pv/analytics"
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-analytics
+spec:
+ storageClassName: manual
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Mi
+
+---
+
+apiVersion: v1
+kind: Pod
+metadata:
+ name: analytics
+spec:
+ volumes:
+ - name: task-pv-storage
+ persistentVolumeClaim:
+ claimName: pvc-analytics
+ nodeSelector:
+ node_name: node_2
+ containers:
+ - name: task-pv-container
+ image: busybox
+ command: ["sleep","60000"]
+ volumeMounts:
+ - mountPath: "/pv/analytics"
+ name: task-pv-storage
+```
+
+```sh
+k apply -f 12.yaml
+```
+
+## 13
+
+You can use this page as a reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get no
+ssh {control-plane}
+```
+
+```sh
+sudo ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 \
+ --cert=/etc/kubernetes/pki/etcd/server.crt \
+ --key=/etc/kubernetes/pki/etcd/server.key \
+ --cacert=/etc/kubernetes/pki/etcd/ca.crt \
+ member list
+
+sudo mkdir /var/work/tests/artifacts/13/ -p
+
+sudo ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 \
+ --cert=/etc/kubernetes/pki/etcd/server.crt \
+ --key=/etc/kubernetes/pki/etcd/server.key \
+ --cacert=/etc/kubernetes/pki/etcd/ca.crt \
+ snapshot save /var/work/tests/artifacts/13/etcd-backup.db
+```
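+
+Optionally verify the snapshot file (a quick sanity check; on newer etcd releases the same subcommand also lives in `etcdutl`):
+
+```sh
+sudo ETCDCTL_API=3 etcdctl snapshot status /var/work/tests/artifacts/13/etcd-backup.db --write-out=table
+```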
+
+## 14
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# vim 14.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: redis-storage
+ name: redis-storage
+spec:
+ containers:
+ - image: redis:alpine
+ name: redis-storage
+ volumeMounts:
+ - mountPath: /data/redis
+ name: data
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ volumes:
+ - name: data
+ emptyDir:
+ sizeLimit: 500Mi
+status: {}
+```
+
+```sh
+k apply -f 14.yaml
+```
+
+## 15
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# vim 15.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: super-user-pod
+ name: super-user-pod
+spec:
+ containers:
+ - command:
+ - sleep
+ - "4800"
+ image: busybox:1.28
+ name: super-user-pod
+ resources: {}
+ securityContext:
+ capabilities:
+ add: ["SYS_TIME"]
+
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 15.yaml
+```
+
+## 16
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create deployment nginx-deploy --image=nginx:1.16 --dry-run=client -o yaml > 16.yaml
+k apply -f 16.yaml --record
+k set image deployment/nginx-deploy nginx=nginx:1.17 --record
+k rollout history deployment nginx-deploy
+```
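+
+Optional checks that the new revision is live:
+
+```sh
+k get deployment nginx-deploy -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'   # expect nginx:1.17
+k rollout status deployment nginx-deploy
+```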
+
+## 17
+
+Please check the following kubernetes docs page: https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+openssl genrsa -out myuser.key 2048
+openssl req -new -key myuser.key -out myuser.csr
+```
+
+```sh
+cat <<EOF > CSR.yaml
+apiVersion: certificates.k8s.io/v1
+kind: CertificateSigningRequest
+metadata:
+ name: john-developer # add
+spec:
+ request: $(cat myuser.csr | base64 | tr -d "\n")
+ signerName: kubernetes.io/kube-apiserver-client
+ usages:
+ - client auth
+ - digital signature
+ - key encipherment
+EOF
+```
+
+```sh
+k create ns development
+k apply -f CSR.yaml
+k get csr
+k certificate approve john-developer
+k create role developer --resource=pods --verb=create,list,get --namespace=development
+k create rolebinding developer-role-binding --role=developer --user=john --namespace=development
+k auth can-i update pods --as=john --namespace=development
+```
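+
+The `update` check above should return `no`; optionally confirm the verbs that were granted:
+
+```sh
+k auth can-i create pods --as=john --namespace=development   # expect: yes
+k auth can-i list pods --as=john --namespace=development     # expect: yes
+```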
+
+## 18
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create sa pvviewer
+k create clusterrole pvviewer-role --verb list,get --resource PersistentVolumes
+k create clusterrolebinding pvviewer-role-binding --clusterrole pvviewer-role --serviceaccount default:pvviewer
+```
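+
+Optionally confirm the binding works before creating the pod:
+
+```sh
+k auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer   # expect: yes
+```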
+
+```yaml
+# vim 18.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: pvviewer
+ name: pvviewer
+spec:
+ containers:
+ - image: viktoruj/cks-lab:latest
+ name: pvviewer
+ command: ["sleep","60000"]
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ serviceAccountName: pvviewer
+status: {}
+```
+
+```sh
+k apply -f 18.yaml
+```
+
+## 19
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# vim 19.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: non-root-pod
+ name: non-root-pod
+spec:
+ securityContext:
+ runAsUser: 1000
+ fsGroup: 2000
+ containers:
+ - image: redis:alpine
+ name: non-root-pod
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 19.yaml
+```
+
+## 20
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns prod-apps
+k create secret generic prod-secret -n prod-apps --from-literal var1=aaa --from-literal var2=bbb
+
+echo "test config" > config.yaml
+k create configmap prod-config -n prod-apps --from-file config.yaml
+
+k run prod-app --image viktoruj/cks-lab:latest -o yaml --dry-run=client -n prod-apps --command sleep 60000 >20.yaml
+```
+
+```yaml
+# vim 20.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: prod-app
+ name: prod-app
+ namespace: prod-apps
+spec:
+ containers:
+ - command:
+ - sleep
+ - "60000"
+ image: viktoruj/cks-lab:latest
+ name: app1
+ envFrom:
+ - secretRef:
+ name: prod-secret
+ volumeMounts:
+ - name: config
+ mountPath: "/app/configs"
+ readOnly: true
+ resources: {}
+
+ - command:
+ - sleep
+ - "60000"
+ image: viktoruj/cks-lab:latest
+ name: app2
+ volumeMounts:
+ - name: secret
+ mountPath: "/app/secrets"
+ readOnly: true
+ resources: {}
+
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ volumes:
+ - name: config
+ configMap:
+ name: prod-config
+ - name: secret
+ secret:
+ secretName: prod-secret
+```
+
+```sh
+k apply -f 20.yaml
+k get po -n prod-apps
+```
+
+References:
+https://kubernetes.io/docs/concepts/configuration/configmap/
+https://kubernetes.io/docs/tasks/inject-data-application/distribute-credentials-secure/
+
+## 21
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+mkdir -p /var/work/tests/artifacts/21
+k run nginx-resolver --image=nginx
+k expose pod nginx-resolver --name=nginx-resolver-service --port=80 --target-port=80 --type=ClusterIP
+```
+
+```sh
+# wait pod - ready status
+k run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service > /var/work/tests/artifacts/21/nginx.svc
+
+pod_ip=$( kubectl get po nginx-resolver -o jsonpath='{.status.podIP}' | sed 's/\./-/g' )
+k run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup $pod_ip.default.pod > /var/work/tests/artifacts/21/nginx.pod
+```
+
+## 22
+
+Use the correct context.
+
+```sh
+kubectl config use-context cluster2-admin@cluster2
+
+k get no
+```
+
+Drain the control plane (master) node.
+
+```sh
+kubectl drain {master node name} --ignore-daemonsets
+```
+
+```sh
+ssh {master node name}
+```
+
+```sh
+sudo su
+```
+
+Check kubeadm version.
+
+```sh
+kubeadm version
+```
+
+Check kubelet version.
+
+```sh
+kubelet --version
+```
+
+Update and install the packages.
+
+```sh
+apt update
+apt-mark unhold kubeadm
+
+apt install kubeadm=1.29.1-1.1 -y
+apt-mark hold kubeadm
+```
+
+Update control plane.
+
+```sh
+kubeadm upgrade plan
+
+kubeadm upgrade apply v1.29.1
+```
+
+Update kubelet and kubectl.
+
+```sh
+apt-mark unhold kubelet kubectl
+apt install kubelet=1.29.1-1.1 kubectl=1.29.1-1.1
+apt-mark hold kubelet kubectl
+
+service kubelet restart
+service kubelet status
+```
+
+```sh
+# exit to worker PC
+exit
+exit
+```
+
+Uncordon master node
+
+```sh
+kubectl uncordon {master node name}
+```
+
+Drain worker node
+
+```sh
+kubectl drain {worker node} --ignore-daemonsets
+```
+
+SSH to the worker node and update kubeadm.
+
+```sh
+ssh {worker node}
+sudo su
+
+apt update
+apt-mark unhold kubeadm
+apt install kubeadm=1.29.1-1.1
+apt-mark hold kubeadm
+kubeadm upgrade node
+```
+
+Update kubelet and kubectl
+
+```sh
+apt-mark unhold kubectl kubelet
+apt install kubelet=1.29.1-1.1 kubectl=1.29.1-1.1 -y
+apt-mark hold kubectl kubelet
+service kubelet restart
+service kubelet status
+```
+
+Uncordon worker node
+
+```sh
+kubectl uncordon {worker node name}
+```
+
+Check nodes
+
+```sh
+kubectl get no
+```
+
+## 23
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# vim 23_deny.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-deny-ingress
+ namespace: prod-db
+
+spec:
+ podSelector: {}
+ policyTypes:
+ - Ingress
+```
+
+```sh
+k apply -f 23_deny.yaml
+```
+
+```sh
+k get ns --show-labels
+```
+
+```yaml
+# vim 23_allow.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-policy
+ namespace: prod-db
+spec:
+ podSelector:
+ matchLabels: {}
+ policyTypes:
+ - Ingress
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ name: prod
+ - namespaceSelector:
+ matchLabels:
+ name: stage
+ podSelector:
+ matchLabels:
+ role: db-connect
+
+ - podSelector:
+ matchLabels:
+ role: db-external-connect
+ namespaceSelector: {}
+```
+
+```sh
+k apply -f 23_allow.yaml
+```
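+
+Optionally review the applied policies; the labels in the `from` selectors must match the namespace labels shown by `k get ns --show-labels`:
+
+```sh
+k get netpol -n prod-db
+k describe netpol allow-policy -n prod-db
+```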
+
+## 24
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns app-system
+k create deployment important-app --image nginx -o yaml --dry-run=client -n app-system >24.yaml
+```
+
+```yaml
+# vim 24.yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+ creationTimestamp: null
+ labels:
+ app: important-app
+ name: important-app
+ namespace: app-system
+spec:
+ selector:
+ matchLabels:
+ app: important-app
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: important-app
+ spec:
+ tolerations:
+ - key: node-role.kubernetes.io/control-plane
+ effect: "NoSchedule"
+ containers:
+ - image: nginx
+ name: nginx
+ resources: {}
+```
+
+```sh
+k apply -f 24.yaml
+k get no
+k get po -n app-system -o wide
+```
+
+## 25
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns app2-system
+k create deployment important-app2 --image nginx --replicas 3 -n app2-system -o yaml --dry-run=client > 25.yaml
+```
+
+```yaml
+# vim 25.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ creationTimestamp: null
+ labels:
+ app: important-app2
+ name: important-app2
+ namespace: app2-system
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: important-app2
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: important-app2
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - important-app2
+ topologyKey: "kubernetes.io/hostname"
+ tolerations:
+ - key: node-role.kubernetes.io/control-plane
+ effect: "NoSchedule"
+ containers:
+ - image: nginx
+ name: nginx
+ resources: {}
+status: {}
+```
+
+```sh
+k create poddisruptionbudget important-app2 -n app2-system --min-available 1 --selector app=important-app2
+```
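+
+Optional check that the budget is registered:
+
+```sh
+k get pdb important-app2 -n app2-system
+```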
diff --git a/docs/CKA/Mock exams/Solutions/02.md b/docs/CKA/Mock exams/Solutions/02.md
new file mode 100644
index 00000000..a0b0f636
--- /dev/null
+++ b/docs/CKA/Mock exams/Solutions/02.md
@@ -0,0 +1,956 @@
+# 02
+
+Solutions for CKA Mock exam #02
+
+[Video Solution](https://youtu.be/ia6Vw_BR-L0?feature=shared)
+
+## 01
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get po -n dev-1 --show-labels
+
+k get po -n dev-1 -l team=finance
+
+k top po -n dev-1 -l team=finance --sort-by memory
+
+k label pod {pod_name with max memory usage} -n dev-1 usage=max
+```
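+
+To grab the name non-interactively, something like this works (a sketch; `--sort-by memory` lists the heaviest pod first):
+
+```sh
+pod=$(k top po -n dev-1 -l team=finance --sort-by memory --no-headers | head -1 | awk '{print $1}')
+k label pod $pod -n dev-1 usage=max
+```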
+
+## 02
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run util --image busybox:1.36 -n dev --command sleep 3600
+k get po util -n dev
+```
+
+## 03
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns team-elephant
+k get ns team-elephant
+```
+
+## 04
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get no -l disk=ssd
+k run alpine --image alpine:3.15 -o yaml --dry-run=client --command sleep 6000 >4.yaml
+```
+
+```yaml
+# vim 4.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: alpine
+ name: alpine
+spec:
+ nodeSelector: # add
+ disk: ssd # add
+ containers:
+ - command:
+ - sleep
+ - "6000"
+ image: alpine:3.15
+ name: alpine
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+
+```
+
+```sh
+k apply -f 4.yaml
+```
+
+## 05
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create deployment web-app --image viktoruj/ping_pong:latest --replicas 2 --port 8080 -o yaml --dry-run=client >5.yaml
+```
+
+```sh
+# vim 5.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ creationTimestamp: null
+ labels:
+ app: web-app
+ name: web-app
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: web-app
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: web-app
+ spec:
+ containers:
+ - image: viktoruj/ping_pong:latest
+ name: ping-pong-2cwhf
+ ports:
+ - containerPort: 8080
+ name: http-web # add it
+ resources: {}
+status: {}
+```
+
+```sh
+k apply -f 5.yaml
+```
+
+## 06
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k expose deployment web-app -n dev-2 --port 8080 --type NodePort --name web-app-svc
+k get svc -n dev-2
+```
+
+## 07
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run web-srv --image viktoruj/ping_pong:latest --dry-run=client -o yaml > 7.yaml
+```
+
+```yaml
+# vim 7.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: web-srv
+ name: web-srv
+spec:
+ containers:
+ - image: viktoruj/ping_pong:latest
+ name: app1 # change from web-srv to app1
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 7.yaml
+
+k get po web-srv
+```
+
+## 08
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get deployment redis-node -n db-redis
+
+k scale deployment redis-node -n db-redis --replicas 1
+
+k get deployment redis-node -n db-redis
+```
+
+## 09
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+echo 'kubectl get po -n dev-2 -o json --context cluster1-admin@cluster1' >/var/work/artifact/9.sh
+bash /var/work/artifact/9.sh
+```
+
+## 10
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get no -l node_name=node_2
+# ssh to worker node
+sudo mkdir /pv/analytics -p
+sudo chmod 777 -R /pv/analytics
+exit
+```
+
+```yaml
+# vim 10.yaml
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv-analytics
+ labels:
+ type: local
+spec:
+ storageClassName: manual
+ capacity:
+ storage: 100Mi
+ accessModes:
+ - ReadWriteOnce
+ hostPath:
+ path: "/pv/analytics"
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-analytics
+spec:
+ storageClassName: manual
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Mi
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: analytics
+spec:
+ volumes:
+ - name: task-pv-storage
+ persistentVolumeClaim:
+ claimName: pvc-analytics
+ nodeSelector:
+ node_name: node_2
+ containers:
+ - name: task-pv-container
+ image: busybox
+ command: ["sleep","60000"]
+ volumeMounts:
+ - mountPath: "/pv/analytics"
+ name: task-pv-storage
+```
+
+```sh
+k apply -f 10.yaml
+```
+
+## 11
+
+Use the correct context.
+
+```sh
+kubectl config use-context cluster2-admin@cluster2
+```
+
+Drain master node
+
+```sh
+kubectl drain {master node name} --ignore-daemonsets
+```
+
+Check kubeadm version
+
+```sh
+ssh {master node name}
+
+sudo su
+
+kubeadm version
+```
+
+Check kubelet version
+
+```sh
+kubelet --version
+```
+
+Install and update packages
+
+```sh
+apt update
+
+apt-cache madison kubeadm
+
+apt-mark unhold kubeadm
+
+apt install kubeadm=1.28.4-1.1 -y
+apt-mark hold kubeadm
+```
+
+Update control plane
+
+```sh
+kubeadm upgrade plan
+
+kubeadm upgrade apply v1.28.4
+```
+
+Update kubelet and kubectl
+
+```sh
+apt-mark unhold kubelet kubectl
+apt install kubelet=1.28.4-1.1 kubectl=1.28.4-1.1 -y
+apt-mark hold kubelet kubectl
+
+service kubelet restart
+service kubelet status
+
+exit
+exit
+```
+
+Uncordon master node
+
+```sh
+kubectl uncordon {master node name}
+```
+
+Drain node
+
+```sh
+k drain {worker node } --ignore-daemonsets
+```
+
+SSH to the worker node and update kubeadm.
+
+```sh
+ssh {worker node}
+sudo su
+
+apt update
+apt-mark unhold kubeadm
+apt install kubeadm=1.28.4-1.1 -y
+apt-mark hold kubeadm
+kubeadm upgrade node
+```
+
+Update kubelet and kubectl
+
+```sh
+apt-mark unhold kubectl kubelet
+apt install kubelet=1.28.4-1.1 kubectl=1.28.4-1.1 -y
+apt-mark hold kubectl kubelet
+service kubelet restart
+service kubelet status
+```
+
+Uncordon worker node
+
+```sh
+kubectl uncordon {worker node name}
+```
+
+Check nodes
+
+```sh
+kubectl get no
+```
+
+## 12
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# vim 12.yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: cat
+ namespace: cat
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+ rules:
+ - http:
+ paths:
+ - path: /cat
+ pathType: Prefix
+ backend:
+ service:
+ name: cat
+ port:
+ number: 80
+
+```
+
+```sh
+k apply -f 12.yaml
+
+curl cka.local:30102/cat
+```
+
+## 13
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get ns team-elephant
+
+k create ns team-elephant
+
+k create serviceaccount pod-sa --namespace team-elephant
+
+k create role pod-sa-role -n team-elephant --resource pods --verb list,get
+
+k create rolebinding pod-sa-roleBinding -n team-elephant --role pod-sa-role --serviceaccount team-elephant:pod-sa
+
+k run pod-sa --image viktoruj/cks-lab -n team-elephant -o yaml --dry-run=client --command sleep 60000 >13.yaml
+```
+
+```yaml
+# vim 13.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: pod-sa
+ name: pod-sa
+ namespace: team-elephant
+spec:
+ serviceAccountName: pod-sa # <--- Add ServiceAccountName here
+ containers:
+ - command:
+ - sleep
+ - "60000"
+ image: viktoruj/cks-lab
+ name: pod-sa
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 13.yaml
+
+k get po -n team-elephant
+```
+
+```sh
+k auth can-i list pods --as=system:serviceaccount:team-elephant:pod-sa --namespace=team-elephant
+
+yes
+
+k auth can-i delete pods --as=system:serviceaccount:team-elephant:pod-sa --namespace=team-elephant
+
+no
+```
+
+(Optional) Check permissions from inside the pod:
+
+```sh
+kubectl exec pod-sa -n team-elephant --context cluster1-admin@cluster1 -- sh -c 'curl https://kubernetes.default/api/v1/namespaces/team-elephant/pods/ -s -H "Authorization: Bearer $(cat /run/secrets/kubernetes.io/serviceaccount/token)" -k'
+```
+
+## 14
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get ns team-elephant
+
+k create deployment team-elephant-ds --image viktoruj/ping_pong -o yaml --dry-run=client -n team-elephant > 14.yaml
+```
+
+```yaml
+# vim 14.yaml
+apiVersion: apps/v1
+kind: DaemonSet # update to DaemonSet
+metadata:
+ creationTimestamp: null
+ labels:
+ app: team-elephant-ds
+ team: team-elephant # add it
+ env: dev # add it
+ name: team-elephant-ds
+ namespace: team-elephant
+spec:
+# replicas: 1 # comment or delete it
+ selector:
+ matchLabels:
+ app: team-elephant-ds
+# strategy: {} # comment or delete it
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: team-elephant-ds
+ team: team-elephant # add it
+ env: dev # add it
+ spec:
+ tolerations: # add it
+ - key: node-role.kubernetes.io/control-plane # add it
+ effect: "NoSchedule" # add it
+ containers:
+ - image: viktoruj/ping_pong
+ name: ping-pong-q5cxp
+ resources:
+ requests: # add it
+ cpu: 50m # add it
+ memory: 50Mi # add it
+status: {}
+```
+
+```sh
+k apply -f 14.yaml
+k get po -n team-elephant -o wide
+```
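+
+Optional check: the DaemonSet should report one pod per node, control plane included:
+
+```sh
+k get ds team-elephant-ds -n team-elephant
+```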
+
+## 15
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# k edit deployment legacy-app -n legacy
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ creationTimestamp: null
+ labels:
+ app: legacy-app
+ name: legacy-app
+ namespace: legacy
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: legacy-app
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: legacy-app
+ spec:
+ volumes: # add it
+ - emptyDir: # add it
+ sizeLimit: 500Mi # add it
+ name: logs # add it
+ containers:
+ - image: viktoruj/ping_pong
+ name: app1
+ volumeMounts:
+ - mountPath: /log
+ name: logs
+ env:
+ - name: SERVER_NAME
+ value: "app1"
+ - name: SRV_PORT
+ value: "8081"
+ - name: METRIC_PORT
+ value: "9092"
+ - name: LOG_PATH
+ value: /log/logs1.txt
+ - name: ENABLE_OUTPUT
+ value: "false"
+ - image: viktoruj/ping_pong
+ name: app2
+ volumeMounts: # add it
+ - mountPath: /log # add it
+ name: logs # add it
+ env:
+ - name: SERVER_NAME
+ value: "app2"
+ - name: SRV_PORT
+ value: "8082"
+ - name: METRIC_PORT
+ value: "9092"
+ - name: LOG_PATH
+ value: /log/logs2.txt
+ - name: ENABLE_OUTPUT
+ value: "false"
+ - image: viktoruj/cks-lab # add it
+ name: log # add it
+ command: ["tail","-f","-n","100", "/log/logs1.txt","-f","/log/logs2.txt"] # add it
+ volumeMounts: # add it
+ - mountPath: /log # add it
+ name: logs # add it
+```
+
+```sh
+# check logs
+
+k exec checker -n legacy -- sh -c 'curl legacy-app:8081/test_app1'
+k exec checker -n legacy -- sh -c 'curl legacy-app:8082/test_app2'
+
+k logs -l app=legacy-app -n legacy -c log
+```
+
+## 16
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+echo 'kubectl get events --sort-by=".metadata.creationTimestamp" -A --context cluster1-admin@cluster1' >/var/work/artifact/16.sh
+bash /var/work/artifact/16.sh
+
+```
+
+## 17
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+echo 'kubectl api-resources --namespaced=true --context cluster1-admin@cluster1 ' > /var/work/artifact/17.sh
+bash /var/work/artifact/17.sh
+```
+
+## 18
+
+```sh
+kubectl config use-context cluster3-admin@cluster3
+```
+
+```sh
+k get no
+
+NAME STATUS ROLES AGE VERSION
+ip-10-2-27-136 NotReady 9m15s v1.29.0
+ip-10-2-31-152 Ready control-plane 9m37s v1.29.0
+```
+
+```sh
+ssh ip-10-2-27-136
+```
+
+```sh
+sudo su
+```
+
+```sh
+$ kubelet --version
+Kubernetes v1.29.0
+
+$ service kubelet status
+
+● kubelet.service - kubelet: The Kubernetes Node Agent
+ Loaded: loaded (/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
+ Drop-In: /usr/lib/systemd/system/kubelet.service.d
+ └─10-kubeadm.conf, 20-labels-taints.conf
+ Active: inactive (dead)
+ Docs: https://kubernetes.io/docs/
+
+Jan 12 08:33:05 ip-10-2-27-136 kubelet[5252]: I0112 08:33:05.996524 5252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started f>
+Jan 12 08:33:05 ip-10-2-27-136 kubelet[5252]: I0112 08:33:05.996547 5252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started f>
+Jan 12 08:33:05 ip-10-2-27-136 kubelet[5252]: I0112 08:33:05.996570 5252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started f>
+Jan 12 08:33:05 ip-10-2-27-136 kubelet[5252]: I0112 08:33:05.996592 5252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started f>
+Jan 12 08:33:05 ip-10-2-27-136 kubelet[5252]: I0112 08:33:05.996619 5252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started f>
+Jan 12 08:33:05 ip-10-2-27-136 kubelet[5252]: I0112 08:33:05.996641 5252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started f>
+Jan 12 08:33:06 ip-10-2-27-136 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
+Jan 12 08:33:06 ip-10-2-27-136 kubelet[5252]: I0112 08:33:06.681646 5252 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/et>
+Jan 12 08:33:06 ip-10-2-27-136 systemd[1]: kubelet.service: Succeeded.
+Jan 12 08:33:06 ip-10-2-27-136 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
+```
+
+```sh
+systemctl enable kubelet
+
+Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
+```
+
+```sh
+systemctl start kubelet
+systemctl status kubelet
+
+exit
+exit
+```
+
+```sh
+ubuntu@worker:~> k get no
+NAME STATUS ROLES AGE VERSION
+ip-10-2-27-136 Ready 101m v1.29.0
+ip-10-2-31-152 Ready control-plane 102m v1.29.0
+```
+
+## 19
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run stat-podv --image viktoruj/ping_pong:latest -o yaml --dry-run=client > 19.yaml
+```
+
+```yaml
+# vim 19.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: stat-podv
+ name: stat-podv
+spec:
+ containers:
+ - image: viktoruj/ping_pong:latest
+ name: stat-podv
+ resources:
+ requests: # add it
+ cpu: 100m # add it
+ memory: 128Mi # add it
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k get no
+scp 19.yaml {controlPlane}:/tmp/19.yaml
+
+ssh {controlPlane}
+```
+
+```sh
+sudo su
+
+mv /tmp/19.yaml /etc/kubernetes/manifests/
+```
+
+```sh
+$ k get po
+
+NAME READY STATUS RESTARTS AGE
+stat-podv-ip-10-2-11-20 1/1 Running 0 5s
+```
+
+```sh
+# exit to worker node
+exit
+exit
+```
+
+```sh
+k expose pod stat-podv-{controlPlane node name} --port 8080 --type NodePort --name stat-pod-svc
+```
+
+```sh
+k edit svc stat-pod-svc
+```
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ creationTimestamp: "2024-01-11T19:06:33Z"
+ labels:
+ run: stat-podv
+ name: stat-pod-svc
+ namespace: default
+ resourceVersion: "2638"
+ uid: 951e70b8-5238-4aa2-98d9-de242718db71
+spec:
+ clusterIP: 10.96.17.72
+ clusterIPs:
+ - 10.96.17.72
+ externalTrafficPolicy: Cluster
+ internalTrafficPolicy: Cluster
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ ports:
+ - nodePort: 30084 # update it to 30084
+ port: 8080
+ protocol: TCP
+ targetPort: 8080
+ selector:
+ run: stat-podv
+ sessionAffinity: None
+ type: NodePort
+status:
+ loadBalancer: {}
+
+```
+
+```sh
+$ k get svc stat-pod-svc
+
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+stat-pod-svc NodePort 10.96.17.72 8080:30084/TCP 2m16s
+```
+
+```sh
+$ curl {controlPlane ip}:30084
+
+Server Name: ping_pong_server
+URL: http://ip-10-2-11-20:30084/
+Client IP: 10.2.11.20
+Method: GET
+Protocol: HTTP/1.1
+Headers:
+User-Agent: curl/7.68.0
+Accept: */*
+```
+
+## 20
+
+```sh
+kubectl config use-context cluster4-admin@cluster4
+```
+
+```sh
+k get no
+
+ssh {controlPlane}
+```
+
+```sh
+sudo su
+
+ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 \
+ --cert=/etc/kubernetes/pki/etcd/server.crt \
+ --key=/etc/kubernetes/pki/etcd/server.key \
+ --cacert=/etc/kubernetes/pki/etcd/ca.crt \
+ snapshot save /var/work/tests/artifacts/20/etcd-backup.db
+
+# stop api and etcd
+
+mkdir /etc/kubernetes/tmp
+mv /etc/kubernetes/manifests/* /etc/kubernetes/tmp/
+
+
+# start etcd
+mv /etc/kubernetes/tmp/etcd.yaml /etc/kubernetes/manifests/
+
+crictl ps | grep etcd
+
+rm -rf /var/lib/etcd
+ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 \
+ --cert=/etc/kubernetes/pki/etcd/server.crt \
+ --key=/etc/kubernetes/pki/etcd/server.key \
+ --cacert=/etc/kubernetes/pki/etcd/ca.crt \
+ --data-dir=/var/lib/etcd \
+ snapshot restore /var/work/tests/artifacts/20/etcd-backup_old.db
+```
+
+```sh
+service kubelet restart
+
+# start all static pods
+
+mv /etc/kubernetes/tmp/* /etc/kubernetes/manifests/
+```
+
+```sh
+# check pod kube-system
+k get po -n kube-system
+
+
+crictl ps
+
+# delete old containers
+
+crictl stop {old container id }
+
+k get po -n kube-system
+```
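+
+Optionally confirm etcd itself is healthy after the restore (run on the control plane node):
+
+```sh
+ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 \
+  --cert=/etc/kubernetes/pki/etcd/server.crt \
+  --key=/etc/kubernetes/pki/etcd/server.key \
+  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
+  endpoint health
+```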
+
+## 21
+
+```sh
+kubectl config use-context cluster5-admin@cluster5
+```
+
+```yaml
+# vim 21_deny.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-deny-ingress
+ namespace: prod-db
+
+spec:
+ podSelector: {}
+ policyTypes:
+ - Ingress
+```
+
+```sh
+k apply -f 21_deny.yaml
+```
+
+```sh
+k get ns --show-labels
+```
+
+```yaml
+# vim 21_allow.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-policy
+ namespace: prod-db
+spec:
+ podSelector:
+ matchLabels: {}
+ policyTypes:
+ - Ingress
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ name: prod
+ - namespaceSelector:
+ matchLabels:
+ name: stage
+ podSelector:
+ matchLabels:
+ role: db-connect
+
+ - podSelector:
+ matchLabels:
+ role: db-external-connect
+ namespaceSelector: {}
+```
+
+```sh
+k apply -f 21_allow.yaml
+```
diff --git a/docs/CKA/about.md b/docs/CKA/about.md
new file mode 100644
index 00000000..90820790
--- /dev/null
+++ b/docs/CKA/about.md
@@ -0,0 +1,33 @@
+This section contains labs and mock exams to train for the CKA certification.
+
+- The platform uses **aws** to create the following resources: **vpc**, **subnets**, **security groups**, **ec2** (spot/on-demand), **s3**
+- After you launch a scenario, the platform creates all the necessary resources and gives you access to the k8s clusters.
+- To create clusters the platform uses **kubeadm**.
+- You can easily add your own scenario using the existing terraform module.
+- The platform supports the following versions:
+
+```text
+k8s version : [ 1.21 , 1.29 ] https://kubernetes.io/releases/
+Runtime :
+ docker [1.21 , 1.23]
+ cri-o [1.21 , 1.29]
+ containerd [1.21 , 1.30]
+ containerd_gvizor [1.21 , 1.30]
+OS for nodes :
+ ubuntu : 20.04 LTS , 22.04 LTS # cks default 20.04 LTS
+CNI : calico
+```
+
+Labs:
+
+- [01 - Fix a problem with kube-api](./Labs/01.md)
+- [02 - Create HPA based on the CPU load](./Labs/02.md)
+- [03 - Operations with Nginx ingress. Routing by header](./Labs/03.md)
+- [04 - Nginx ingress. Canary deployment](./Labs/04.md)
+- [05 - PriorityClass](./Labs/05.md)
+
+Exams:
+
+- [01](./Mock%20exams/01.md)
+- [02](./Mock%20exams/02.md)
+
diff --git a/docs/CKAD/Mock exams/01.md b/docs/CKAD/Mock exams/01.md
new file mode 100644
index 00000000..d8cb7f1f
--- /dev/null
+++ b/docs/CKAD/Mock exams/01.md
@@ -0,0 +1,129 @@
+# 01 - Tasks
+
+## Allowed resources
+
+### Kubernetes Documentation
+
+https://kubernetes.io/docs/ and their subdomains
+
+https://kubernetes.io/blog/ and their subdomains
+
+https://helm.sh/ and their subdomains
+
+This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)
+
+![preview](../../../static/img/ckad-01-preview.png)
+
+- run ``time_left`` on work pc to **check time**
+- run ``check_result`` on work pc to **check result**
+
+## Questions
+
+| **1** | **Deploy a pod named webhttpd** |
+| :-----------------: | :----------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `webhttpd` - Image: `httpd:alpine` - Namespace: `apx-z993845` |
+---
+| **2** | **Create a new Deployment named `nginx-app`** |
+| :-----------------: | :------------------------------------------------------------------------------ |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Deployment: `nginx-app` - Image: `nginx:alpine-slim` - Replicas: `2` |
+---
+| **3** | **Create a secret and create a pod with an environment variable from the secret.** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - secret: ns=`dev-db` name=`dbpassword` key=`pwd` value=`my-secret-pwd` - pod: ns=`dev-db` name=`db-pod` image=`mysql:8.0` env.name=`MYSQL_ROOT_PASSWORD` env.value=from secret `dbpassword` key=`pwd` |
+---
+| **4** | **Fix replicaset `rs-app2223` in namespace `rsapp`** |
+| :-----------------: | :-------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - ReplicaSet has 2 Ready replicas. |
+---
+| **5** | **Create deployment `msg` and service `msg-service`** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Deployment : ns=`messaging` name=`msg` image=`redis` replicas=`2` - Service: name=`msg-service` Port=`6379` Namespace=`messaging` deployment=`msg` - Use the right type of Service - Use imperative commands |
+---
+| **6** | **Update the environment variable on the pod text-printer.** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Change the value of the environment variable to `GREEN` - Ensure that the logs of the pod was updated. |
+---
+| **7** | **Run pod `appsec-pod` with `ubuntu:22.04` image as root user and with SYS_TIME capability.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Pod name: `appsec-pod` - Image: `ubuntu:22.04` - Command: `sleep 4800` - Container user: `root` - Allow container capability `SYS_TIME` |
+---
+| **8** | **Export the logs of the pod `app-xyz3322` to a file located at `/opt/logs/app-xyz123.log`. The pod is located in a different namespace. First, identify the namespace where the pod is running.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Logs at `/opt/logs/app-xyz123.log` |
+---
+| **9** | **Add a taint to the node with label work_type=redis. Create a pod with toleration.** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Taint the node with label `work_type=redis`: key: `app_type`, value: `alpha`, effect: `NoSchedule` - Create a pod called `alpha`, `image: redis`, with a toleration for the tainted node. - Does node01 have the correct taint? Does pod alpha have the correct toleration? |
+---
+| **10** | **Apply a label `app_type=beta` to the controlplane node. Create a new deployment called `beta-apps` with `image: nginx` and `replicas: 3`. Run the pods on controlplane only.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - controlplane has the labels `app_type=beta` - Deployment `beta-apps` - Pods of deployment are running only on controlplane? - Deployment beta-apps has 3 pods running? |
+---
+| **11** | **Create a new ingress resource for the service. Make it available at the path `/cat`** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - NameSpace: cat - service: cat - Annotation: `nginx.ingress.kubernetes.io/rewrite-target: /` - path: `/cat` - check ` curl ckad.local:30102/cat ` |
+---
+| **12** | **Create a new pod called `nginx1233` in the `web-ns` namespace with the image `nginx`. Add a livenessProbe to the container to restart it if the command `ls /var/www/html/` probe fails. This check should start after a delay of 10 seconds and run every 60 seconds.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - You may delete and recreate the object. Ignore the warnings from the probe. - Pod: `nginx1233`, namespace: `web-ns`, image `nginx`, livenessProbe? |
+---
+| **13** | **Create a job with the image busybox and name hi-job that executes the command 'echo hello world'.** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Job name: `hi-job` - Image: `busybox` - Command: `echo hello world` - completions: 3 - backoffLimit: 6 - RestartPolicy: Never |
+---
+| **14** | **Create a pod called `multi-pod` with two containers.** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | container 1: - name: `alpha`, image: `nginx:alpine-slim` - environment variable: `type: alpha` container 2: - name: `beta`, image: `busybox` - command: `sleep 4800` - environment variable: `type: beta` |
+---
+| **15** | **Create a Persistent Volume with the given specification. Run pod with pv.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Volume name: `pv-analytics` - pvc name: `pvc-analytics` - Storage: `100Mi` - Access mode: `ReadWriteOnce` - Host path: `/pv/analytics` - pod name: `analytics` - image: `busybox` - node: `nodeSelector` - node_name: `node_2` - command: `"sleep 60000"` - mountPath: `/pv/analytics` |
+---
+| **16** | **Create a CustomResourceDefinition definition and then apply it to the cluster** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `operators.stable.example.com` - Group : `stable.example.com` - Schema: `` - Scope: `Namespaced` - Names: `` Kind: `Operator` |
+---
+| **17** | **Write two cli commands to get the top nodes and top pods in all namespaces sorted by CPU utilization level. Place these shell commands in the necessary files.** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2 % |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Get top nodes and save the command to get this info to `/opt/18/nodes.txt` - Get pod utilization sorted by CPU consumption. Save the command to `/opt/18/pods.txt` |
+---
+| **18** | **Add prometheus helm repo and install prometheus chart to the cluster.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Add repo `prometheus-community` `https://prometheus-community.github.io/helm-charts` - Install prometheus from the helm chart to kubernetes cluster - Release name: `prom`, namespace: `monitoring` - helm chart: `prometheus-community/kube-prometheus-stack` |
+---
diff --git a/docs/CKAD/Mock exams/02.md b/docs/CKAD/Mock exams/02.md
new file mode 100644
index 00000000..0a9f1e0b
--- /dev/null
+++ b/docs/CKAD/Mock exams/02.md
@@ -0,0 +1,143 @@
+# 02 - Tasks
+
+## Allowed resources
+
+### Kubernetes Documentation
+
+https://kubernetes.io/docs/ and their subdomains
+
+https://kubernetes.io/blog/ and their subdomains
+
+https://helm.sh/ and their subdomains
+
+This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)
+
+![preview](../../../static/img/ckad-02-preview.png)
+
+- run ``time_left`` on work pc to **check time**
+- run ``check_result`` on work pc to **check result**
+
+## Questions
+
+| **1** | Create a secret **secret1** with value **key1=value1** in the namespace **jellyfish**. Add that secret as an environment variable to an existing **pod1** in the same namespace. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `secret1` - key1: `value1` - Namespace: `jellyfish` - pod env name: `PASSWORD` from secret `secret1` and key `key1` |
+---
+| **2** | Create a cron job `cron-job1` |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - name: `cron-job1` - namespace: `rnd` - image: `viktoruj/ping_pong:alpine` - Concurrency policy: `Forbid` - command: `echo "Hello from CKAD mock"` - run every 15 minutes - tolerate 4 failures - completions 3 times - imagePullPolicy `IfNotPresent` |
+---
+| **3** | There is a deployment `my-deployment` in the namespace `baracuda`. Roll back the deployment to the 1st version. Scale the deployment to 3 replicas. |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Roll back the deployment to the 1st version - Scale the deployment to 3 replicas |
+---
+| **4** | Create deployment `shark-app` in the `shark` namespace. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Name: `shark-app` - namespace `shark` - Image: `viktoruj/ping_pong` - container port `8080` - Environment variable `ENV1` = `8080` |
+---
+| **5** | Build a container image using the given manifest `/var/work/5/Dockerfile`. Podman is installed on the Worker PC |
+| :-----------------: | :---------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Image name: `ckad` - Tag: `0.0.1` - Export the image as an oci-archive to `/var/work/5/ckad.tar` |
+---
+| **6** | Update `sword-app` deployment in the `swordfish` namespace |
+| :-----------------: | :------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - user with ID `5000` on the container level - restrict privilege escalation on the container level |
+---
+| **7** | There are a deployment, a service, and an ingress in the `meg` namespace. Users can't access the app at `http://ckad.local:30102/app`. Please fix it. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - ` curl http://ckad.local:30102/app ` works. |
+---
+| **8** | There is a pod `web-app` in namespace `tuna`. It needs to communicate with the `mysql-db` service in namespace `tuna`. Network policies have already been created; don't modify them. Fix the problem. |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - `web-app` pod can communicate with the `mysql-db` service on port 3306 |
+---
+| **9** | Deployment `main-app` in the `salmon` namespace has 10 replicas. It is published at `http://ckad.local:30102/main-app`. Marketing asks you to create a new version of the application that will receive 30% of requests. The total number of application replicas should remain `10`. |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 5% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - The new version deployment name is `main-app-v2` - The new version of the application receives 30% of requests - the new version has image `viktoruj/ping_pong:latest` and env `SERVER_NAME=appV2` - the total number of replicas of the app is `10` |
+---
+| **10** | Create a Persistent Volume with the given specification. Run pod with pv. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Volume name: `pv-analytics` - pvc name: `pvc-analytics` - Storage: `100Mi` - Access mode: `ReadWriteOnce` - Host path: `/pv/analytics` - pod name: `analytics` - image: `busybox` - node: `nodeSelector` - node_name: `node_2` - command: `"sleep 60000"` - mountPath: `/pv/analytics` |
+---
+| **11** | Create a secret from a literal. Create a pod and mount the secret as an environment variable |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - secret: ns=`dev-db` name=`dbpassword` key=`pwd` value=`my-secret-pwd` - pod: ns=`dev-db` name=`db-pod` image=`mysql:8.0` env.name=`MYSQL_ROOT_PASSWORD` env.value=from secret `dbpassword` key=`pwd` |
+---
+| **12** | Export the logs of the pod `app-xyz3322` to a file located at `/opt/logs/app-xyz123.log`. The pod may be in any namespace. First, identify the namespace where the pod is running. |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Logs at `/opt/logs/app-xyz123.log` |
+---
+| **13** | **Create a new pod called `nginx1233` in the `web-ns` namespace with the image `nginx`. Add a livenessProbe to the container to restart it if the command `ls /var/www/html/` probe fails. This check should start after a delay of 10 seconds and run every 60 seconds.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - You may delete and recreate the object. Ignore the warnings from the probe. - Pod: `nginx1233`, namespace: `web-ns`, image `nginx`, livenessProbe? |
+---
+| **14** | **Add prometheus helm repo and install prometheus chart to the cluster.** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Add repo `prometheus-community` `https://prometheus-community.github.io/helm-charts` - Install prometheus from the helm chart to kubernetes cluster - Release name: `prom`, namespace: `monitoring` - helm chart: `prometheus-community/kube-prometheus-stack` |
+---
+| **15** | In the namespace `team-elephant` create a new ServiceAccount `pod-sa`. Assign the account permissions to `list` and `get` `pods` using Role `pod-sa-role` and RoleBinding `pod-sa-roleBinding` |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 8% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Namespace `team-elephant` - ServiceAccount `pod-sa` - Role `pod-sa-role`: resource `pods`, verbs `list` and `get` - RoleBinding `pod-sa-roleBinding` - create pod `pod-sa` image = `viktoruj/cks-lab`, command = `sleep 60000`, ServiceAccount `pod-sa` |
+---
+| **16** | You have a legacy app in the `legacy` namespace. The application contains 2 containers. The first container writes log files to `/log/logs1.txt`, the second container to `/log/logs2.txt`. You need to add another container `log` that will collect the logs from these containers and send them to stdout. |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - volume: name `logs`, type `emptyDir`, sizeLimit `500Mi` - containers `app1`, `app2`, `log` mount it at `/log` - log container: name `log`, image `viktoruj/cks-lab`, command `tail -f -n 100 /log/logs1.txt -f /log/logs2.txt` - check logs from app1 container: `k exec checker -n legacy -- sh -c 'curl legacy-app:8081/test_app1'` ; `k logs -l app=legacy-app -n legacy -c log` - check logs from app2 container: `k exec checker -n legacy -- sh -c 'curl legacy-app:8082/test_app2'` ; `k logs -l app=legacy-app -n legacy -c log` |
+---
+| **17** | Collect logs from 4 pods with label `app_name=xxx` in namespace `app-x` into the log file `/opt/17/17.log` |
+| :-----------------: | :----------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - `/opt/17/17.log` contains logs from 4 pods with label `app_name=xxx` in namespace `app-x` |
+---
+
+| **18** | Convert the existing pod in namespace `app-y` to a deployment `deployment-app-y`. Set `allowPrivilegeEscalation: false` and `privileged: false` |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 5% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - Ns `app-y` - deployment name `deployment-app-y` - image `viktoruj/ping_pong:alpine` - replicas `1` - env `SERVER_NAME = app-y` - `allowPrivilegeEscalation: false` - `privileged: false` |
+---
+
+| **19** | Create configmap `config` from file `/var/work/19/ingress_nginx_conf.yaml` in namespace `app-z`. Create deployment `app-z` that mounts the configmap as a volume with mount path `/app` |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - configmap `config` from file `/var/work/19/ingress_nginx_conf.yaml` in namespace `app-z` - deployment `app-z` in namespace `app-z` image `viktoruj/ping_pong:alpine` replicas `1` mount configmap to `/appConfig` |
+---
+| **20** | Create deployment `app` in namespace `app-20` with an init container. |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - ns `app-20` - deployment name `app` - init container image and app image `viktoruj/ping_pong:alpine` - replicas `1` - volume type `emptyDir`, sizeLimit `5Mi` - mounted to init and main containers at `/configs` - init container command `echo 'hello from init' > /configs/app.config` |
+---
diff --git a/docs/CKAD/Mock exams/Solutions/01.md b/docs/CKAD/Mock exams/Solutions/01.md
new file mode 100644
index 00000000..d8198f95
--- /dev/null
+++ b/docs/CKAD/Mock exams/Solutions/01.md
@@ -0,0 +1,607 @@
+# 01
+
+Solutions for CKAD Mock exam #01
+
+[Video Solution](https://www.youtube.com/watch?v=yQK7Ca8d-yw)
+
+## 01
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get ns apx-z993845
+k create ns apx-z993845
+
+k run webhttpd --image httpd:alpine -n apx-z993845
+k get po -n apx-z993845
+```
+
+## 02
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create deployment nginx-app --image nginx:alpine-slim --replicas 2
+k get deployment nginx-app
+```
+
+## 03
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create ns dev-db
+k create secret -n dev-db generic dbpassword --from-literal pwd=my-secret-pwd
+k run db-pod --namespace dev-db --labels type=db --image mysql:8.0 --dry-run=client -o yaml >3.yaml
+```
+
+Edit the definition file and add the env variable:
+
+```yaml
+# vim 3.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ labels:
+ type: db
+ name: db-pod
+ namespace: dev-db
+spec:
+ containers:
+ - image: mysql:8.0
+ name: db-pod
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: dbpassword
+ key: pwd
+```
+
+Apply the changes
+
+```sh
+k apply -f 3.yaml
+```
+
+## 04
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+$ k get po -n rsapp
+
+NAME READY STATUS RESTARTS AGE
+rs-app2223-78skl 0/1 ImagePullBackOff 0 7m55s
+rs-app2223-wg4w7 0/1 ImagePullBackOff 0 7m55s
+```
+
+1. Edit the ReplicaSet with the following command:
+
+```sh
+k edit rs -n rsapp rs-app2223
+# Then change container image from rrredis:aline to redis:alpine
+```
+
+2. Because this is a ReplicaSet, delete the existing pods so the ReplicaSet recreates them with the fixed image.
+
+```sh
+k delete po -n rsapp -l app=rs-app2223
+```
+
+3. Ensure that the new pods are running
+
+```sh
+k get po -n rsapp
+```
+
+## 05
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k create deployment -n messaging msg --image redis
+k expose -n messaging deployment/msg --name msg-service --target-port 6379 --type ClusterIP --port 6379
+```
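+
+As a quick sanity check, confirm the service actually selects the deployment's pod (the endpoints list should show the pod IP on port 6379):
+
+```sh
+k get svc msg-service -n messaging
+k get endpoints msg-service -n messaging
+```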
+
+## 06
+
+1. Get the manifest of the existing pod
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+
+k get pod text-printer -o yaml > 6.yaml
+```
+
+2. Change the value of env var from RED to GREEN
+
+```sh
+# vim 6.yaml
+...
+ env:
+ - name: COLOR
+ value: GREEN
+...
+```
+
+3. Remove the existing pod and create a new one from the updated manifest
+
+```sh
+k delete pod text-printer --force
+k apply -f 6.yaml
+```
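+
+Optionally verify the new value (assuming the image ships a shell):
+
+```sh
+k exec text-printer -- sh -c 'echo $COLOR'   # expect GREEN
+```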
+
+## 07
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+Generate a manifest file via the CLI.
+
+```sh
+k run appsec-pod --image ubuntu:22.04 --dry-run=client -o yaml > 7.yaml
+```
+
+Edit the manifest to add the security configuration.
+
+```yaml
+# vim 7.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ labels:
+ run: appsec-pod
+ name: appsec-pod
+spec:
+ containers:
+ - image: ubuntu:22.04
+ name: appsec-pod
+ args:
+ - sleep
+ - "4800"
+ securityContext:
+ capabilities:
+ add: ["SYS_TIME"]
+ runAsUser: 0
+```
+
+Apply the updated manifest.
+
+```sh
+k apply -f 7.yaml
+```
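+
+To double-check that the capability and user landed in the pod spec:
+
+```sh
+k get po appsec-pod -o jsonpath='{.spec.containers[0].securityContext}'
+# expect something like {"capabilities":{"add":["SYS_TIME"]},"runAsUser":0}
+```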
+
+## 08
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k logs pods/app-xyz3322
+k logs pods/app-xyz3322 > /opt/logs/app-xyz123.log
+```
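+
+If the redirect fails because the target directory is missing, create it first and confirm the file was written:
+
+```sh
+mkdir -p /opt/logs
+k logs pods/app-xyz3322 > /opt/logs/app-xyz123.log
+head /opt/logs/app-xyz123.log
+```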
+
+## 09
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+1. Add a taint to the node and generate a manifest for the pod
+
+```sh
+k taint node --help
+
+k taint node -l work_type=redis app_type=alpha:NoSchedule
+
+k run alpha --image redis --dry-run=client -o yaml > 9.yaml
+```
+
+2. Add `tolerations` to pod
+
+```yaml
+# vim 9.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ labels:
+ run: alpha
+ name: alpha
+spec:
+ containers:
+ - image: redis
+ name: alpha
+ tolerations:
+ - key: "app_type"
+ operator: "Equal"
+ value: "alpha"
+ effect: "NoSchedule"
+```
+
+```sh
+k apply -f 9.yaml
+```
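+
+Confirm the pod was scheduled onto the tainted node and carries the toleration:
+
+```sh
+k get po alpha -o wide                    # NODE should be the node labeled work_type=redis
+k describe po alpha | grep -A3 Tolerations
+```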
+
+## 10
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+1. Label the control-plane node and generate a manifest for the deployment
+
+```sh
+kubectl get no
+
+kubectl label node ${put-controlplane-hostname} app_type=beta
+
+kubectl create deployment beta-apps --image nginx --replicas 3 --dry-run=client -o yaml > 10.yaml
+```
+
+2. Modify the manifest file and add node affinity
+
+```yaml
+# vim 10.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: beta-apps
+ name: beta-apps
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: beta-apps
+ template:
+ metadata:
+ labels:
+ app: beta-apps
+ spec:
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: app_type
+ operator: In
+ values:
+ - beta
+ tolerations:
+ - key: node-role.kubernetes.io/control-plane
+ effect: "NoSchedule"
+ containers:
+ - image: nginx
+ name: nginx
+```
+Or use a `nodeSelector` instead:
+
+```yaml
+# vim 10.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: beta-apps
+ name: beta-apps
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: beta-apps
+ template:
+ metadata:
+ labels:
+ app: beta-apps
+ spec:
+ nodeSelector:
+ app_type: beta
+ tolerations:
+ - key: node-role.kubernetes.io/control-plane
+ effect: "NoSchedule"
+ containers:
+ - image: nginx
+ name: nginx
+
+```
+
+```sh
+k apply -f 10.yaml
+```
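+
+Check that all replicas landed on the labeled node:
+
+```sh
+k get po -l app=beta-apps -o wide
+```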
+
+## 11
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+#vim 11.yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: cat
+ namespace: cat
+ annotations:
+ nginx.ingress.kubernetes.io/rewrite-target: /
+spec:
+ rules:
+ - http:
+ paths:
+ - path: /cat
+ pathType: Prefix
+ backend:
+ service:
+ name: cat
+ port:
+ number: 80
+```
+
+```sh
+k apply -f 11.yaml
+```
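+
+A quick check of the created object (the backend for path `/cat` should be `cat:80`):
+
+```sh
+k get ingress cat -n cat
+k describe ingress cat -n cat
+```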
+
+## 12
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+
+k create ns web-ns
+
+k run nginx1233 --namespace web-ns --image nginx --dry-run=client -o yaml > 12.yaml
+```
+
+Edit the manifest to configure the liveness probe
+
+```yaml
+# vim 12.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ labels:
+ run: nginx1233
+ name: nginx1233
+ namespace: web-ns
+spec:
+ containers:
+ - image: nginx
+ name: nginx1233
+ livenessProbe:
+ exec:
+ command:
+ - ls
+ - /var/www/html/
+ initialDelaySeconds: 10
+ periodSeconds: 60
+```
+
+```sh
+k apply -f 12.yaml
+```
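+
+Confirm the probe was picked up:
+
+```sh
+k describe po nginx1233 -n web-ns | grep -i liveness
+# expect: exec [ls /var/www/html/] delay=10s ... period=60s
+```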
+
+## 13
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```yaml
+# vim 13.yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+ name: hi-job
+spec:
+ template:
+ spec:
+ containers:
+ - name: hi-job
+ image: busybox
+ command: ["echo", "hello world"]
+ restartPolicy: Never
+ backoffLimit: 6
+ completions: 3
+```
+
+```sh
+k apply -f 13.yaml
+```
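+
+Watch the job run to its 3 completions and check the output:
+
+```sh
+k get job hi-job
+k logs -l job-name=hi-job    # each pod should print "hello world"
+```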
+
+## 14
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k run multi-pod --image nginx:alpine-slim --env type=alpha -o yaml --dry-run=client >14.yaml
+```
+
+```yaml
+# vim 14.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: multi-pod
+ name: multi-pod
+spec:
+ containers:
+ - env:
+ - name: type
+ value: alpha
+ image: nginx:alpine-slim
+ name: alpha
+
+ - env:
+ - name: type
+ value: beta
+ image: busybox
+ name: beta
+ command: ["sleep","4800"]
+
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 14.yaml
+k get po multi-pod
+```
+
+## 15
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get no -l node_name=node_2
+# ssh to worker node
+sudo mkdir /pv/analytics -p
+sudo chmod 777 -R /pv/analytics
+exit
+```
+
+```yaml
+# vim 15.yaml
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+ name: pv-analytics
+ labels:
+ type: local
+spec:
+ storageClassName: manual
+ capacity:
+ storage: 100Mi
+ accessModes:
+ - ReadWriteOnce
+ hostPath:
+ path: "/pv/analytics"
+---
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: pvc-analytics
+spec:
+ storageClassName: manual
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 100Mi
+---
+apiVersion: v1
+kind: Pod
+metadata:
+ name: analytics
+spec:
+ volumes:
+ - name: task-pv-storage
+ persistentVolumeClaim:
+ claimName: pvc-analytics
+ nodeSelector:
+ node_name: node_2
+ containers:
+ - name: task-pv-container
+ image: busybox
+ command: ["sleep","60000"]
+ volumeMounts:
+ - mountPath: "/pv/analytics"
+ name: task-pv-storage
+```
+
+```sh
+k apply -f 15.yaml
+```
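+
+Check that the PVC bound to the PV and the pod landed on the expected node:
+
+```sh
+k get pv pv-analytics
+k get pvc pvc-analytics      # STATUS should be Bound
+k get po analytics -o wide
+```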
+
+## 16
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+[doc](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/)
+
+```yaml
+#vim 16.yaml
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: operators.stable.example.com
+spec:
+ group: stable.example.com
+ versions:
+ - name: v1
+ served: true
+ storage: true
+ schema:
+ openAPIV3Schema:
+ type: object
+ properties:
+ spec:
+ type: object
+ properties:
+ name:
+ type: string
+ email:
+ type: string
+ age:
+ type: integer
+ scope: Namespaced
+ names:
+ plural: operators
+ singular: operator
+ kind: Operator
+ shortNames:
+ - op
+```
+
+```sh
+k apply -f 16.yaml
+```
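+
+To make sure the CRD is served, list it and create a throwaway object (the sample `Operator` below is just an illustration):
+
+```sh
+k get crd operators.stable.example.com
+
+cat <<EOF | kubectl apply -f -
+apiVersion: stable.example.com/v1
+kind: Operator
+metadata:
+  name: test-operator
+spec:
+  name: test
+  email: test@example.com
+  age: 1
+EOF
+
+k get operators
+k delete operator test-operator
+```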
+
+## 17
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+echo "kubectl top nodes" > /opt/18/nodes.txt
+
+echo "kubectl top pod --all-namespaces --sort-by cpu" > /opt/18/pods.txt
+```
+
+## 18
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+
+
+helm install prom prometheus-community/kube-prometheus-stack \
+ --namespace monitoring --create-namespace
+```
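+
+Verify the release and its pods:
+
+```sh
+helm list -n monitoring
+k get po -n monitoring
+```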
diff --git a/docs/CKAD/Mock exams/Solutions/02.md b/docs/CKAD/Mock exams/Solutions/02.md
new file mode 100644
index 00000000..b85d5c4b
--- /dev/null
+++ b/docs/CKAD/Mock exams/Solutions/02.md
@@ -0,0 +1,1229 @@
+# 02
+
+Solutions for CKAD Mock exam #02
+
+[Video Solution](https://www.youtube.com/watch?v=_0nX68vil-A)
+
+## 01
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get ns jellyfish
+
+k create secret generic secret1 -n jellyfish --from-literal key1=value1
+
+k get po -n jellyfish
+
+k get po -n jellyfish -o yaml >1.yaml
+
+k delete -f 1.yaml
+```
+
+```yaml
+# vim 1.yaml
+apiVersion: v1
+items:
+- apiVersion: v1
+ kind: Pod
+ metadata:
+ annotations:
+ cni.projectcalico.org/containerID: cdf2830539800a7ed95df197ec8dfd9766589f60f1d27a43513a4f006b6af0e0
+ cni.projectcalico.org/podIP: 10.0.77.195/32
+ cni.projectcalico.org/podIPs: 10.0.77.195/32
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"run":"app1"},"name":"app1","namespace":"jellyfish"},"spec":{"containers":[{"image":"viktoruj/ping_pong","name":"app"}]}}
+ creationTimestamp: "2024-02-21T05:39:44Z"
+ labels:
+ run: app1
+ name: app1
+ namespace: jellyfish
+ resourceVersion: "1949"
+ uid: 0d02da57-635e-44da-be03-d952a3ee85f2
+ spec:
+ containers:
+ - image: viktoruj/ping_pong
+ imagePullPolicy: Always
+ name: app
+ resources: {}
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ volumeMounts:
+ - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
+ name: kube-api-access-rjv5n
+ readOnly: true
+ env: #add it
+ - name: PASSWORD #add it
+ valueFrom: #add it
+ secretKeyRef: #add it
+ name: secret1 #add it
+ key: key1 #add it
+
+ dnsPolicy: ClusterFirst
+ enableServiceLinks: true
+ nodeName: ip-10-2-7-44
+ preemptionPolicy: PreemptLowerPriority
+ priority: 0
+ restartPolicy: Always
+ schedulerName: default-scheduler
+ securityContext: {}
+ serviceAccount: default
+ serviceAccountName: default
+ terminationGracePeriodSeconds: 30
+ tolerations:
+ - effect: NoExecute
+ key: node.kubernetes.io/not-ready
+ operator: Exists
+ tolerationSeconds: 300
+ - effect: NoExecute
+ key: node.kubernetes.io/unreachable
+ operator: Exists
+ tolerationSeconds: 300
+ volumes:
+ - name: kube-api-access-rjv5n
+ projected:
+ defaultMode: 420
+ sources:
+ - serviceAccountToken:
+ expirationSeconds: 3607
+ path: token
+ - configMap:
+ items:
+ - key: ca.crt
+ path: ca.crt
+ name: kube-root-ca.crt
+ - downwardAPI:
+ items:
+ - fieldRef:
+ apiVersion: v1
+ fieldPath: metadata.namespace
+ path: namespace
+kind: List
+metadata:
+ resourceVersion: ""
+```
+
+```sh
+k apply -f 1.yaml
+```
+
+```sh
+$ k get po -n jellyfish
+
+NAME READY STATUS RESTARTS AGE
+app1 1/1 Running 0 15m
+```
+
+```sh
+k exec app1 -n jellyfish -- sh -c 'echo $PASSWORD'
+```
+
+## 02
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get ns rnd
+
+k create ns rnd
+
+k create cronjob cron-job1 --image viktoruj/ping_pong:alpine --schedule "*/15 * * * *" -n rnd -o yaml --dry-run=client >2.yaml
+```
+
+```yaml
+# vim 2.yaml
+
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+ creationTimestamp: null
+ name: cron-job1
+ namespace: rnd
+spec:
+ jobTemplate:
+ metadata:
+ creationTimestamp: null
+ name: cron-job1
+ spec:
+ template:
+ metadata:
+ creationTimestamp: null
+ spec:
+ containers:
+ - image: viktoruj/ping_pong:alpine
+ name: cron-job1
+ command: ["echo","Hello from CKAD mock"] # add it
+ resources: {}
+ restartPolicy: OnFailure
+ backoffLimit: 4 # add it
+ completions: 3 # add it
+ schedule: '*/15 * * * *'
+ concurrencyPolicy: Forbid # add it
+status: {}
+```
+
+```sh
+k apply -f 2.yaml
+
+k get cronjob -n rnd
+```
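+
+Instead of waiting for the schedule, you can trigger the CronJob once by hand (the job name `test-run` is arbitrary):
+
+```sh
+k create job test-run --from=cronjob/cron-job1 -n rnd
+k get job,po -n rnd
+```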
+
+## 03
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k rollout history deployment my-deployment -n baracuda
+
+k rollout undo deployment my-deployment --to-revision=1 -n baracuda
+
+k scale deployments.apps my-deployment -n baracuda --replicas 3
+```
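+
+Confirm the rollback and the new replica count:
+
+```sh
+k rollout status deployment my-deployment -n baracuda
+k get deployment my-deployment -n baracuda   # READY should show 3/3
+```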
+
+## 04
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get ns shark
+
+k create ns shark
+
+k create deployment shark-app -n shark --image viktoruj/ping_pong --port 8080 -o yaml --dry-run=client > 4.yaml
+```
+
+```yaml
+# vim 4.yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ creationTimestamp: null
+ labels:
+ app: shark-app
+ name: shark-app
+ namespace: shark
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: shark-app
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: shark-app
+ spec:
+ containers:
+ - image: viktoruj/ping_pong
+ name: ping-pong-cjnt8
+ env: # add it
+ - name: ENV1 # add it
+ value: "8080" # add it
+ ports:
+ - containerPort: 8080
+ resources: {}
+status: {}
+```
+
+```sh
+k apply -f 4.yaml
+k get po -n shark
+```
+
+## 05
+
+```sh
+cd /var/work/5/
+
+podman build . -t ckad:0.0.1
+
+podman save --help
+
+podman save --format oci-archive -o ckad.tar ckad:0.0.1
+```
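+
+Quick check that the image and the archive exist:
+
+```sh
+podman images | grep ckad
+ls -lh ckad.tar
+```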
+
+## 06
+
+https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k edit deployment sword-app -n swordfish
+```
+
+```yaml
+# k edit deployment sword-app -n swordfish
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ annotations:
+ deployment.kubernetes.io/revision: "1"
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"sword-app"},"name":"sword-app","namespace":"swordfish"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"sword-app"}},"strategy":{},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"sword-app"}},"spec":{"containers":[{"image":"viktoruj/ping_pong:alpine","name":"app","resources":{}}]}}},"status":{}}
+ creationTimestamp: "2024-02-28T05:32:13Z"
+ generation: 1
+ labels:
+ app: sword-app
+ name: sword-app
+ namespace: swordfish
+ resourceVersion: "1821"
+ uid: bbd06535-282a-45a2-9c13-388cd916f879
+spec:
+ progressDeadlineSeconds: 600
+ replicas: 1
+ revisionHistoryLimit: 10
+ selector:
+ matchLabels:
+ app: sword-app
+ strategy:
+ rollingUpdate:
+ maxSurge: 25%
+ maxUnavailable: 25%
+ type: RollingUpdate
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: sword-app
+ spec:
+ containers:
+ - image: viktoruj/ping_pong:alpine
+ imagePullPolicy: IfNotPresent
+ name: app
+ resources: {}
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ securityContext:
+ allowPrivilegeEscalation: false # add it
+ runAsUser: 5000 # add it
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ schedulerName: default-scheduler
+ securityContext: {}
+ terminationGracePeriodSeconds: 30
+```
+
+## 07
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+Check app
+
+```sh
+k get po -n meg
+```
+
+```sh
+NAME READY STATUS RESTARTS AGE
+meg-app-5957b8b4fb-7tv5s 1/1 Running 0 9m57s
+```
+
+```sh
+k exec {pod_name} -n meg -- curl 127.0.0.0
+```
+
+```sh
+ k exec meg-app-5957b8b4fb-7tv5s -n meg -- curl 127.0.0.0
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+ 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0Server Name: megApp
+URL: http://127.0.0.0/
+Client IP: 127.0.0.1
+Method: GET
+Protocol: HTTP/1.1
+Headers:
+User-Agent: curl/8.5.0
+Accept: */*
+100 139 100 139 0 0 20002 0 --:--:-- --:--:-- --:--:-- 23166
+```
+
+Check service
+
+```sh
+k get svc -n meg
+
+k exec {pod_name} -n meg -- curl meg-service
+```
+
+```sh
+$ k exec meg-app-5957b8b4fb-7tv5s -n meg -- curl meg-service
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+ 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
+curl: (7) Failed to connect to meg-service port 80 after 0 ms: Couldn't connect to server
+command terminated with exit code 7
+```
+
+```sh
+k get po -n meg --show-labels
+```
+
+```text
+NAME READY STATUS RESTARTS AGE LABELS
+meg-app-5957b8b4fb-7tv5s 1/1 Running 0 14m app=meg-app,pod-template-hash=5957b8b4fb
+```
+
+Fix the service
+
+```sh
+k edit svc meg-service -n meg
+```
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"meg-service"},"name":"meg-service","namespace":"meg"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"megapp"},"type":"ClusterIP"},"status":{"loadBalancer":{}}}
+ creationTimestamp: "2024-03-02T10:06:04Z"
+ labels:
+ app: meg-service
+ name: meg-service
+ namespace: meg
+ resourceVersion: "615"
+ uid: ad2edd84-efa9-4960-a4af-015384c05ad9
+spec:
+ clusterIP: 10.104.169.81
+ clusterIPs:
+ - 10.104.169.81
+ internalTrafficPolicy: Cluster
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ ports:
+ - port: 80
+ protocol: TCP
+ targetPort: 80
+ selector:
+ app: meg-app # update it
+ sessionAffinity: None
+ type: ClusterIP
+status:
+ loadBalancer: {}
+```
+
+Check service
+
+```sh
+k exec {pod_name} -n meg -- curl meg-service
+```
+
+```sh
+ k exec meg-app-5957b8b4fb-7tv5s -n meg -- curl meg-service
+ % Total % Received % Xferd Average Speed Time Time Time Current
+ Dload Upload Total Spent Left Speed
+100 141 100 141 0 0 110k 0 --:--:-- --:--:-- --:--:-- 137k
+Server Name: megApp
+URL: http://meg-service/
+Client IP: 10.2.30.2
+Method: GET
+Protocol: HTTP/1.1
+Headers:
+User-Agent: curl/8.5.0
+Accept: */*
+
+
+```
+
+Check ingress
+
+```sh
+curl http://ckad.local:30102/app
+```
+
+```sh
+ curl http://ckad.local:30102/app
+
+404 Not Found
+
+
for pod with image `nginx` and store log to `/var/work/tests/artifacts/12/log` |
+---
+
+### 13
+
+| **13** | **Image policy webhook** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 6% |
+| Cluster | cluster8 (`kubectl config use-context cluster8-admin@cluster8`) |
+| Acceptance criteria | **configure image policy webhook**: - `/etc/kubernetes/pki/admission_config.json` - `/etc/kubernetes/pki/webhook/admission_kube_config.yaml` - `https://image-bouncer-webhook:30020/image_policy` **create pod** `test-lasted` in `default` ns with image `nginx` **result:** `Error from server (Forbidden): pods test is forbidden: image policy webhook .... latest tag are not allowed` **create pod** `test-tag` in `default` ns with image `nginx:alpine3.17` **result:** `ok` |
+---
+
+### 14
+
+| **14** | **Fix Dockerfile** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Cluster | any |
+| Acceptance criteria | fix Dockerfile `/var/work/14/Dockerfile`: - use FROM image `20.04` version - use `myuser` for running app - build image `cks:14` (podman installed on worker pc) |
+---
+
+### 15
+
+| **15** | **Pod Security Standard** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster6 ( `kubectl config use-context cluster6-admin@cluster6` ) |
+| Acceptance criteria | There is Deployment `container-host-hacker` in Namespace `team-red` which mounts `/run/containerd` as a hostPath volume on the Node where it's running. This means that the Pod can access various data about other containers running on the same Node. To prevent this, configure Namespace `team-red` to `enforce` the `baseline` Pod Security Standard. Once completed, delete the Pod of the Deployment mentioned above. Check the ReplicaSet events and write the event/log lines containing the reason why the Pod isn't recreated into `/var/work/tests/artifacts/15/logs`. |
+---
+
+### 16
+
+| **16** | **Create a new user called john. Grant him access to the cluster. John should have permission to create, list and get pods in the development namespace.** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 6% |
+| Cluster | cluster1 (`kubectl config use-context cluster1-admin@cluster1`) |
+| Acceptance criteria | - create ns `development` - create private key and csr - CSR: `john-developer` with Status:Approved - Role Name: `developer`, namespace: `development`, Resource: `pods` , verbs: `create,list,get` - rolebinding: name=`developer-role-binding` , role=`developer`, user=`john` , namespace=`development` - Access: User 'john' has appropriate permissions |
+---
+
+### 17
+
+| **17** | **Open Policy Agent - Blacklist Images from very-bad-registry.com** |
+| :-----------------: | :------------------------------------------------------------------ |
+| Task weight | 6% |
+| Cluster | cluster9 (`kubectl config use-context cluster9-admin@cluster9`) |
+| Acceptance criteria | - Cannot run a pod with an image from **very-bad-registry.com** |
+---
+
+### 18
+
+| **18** | **Create Pod with Seccomp Profile. The profile is located on the worker node at `/var/work/profile-nginx.json`** |
+| :---------------------: | :----------------------------------------------------------------------------------------------------------------- |
+| **Task weight** | 6% |
+| **Cluster** | cluster10 (`kubectl config use-context cluster10-admin@cluster10`) |
+| **Acceptance criteria** | - Pod status is Running - Pod name is seccomp - Image is nginx - Seccomp profile is profile-nginx.json |
+---
diff --git a/docs/CKS/Mock exams/Solutions/01.md b/docs/CKS/Mock exams/Solutions/01.md
new file mode 100644
index 00000000..e2df88c6
--- /dev/null
+++ b/docs/CKS/Mock exams/Solutions/01.md
@@ -0,0 +1,1387 @@
+# 01
+
+Solutions for CKS Mock exam #01
+
+## 01
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+-> https://kubernetes.io/docs/home/ and find the template for **RuntimeClass**
+
+```yaml
+# vim 1.yaml
+# RuntimeClass is defined in the node.k8s.io API group
+apiVersion: node.k8s.io/v1
+kind: RuntimeClass
+metadata:
+ # The name the RuntimeClass will be referenced by.
+ # RuntimeClass is a non-namespaced resource.
+ name: gvisor
+# The name of the corresponding CRI configuration
+handler: runsc
+```
+
+```sh
+k apply -f 1.yaml
+k get runtimeclasses.node.k8s.io
+```
+
+```sh
+k get no --show-labels
+```
+
+```sh
+k label nodes {node2} RuntimeClass=runsc
+```
+
+```sh
+k get deployment -n team-purple
+k edit deployment -n team-purple
+```
+
+```yaml
+ runtimeClassName: gvisor # add to all deployment
+ nodeSelector: # add to all deployment
+ RuntimeClass: runsc # add to all deployment
+```
+
+```sh
+# check pods in ns team-purple
+k get po -n team-purple
+```
+
+```sh
+mkdir -p /var/work/tests/artifacts/1/
+```
+
+```sh
+k get po -n team-purple
+
+k exec {pod1} -n team-purple -- dmesg
+
+# find Starting gVisor..
+
+k exec {pod1} -n team-purple -- dmesg >/var/work/tests/artifacts/1/gvisor-dmesg
+```
+
+## 02
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+```sh
+k get po -n team-xxx -o yaml | grep 'image:' | uniq | grep -v 'docker'
+```
+
+```sh
+k get no
+ssh {node 2 }
+```
+
+```sh
+# find all image with 'CRITICAL'
+trivy i {image} | grep 'CRITICAL'
+```
+
+```sh
+# exit to worker PC
+exit
+```
+
+```sh
+k get deployment -n team-xxx
+
+k get deployment {deployment1} -n team-xxx -o yaml | grep 'image:'
+
+# if deployment has CRITICAL image
+# k scale deployment {deployment_name} -n team-xxx --replicas 0
+```
+
+## 03
+
+```sh
+kubectl config use-context cluster2-admin@cluster2
+```
+
+https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/
+
+```sh
+k get no
+ssh {control-plane}
+```
+
+```sh
+sudo su
+
+mkdir -p /etc/kubernetes/policy/
+```
+
+```sh
+# vim /etc/kubernetes/policy/log-policy.yaml
+
+apiVersion: audit.k8s.io/v1
+kind: Policy
+rules:
+- level: Metadata
+ resources:
+ - group: "" # core API group
+ resources: ["secrets"]
+ namespaces: ["prod"]
+- level: RequestResponse
+ resources:
+ - group: "" # core API group
+ resources: ["configmaps"]
+ namespaces: ["billing"]
+- level: None
+```
+
+```yaml
+# vim /etc/kubernetes/manifests/kube-apiserver.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ annotations:
+ kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.2.16.248:6443
+ creationTimestamp: null
+ labels:
+ component: kube-apiserver
+ tier: control-plane
+ name: kube-apiserver
+ namespace: kube-system
+spec:
+ containers:
+ - command:
+ - kube-apiserver
+ - --advertise-address=10.2.16.248
+ - --allow-privileged=true
+ - --authorization-mode=Node,RBAC
+ - --client-ca-file=/etc/kubernetes/pki/ca.crt
+ - --enable-admission-plugins=NodeRestriction
+ - --enable-bootstrap-token-auth=true
+ - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
+ - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
+ - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
+ - --etcd-servers=https://127.0.0.1:2379
+ - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
+ - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
+ - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
+ - --requestheader-allowed-names=front-proxy-client
+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
+ - --requestheader-extra-headers-prefix=X-Remote-Extra-
+ - --requestheader-group-headers=X-Remote-Group
+ - --requestheader-username-headers=X-Remote-User
+ - --secure-port=6443
+ - --service-account-issuer=https://kubernetes.default.svc.cluster.local
+ - --service-account-key-file=/etc/kubernetes/pki/sa.pub
+ - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
+ - --service-cluster-ip-range=10.96.0.0/12
+ - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
+ - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
+ - --audit-policy-file=/etc/kubernetes/policy/log-policy.yaml # add
+ - --audit-log-path=/var/logs/kubernetes-api.log # add
+
+ image: registry.k8s.io/kube-apiserver:v1.28.0
+ imagePullPolicy: IfNotPresent
+ livenessProbe:
+ failureThreshold: 8
+ httpGet:
+ host: 10.2.16.248
+ path: /livez
+ port: 6443
+ scheme: HTTPS
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ timeoutSeconds: 15
+ name: kube-apiserver
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ host: 10.2.16.248
+ path: /readyz
+ port: 6443
+ scheme: HTTPS
+ periodSeconds: 1
+ timeoutSeconds: 15
+ resources:
+ requests:
+ cpu: 250m
+ startupProbe:
+ failureThreshold: 24
+ httpGet:
+ host: 10.2.16.248
+ path: /livez
+ port: 6443
+ scheme: HTTPS
+ initialDelaySeconds: 10
+ periodSeconds: 10
+ timeoutSeconds: 15
+ volumeMounts:
+ - mountPath: /etc/ssl/certs
+ name: ca-certs
+ readOnly: true
+ - mountPath: /etc/ca-certificates
+ name: etc-ca-certificates
+ readOnly: true
+ - mountPath: /etc/pki
+ name: etc-pki
+ readOnly: true
+ - mountPath: /etc/kubernetes/pki
+ name: k8s-certs
+ readOnly: true
+ - mountPath: /usr/local/share/ca-certificates
+ name: usr-local-share-ca-certificates
+ readOnly: true
+ - mountPath: /usr/share/ca-certificates
+ name: usr-share-ca-certificates
+ readOnly: true
+
+ - mountPath: /etc/kubernetes/policy/log-policy.yaml # add
+ name: audit # add
+ readOnly: true # add
+ - mountPath: /var/logs/ # add
+ name: audit-log # add
+ readOnly: false # add
+
+ hostNetwork: true
+ priority: 2000001000
+ priorityClassName: system-node-critical
+ securityContext:
+ seccompProfile:
+ type: RuntimeDefault
+ volumes:
+ - hostPath:
+ path: /etc/ssl/certs
+ type: DirectoryOrCreate
+ name: ca-certs
+ - hostPath:
+ path: /etc/ca-certificates
+ type: DirectoryOrCreate
+ name: etc-ca-certificates
+ - hostPath:
+ path: /etc/pki
+ type: DirectoryOrCreate
+ name: etc-pki
+ - hostPath:
+ path: /etc/kubernetes/pki
+ type: DirectoryOrCreate
+ name: k8s-certs
+ - hostPath:
+ path: /usr/local/share/ca-certificates
+ type: DirectoryOrCreate
+ name: usr-local-share-ca-certificates
+ - hostPath:
+ path: /usr/share/ca-certificates
+ type: DirectoryOrCreate
+ name: usr-share-ca-certificates
+
+ - name: audit # add
+ hostPath: # add
+ path: /etc/kubernetes/policy/log-policy.yaml # add
+ type: File # add
+ # add
+ - name: audit-log # add
+ hostPath: # add
+ path: /var/logs/ # add
+ type: DirectoryOrCreate # add
+
+```
+
+```sh
+service kubelet restart
+k get no
+k get secret -n prod
+k get configmaps -n billing
+
+```
+
+```json
+# cat /var/logs/kubernetes-api.log | jq | grep secrets -B 5 -A 5
+
+--
+ "kind": "Event",
+ "apiVersion": "audit.k8s.io/v1",
+ "level": "Metadata",
+ "auditID": "a6b8945f-4914-4ba9-a80a-ea2441ad1e4f",
+ "stage": "ResponseComplete",
+ "requestURI": "/api/v1/namespaces/prod/secrets/db",
+ "verb": "get",
+ "user": {
+ "username": "system:serviceaccount:prod:k8api",
+ "uid": "cd47986d-8f88-4451-9de4-77fb3e9d46bb",
+ "groups": [
+--
+ "sourceIPs": [
+ "10.0.229.65"
+ ],
+ "userAgent": "curl/8.2.1",
+ "objectRef": {
+ "resource": "secrets",
+ "namespace": "prod",
+ "name": "db",
+ "apiVersion": "v1"
+ },
+ "responseStatus": {
+```
+
+```yaml
+# cat /var/logs/kubernetes-api.log | jq | grep configmaps -B 5 -A 5
+--
+ "sourceIPs": [
+ "10.2.16.248"
+ ],
+ "userAgent": "kubectl/v1.28.0 (linux/arm64) kubernetes/855e7c4",
+ "objectRef": {
+ "resource": "configmaps",
+ "namespace": "billing",
+ "name": "bill",
+ "apiVersion": "v1"
+ },
+ "requestReceivedTimestamp": "2023-09-27T19:14:33.778635Z",
+--
+ "kind": "Event",
+ "apiVersion": "audit.k8s.io/v1",
+ "level": "RequestResponse",
+ "auditID": "0266674d-db53-4a3d-bf9c-940c6aa43440",
+ "stage": "ResponseComplete",
+ "requestURI": "/api/v1/namespaces/billing/configmaps/bill?fieldManager=kubectl-edit&fieldValidation=Strict",
+ "verb": "patch",
+ "user": {
+ "username": "kubernetes-admin",
+ "groups": [
+ "system:masters",
+
+```
+
+## 04
+
+```sh
+kubectl config use-context cluster3-admin@cluster3
+```
+
+```sh
+k get no
+ssh {control-plane}
+```
+
+```sh
+sudo su
+
+kube-bench | grep '1.2.16' -A 5
+# read and fix
+```
+
+```sh
+kube-bench | grep '1.2.16' -A 5
+[FAIL] 1.2.17 Ensure that the --profiling argument is set to false (Automated)
+[FAIL] 1.2.18 Ensure that the --audit-log-path argument is set (Automated)
+[FAIL] 1.2.19 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
+[FAIL] 1.2.20 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
+[FAIL] 1.2.21 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
+[WARN] 1.2.22 Ensure that the --request-timeout argument is set as appropriate (Manual)
+--
+1.2.16 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+```
+
+```sh
+kube-bench | grep '1.3.2' -A 5
+# read and fix
+```
+
+```sh
+kube-bench | grep '1.3.2' -A 5
+[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
+[PASS] 1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)
+[PASS] 1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)
+[PASS] 1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)
+[PASS] 1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)
+[PASS] 1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+--
+1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
+on the control plane node and set the below parameter.
+--profiling=false
+
+1.4.1 Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
+on the control plane node and set the below parameter.
+```
+
+```sh
+kube-bench | grep '1.4.1' -A 5
+# read and fix
+```
+
+```sh
+ kube-bench | grep '1.4.1' -A 5
+[FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated)
+[PASS] 1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)
+
+== Remediations master ==
+1.1.9 Run the below command (based on the file location on your system) on the control plane node.
+For example, chmod 600
+--
+1.4.1 Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file
+on the control plane node and set the below parameter.
+--profiling=false
+
+
+```
+
+Exit to work PC
+
+```sh
+k get no
+ssh {work node}
+```
+
+```sh
+sudo su
+
+kube-bench | grep '4.2.6' -A 5
+# read and fix
+
+# exit to work PC
+```
+
+## 05
+
+```sh
+kubectl config use-context cluster6-admin@cluster6
+```
+
+```sh
+k get secret db -n team-5 -o yaml
+```
+
+```yaml
+apiVersion: v1
+data:
+ password: UGExNjM2d29yRA==
+ user: YWQtYWRtaW4=
+kind: Secret
+metadata:
+ annotations:
+ kubectl.kubernetes.io/last-applied-configuration: |
+ {"apiVersion":"v1","data":{"password":"UGExNjM2d29yRA==","user":"YWQtYWRtaW4="},"kind":"Secret","metadata":{"annotations":{},"creationTimestamp":null,"name":"db","namespace":"team-5"}}
+ creationTimestamp: "2023-09-27T16:47:13Z"
+ name: db
+ namespace: team-5
+ resourceVersion: "540"
+ uid: ba6e2888-6f02-4731-bba4-39df2fefc91d
+type: Opaque
+
+```
+
+```sh
+mkdir /var/work/tests/artifacts/5/ -p
+echo {user} | base64 -d > /var/work/tests/artifacts/5/user
+echo {password} | base64 -d > /var/work/tests/artifacts/5/password
+```
+
+```sh
+k create secret generic db-admin -n team-5 --from-literal user=xxx --from-literal password=yyyy
+k run db-admin --image viktoruj/cks-lab -n team-5 -o yaml --dry-run=client --command sleep 60000 >5.yaml
+```
+
+https://kubernetes.io/docs/concepts/configuration/secret/
+
+```yaml
+# vim 5.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: db-admin
+ name: db-admin
+ namespace: team-5
+spec:
+ volumes:
+ - name: db-admin
+ secret:
+ secretName: db-admin
+ containers:
+ - command:
+ - sleep
+ - "60000"
+ image: viktoruj/cks-lab
+ name: db-admin
+ volumeMounts:
+ - name: db-admin
+ readOnly: true
+ mountPath: "/mnt/secret"
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+
+```
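+
+Apply the manifest and, once the pod is running, confirm the secret is mounted:
+
+```sh
+k apply -f 5.yaml
+k get po db-admin -n team-5
+k exec db-admin -n team-5 -- ls /mnt/secret   # expect: password  user
+```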
+
+## 06
+
+```sh
+kubectl config use-context cluster4-admin@cluster4
+```
+
+```sh
+k get po -n kube-system | grep api
+```
+
+```sh
+k exec -n kube-system kube-apiserver-ip-10-2-11-163 -- kube-apiserver --help | grep cip
+ --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used.
+```
+
+```sh
+k exec -n kube-system kube-apiserver-ip-10-2-11-163 -- kube-apiserver --help | grep tls | grep min
+--tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13
+
+k get po -n kube-system | grep etcd
+k exec -n kube-system etcd-ip-10-2-11-163 -- etcd --help | grep cip
+--cipher-suites ''
+ Comma-separated list of supported TLS cipher suites between client/server and peers (empty will be auto-populated by Go)
+
+```
+
+```sh
+k get no
+
+ssh {control-plane}
+```
+
+```sh
+sudo su
+vim /etc/kubernetes/manifests/kube-apiserver.yaml
+```
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ annotations:
+ kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.2.11.163:6443
+ creationTimestamp: null
+ labels:
+ component: kube-apiserver
+ tier: control-plane
+ name: kube-apiserver
+ namespace: kube-system
+spec:
+ containers:
+ - command:
+ - kube-apiserver
+ - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 # add
+ - --tls-min-version=VersionTLS13 # add
+ - --advertise-address=10.2.11.163
+ - --allow-privileged=true
+ - --authorization-mode=Node,RBAC
+ - --client-ca-file=/etc/kubernetes/pki/ca.crt
+ - --enable-admission-plugins=NodeRestriction
+ - --enable-bootstrap-token-auth=true
+ - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
+ - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
+ - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
+ - --etcd-servers=https://127.0.0.1:2379
+ - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
+ - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
+ - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
+ - --requestheader-allowed-names=front-proxy-client
+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
+ - --requestheader-extra-headers-prefix=X-Remote-Extra-
+ - --requestheader-group-headers=X-Remote-Group
+ - --requestheader-username-headers=X-Remote-User
+ - --secure-port=6443
+ - --service-account-issuer=https://kubernetes.default.svc.cluster.local
+ - --service-account-key-file=/etc/kubernetes/pki/sa.pub
+ - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
+ - --service-cluster-ip-range=10.96.0.0/12
+ - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
+ - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
+..........
+```
+
+```yaml
+# vim /etc/kubernetes/manifests/etcd.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ annotations:
+ kubeadm.kubernetes.io/etcd.advertise-client-urls: https://10.2.11.163:2379
+ creationTimestamp: null
+ labels:
+ component: etcd
+ tier: control-plane
+ name: etcd
+ namespace: kube-system
+spec:
+ containers:
+ - command:
+ - etcd
+ - --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 # add
+ - --advertise-client-urls=https://10.2.11.163:2379
+ - --cert-file=/etc/kubernetes/pki/etcd/server.crt
+ - --client-cert-auth=true
+ - --data-dir=/var/lib/etcd
+ - --experimental-initial-corrupt-check=true
+ - --experimental-watch-progress-notify-interval=5s
+ - --initial-advertise-peer-urls=https://10.2.11.163:2380
+ - --initial-cluster=ip-10-2-11-163=https://10.2.11.163:2380
+ - --key-file=/etc/kubernetes/pki/etcd/server.key
+ - --listen-client-urls=https://127.0.0.1:2379,https://10.2.11.163:2379
+ - --listen-metrics-urls=http://127.0.0.1:2381
+ - --listen-peer-urls=https://10.2.11.163:2380
+ - --name=ip-10-2-11-163
+ - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
+ - --peer-client-cert-auth=true
+ - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
+ - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
+ - --snapshot-count=10000
+ - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
+```
+
+## 07
+
+```sh
+kubectl config use-context cluster5-admin@cluster5
+```
+
+https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
+
+```sh
+k get no
+ssh {control-plane}
+```
+
+```sh
+sudo su
+mkdir /etc/kubernetes/enc/ -p
+```
+
+```yaml
+# vim /etc/kubernetes/enc/enc.yaml
+apiVersion: apiserver.config.k8s.io/v1
+kind: EncryptionConfiguration
+resources:
+ - resources:
+ - secrets
+ providers:
+ - aescbc:
+ keys:
+ - name: key1
+ secret: MTIzNDU2Nzg5MDEyMzQ1Ng==
+ - identity: {}
+```
+
+```yaml
+# vim /etc/kubernetes/manifests/kube-apiserver.yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ annotations:
+ kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.20.30.40:443
+ creationTimestamp: null
+ labels:
+ app.kubernetes.io/component: kube-apiserver
+ tier: control-plane
+ name: kube-apiserver
+ namespace: kube-system
+spec:
+ containers:
+ - command:
+ - kube-apiserver
+ ...
+ - --encryption-provider-config=/etc/kubernetes/enc/enc.yaml # add this line
+ volumeMounts:
+ ...
+ - name: enc # add this line
+ mountPath: /etc/kubernetes/enc # add this line
+ readOnly: true # add this line
+ ...
+ volumes:
+ ...
+ - name: enc # add this line
+ hostPath: # add this line
+ path: /etc/kubernetes/enc # add this line
+ type: DirectoryOrCreate # add this line
+ ...
+```
+
+```sh
+service kubelet restart
+k get no
+# wait k8s ready
+```
+
+```sh
+k create secret generic test-secret -n prod --from-literal password=strongPassword
+```
+
+```sh
+# encrypt all secrets in stage ns with new config
+kubectl get secrets -n stage -o json | kubectl replace -f -
+```
+
+```sh
+# check
+ETCDCTL_API=3 etcdctl \
+ --cacert=/etc/kubernetes/pki/etcd/ca.crt \
+ --cert=/etc/kubernetes/pki/etcd/server.crt \
+ --key=/etc/kubernetes/pki/etcd/server.key \
+ get /registry/secrets/stage/stage | hexdump -C
+
+```
+
+```sh
+# exit to work pc
+```
+
+## 08
+
+```sh
+kubectl config use-context cluster6-admin@cluster6
+```
+
+https://kubernetes.io/docs/concepts/services-networking/network-policies/
+
+```yaml
+# vim 8_deny.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: default-deny-ingress
+ namespace: prod-db
+
+spec:
+ podSelector: {}
+ policyTypes:
+ - Ingress
+```
+
+```sh
+k apply -f 8_deny.yaml
+```
+
+```sh
+k get ns --show-labels
+```
+
+```yaml
+# vim 8_allow.yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+ name: allow-policy
+ namespace: prod-db
+spec:
+ podSelector:
+ matchLabels: {}
+ policyTypes:
+ - Ingress
+ ingress:
+ - from:
+ - namespaceSelector:
+ matchLabels:
+ name: prod
+ - namespaceSelector:
+ matchLabels:
+ name: stage
+ podSelector:
+ matchLabels:
+ role: db-connect
+
+ - podSelector:
+ matchLabels:
+ role: db-external-connect
+ namespaceSelector: {}
+```
+
+```sh
+k apply -f 8_allow.yaml
+```
+
+## 09
+
+```sh
+kubectl config use-context cluster6-admin@cluster6
+```
+
+```sh
+cat /opt/course/9/profile
+k get no
+k label no {worker node} security=apparmor
+```
+
+```sh
+scp /opt/course/9/profile {worker node}:/tmp/
+ssh {worker node}
+sudo su
+```
+
+```sh
+apparmor_parser -q /tmp/profile
+apparmor_status
+apparmor_status | grep 'very-secure'
+
+# exit to work pc
+```
+
+```sh
+mkdir /var/work/tests/artifacts/9/ -p
+k create deployment apparmor -n apparmor --image nginx:1.19.2 --dry-run=client -o yaml >9.yaml
+```
+
+```yaml
+# vim 9.yaml
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ creationTimestamp: null
+ labels:
+ app: apparmor
+ name: apparmor
+ namespace: apparmor
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: apparmor
+ strategy: {}
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: apparmor
+ spec:
+ nodeSelector: # add it
+ security: apparmor # add it
+ securityContext:
+ appArmorProfile: # add it
+ type: Localhost # add it
+ localhostProfile: very-secure # add it
+ containers:
+ - image: nginx:1.19.2
+ name: c1 # update
+ resources: {}
+status: {}
+```
+
+```sh
+k apply -f 9.yaml
+k get po -n apparmor
+```
+
+```text
+NAME READY STATUS RESTARTS AGE
+apparmor-555d68c4d8-ntcgl 0/1 CrashLoopBackOff 1 (8s ago) 10s
+```
+
+```sh
+k logs {apparmor-xxxx} -n apparmor
+```
+
+```text
+/docker-entrypoint.sh: 13: /docker-entrypoint.sh: cannot create /dev/null: Permission denied
+/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration
+2023/09/29 06:14:49 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
+nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
+```
+
+```sh
+k logs {apparmor-xxxx} -n apparmor>/var/work/tests/artifacts/9/log
+```
+
+## 10
+
+```sh
+kubectl config use-context cluster6-admin@cluster6
+```
+
+https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
+
+Add readOnlyRootFilesystem and volumes to write
+
+```yaml
+# k edit deployment secure -n secure
+
+# add line to container level
+
+securityContext: # add
+ readOnlyRootFilesystem: true # add
+ runAsGroup: 3000
+ runAsUser: 3000
+ allowPrivilegeEscalation: false
+volumeMounts: # to c1 container
+ - mountPath: /tmp
+ name: temp-vol
+
+# add to spec level
+
+volumes:
+- emptyDir: {}
+ name: temp-vol
+
+```
+
+Check that the pod is running
+
+```yaml
+# k edit deployment secure -n secure
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ labels:
+ app: secure
+ name: secure
+ namespace: secure
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: secure
+ strategy:
+ rollingUpdate:
+ maxSurge: 25%
+ maxUnavailable: 25%
+ type: RollingUpdate
+ template:
+ metadata:
+ creationTimestamp: null
+ labels:
+ app: secure
+ spec:
+ containers:
+ - command:
+ - sh
+ - -c
+ - while true ; do echo "$(date) i am working . c1 . $(id)"; sleep 10 ;done
+ image: viktoruj/cks-lab
+ imagePullPolicy: Always
+ name: c1
+ resources: {}
+ securityContext:
+ readOnlyRootFilesystem: true
+ runAsGroup: 3000
+ runAsUser: 3000
+ allowPrivilegeEscalation: false
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ volumeMounts:
+ - mountPath: /tmp
+ name: temp-vol
+ - command:
+ - sh
+ - -c
+ - while true ; do echo "$(date) i am working . c2 . $(id)"; sleep 10 ;done
+ image: viktoruj/cks-lab
+ imagePullPolicy: Always
+ name: c2
+ resources: {}
+ securityContext:
+ readOnlyRootFilesystem: true
+ runAsGroup: 3000
+ runAsUser: 3000
+ allowPrivilegeEscalation: false
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ - command:
+ - sh
+ - -c
+ - while true ; do echo "$(date) i am working . c3 . $(id)"; sleep 10 ;done
+ image: viktoruj/cks-lab
+ imagePullPolicy: Always
+ name: c3
+ resources: {}
+ securityContext:
+ readOnlyRootFilesystem: true
+ runAsGroup: 3000
+ runAsUser: 3000
+ allowPrivilegeEscalation: false
+ terminationMessagePath: /dev/termination-log
+ terminationMessagePolicy: File
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+ schedulerName: default-scheduler
+ securityContext: {}
+ terminationGracePeriodSeconds: 30
+ volumes:
+ - emptyDir: {}
+ name: temp-vol
+```
+
+## 11
+
+```sh
+kubectl config use-context cluster6-admin@cluster6
+```
+
+```sh
+k get sa dev -n rbac-1
+k get rolebindings.rbac.authorization.k8s.io -n rbac-1 -o wide
+```
+
+```sh
+k edit role dev -n rbac-1
+```
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+ name: dev
+ namespace: rbac-1
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - pods
+ verbs:
+ - create
+ - watch # update
+ - list
+
+```
+
+```sh
+k create role dev -n rbac-2 --resource configmaps --verb get,list
+k create rolebinding dev -n rbac-2 --serviceaccount rbac-1:dev --role dev
+k run dev-rbac -n rbac-1 --image viktoruj/cks-lab -o yaml --dry-run=client --command sleep 60000 > 11.yaml
+```
+
+https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
+
+```yaml
+# vim 11.yaml
+
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: dev-rbac
+ name: dev-rbac
+ namespace: rbac-1
+spec:
+ serviceAccountName: dev # add it
+ containers:
+ - command:
+ - sleep
+ - "60000"
+ image: viktoruj/cks-lab
+ name: dev-rbac
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+status: {}
+```
+
+```sh
+k apply -f 11.yaml
+k get po -n rbac-1
+```
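+
+Optionally verify the permissions via impersonation:
+
+```sh
+k auth can-i watch pods -n rbac-1 --as system:serviceaccount:rbac-1:dev      # yes
+k auth can-i get configmaps -n rbac-2 --as system:serviceaccount:rbac-1:dev  # yes
+k auth can-i delete pods -n rbac-1 --as system:serviceaccount:rbac-1:dev     # no
+```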
+
+## 12
+
+```sh
+kubectl config use-context cluster7-admin@cluster7
+```
+
+```sh
+k get no
+ssh {work node}
+```
+
+```sh
+sysdig --help
+sysdig --list
+sysdig --list | grep container
+sysdig --list | grep user
+sysdig --list | grep time
+sysdig --list | grep k8s
+```
+
+```sh
+sysdig -p"%evt.time,%container.id,%container.name,%user.name,%k8s.ns.name,%k8s.pod.name" container.image=docker.io/library/nginx:latest
+
+sysdig -p"%evt.time,%container.id,%container.name,%user.name,%k8s.ns.name,%k8s.pod.name" container.image=docker.io/library/nginx:latest>/tmp/log
+# wait 20 sec , and exit to worker pc
+```
+
+```sh
+mkdir -p /var/work/tests/artifacts/12/
+scp {work node }:/tmp/log /var/work/tests/artifacts/12/
+```
+
+## 13
+
+```sh
+kubectl config use-context cluster8-admin@cluster8
+```
+
+```sh
+k get no
+ssh {control-plane}
+```
+
+```sh
+# check admission_config.json
+cat /etc/kubernetes/pki/admission_config.json
+```
+
+```sh
+# check admission_kube_config.yaml
+cat /etc/kubernetes/pki/webhook/admission_kube_config.yaml
+```
+
+```yaml
+# vim /etc/kubernetes/pki/webhook/admission_kube_config.yaml
+apiVersion: v1
+kind: Config
+clusters:
+- cluster:
+ certificate-authority: /etc/kubernetes/pki/webhook/server.crt
+ server: https://image-bouncer-webhook:30020/image_policy # add
+ name: bouncer_webhook
+contexts:
+- context:
+ cluster: bouncer_webhook
+ user: api-server
+ name: bouncer_validator
+current-context: bouncer_validator
+preferences: {}
+users:
+- name: api-server
+ user:
+ client-certificate: /etc/kubernetes/pki/apiserver.crt
+ client-key: /etc/kubernetes/pki/apiserver.key
+```
+
+```yaml
+# vim /etc/kubernetes/manifests/kube-apiserver.yaml
+# add to api parametrs
+
+- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
+- --admission-control-config-file=/etc/kubernetes/pki/admission_config.json
+```
+
+```sh
+service kubelet restart
+
+# exit to work pc
+```
+
+```sh
+k run test-lasted --image nginx
+```
+
+```text
+Error from server (Forbidden): pods "test-lasted" is forbidden: image policy webhook backend denied one or more images: Images using latest tag are not allowed
+
+```
+
+```sh
+k run test-tag --image nginx:alpine3.17
+k get po test-tag
+```
+
+```text
+NAME READY STATUS RESTARTS AGE
+test-tag 1/1 Running 0 4m47s
+
+```
+
+## 14
+
+```Dockerfile
+# vim /var/work/14/Dockerfile
+
+FROM ubuntu:20.04
+RUN apt-get update
+RUN apt-get -y install curl
+RUN groupadd myuser
+RUN useradd -g myuser myuser
+USER myuser
+CMD ["sh", "-c", "while true ; do id ; sleep 1 ;done"]
+```
+
+```sh
+podman build . -t cks:14
+
+podman run -d --name cks-14 cks:14
+sleep 2
+podman logs cks-14 | grep myuser
+```
+
+```sh
+podman stop cks-14
+podman rm cks-14
+```
+
+## 15
+
+```sh
+kubectl config use-context cluster6-admin@cluster6
+```
+
+https://kubernetes.io/docs/tutorials/security/ns-level-pss/
+
+```sh
+k get ns team-red --show-labels
+
+kubectl label --overwrite ns team-red pod-security.kubernetes.io/enforce=baseline
+
+k get ns team-red --show-labels
+```
+
+```sh
+k get po -n team-red
+# delete all pods in ns team-red
+
+k delete po {pod_names} -n team-red --force
+```
+
+```sh
+k get po -n team-red
+
+# No resources found in team-red namespace.
+```
+
+```sh
+k events replicasets.apps -n team-red
+mkdir /var/work/tests/artifacts/15 -p
+k events replicasets.apps -n team-red > /var/work/tests/artifacts/15/logs
+```
+
+## 16
+
+```sh
+kubectl config use-context cluster1-admin@cluster1
+```
+
+https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/
+
+```sh
+openssl genrsa -out myuser.key 2048
+openssl req -new -key myuser.key -out myuser.csr
+```
+
+```yaml
+cat <<EOF > CSR.yaml
+apiVersion: certificates.k8s.io/v1
+kind: CertificateSigningRequest
+metadata:
+ name: john-developer # add
+spec:
+ request: $(cat myuser.csr | base64 | tr -d "\n")
+ signerName: kubernetes.io/kube-apiserver-client
+ usages:
+ - client auth
+ - digital signature
+ - key encipherment
+EOF
+```
+
+```sh
+k create ns development
+k apply -f CSR.yaml
+k get csr
+k certificate approve john-developer
+k create role developer --resource=pods --verb=create,list,get --namespace=development
+k create rolebinding developer-role-binding --role=developer --user=john --namespace=development
+k auth can-i update pods --as=john --namespace=development
+```
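+
+As a positive check (the command above only confirms that `update` is denied), you can also verify one of the verbs that was actually granted:
+
+```sh
+k auth can-i create pods --as=john --namespace=development
+# yes
+```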
+
+## 17
+
+```sh
+kubectl config use-context cluster9-admin@cluster9
+```
+
+```sh
+k get crd
+k get constraint
+k get constrainttemplates
+k edit constrainttemplates k8strustedimages
+```
+
+```yaml
+.......
+ - rego: |
+ package k8strustedimages
+
+ violation[{"msg": msg}] {
+ not images
+ msg := "not trusted image!"
+ }
+
+ images {
+ image := input.review.object.spec.containers[_].image
+ not startswith(image, "docker-fake.io/")
+ not startswith(image, "google-gcr-fake.com/")
+ not startswith(image, "very-bad-registry.com/") # add
+ }
+...........
+```
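+
+A quick way to verify the edit, assuming the corresponding constraint is already enforcing this template: try to run a pod from the newly blacklisted registry and expect a denial similar to the one below.
+
+```sh
+k run opa-test --image=very-bad-registry.com/nginx
+# Error from server (Forbidden): ... not trusted image!
+```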
+
+## 18
+
+```sh
+kubectl config use-context cluster10-admin@cluster10
+```
+
+https://kubernetes.io/docs/tutorials/security/seccomp/
+
+```sh
+k get no
+ssh {work node}
+```
+
+```sh
+sudo su
+
+mkdir /var/lib/kubelet/seccomp -p
+cp /var/work/profile-nginx.json /var/lib/kubelet/seccomp/
+
+# exit to work pc
+```
+
+```sh
+k run seccomp --image nginx -o yaml --dry-run=client > 18.yaml
+```
+
+```yaml
+# vim 18.yaml
+
+apiVersion: v1
+kind: Pod
+metadata:
+ creationTimestamp: null
+ labels:
+ run: seccomp
+ name: seccomp
+spec:
+ securityContext: # add
+ seccompProfile: # add
+ type: Localhost # add
+ localhostProfile: profile-nginx.json # add
+ containers:
+ - image: nginx
+ name: seccomp
+ resources: {}
+ dnsPolicy: ClusterFirst
+ restartPolicy: Always
+```
+
+```sh
+k apply -f 18.yaml
+k get po seccomp
+```
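+
+If the pod instead ends up in `CreateContainerError`, the profile path or name is usually wrong; the pod events will tell you:
+
+```sh
+k describe po seccomp | grep -A5 Events
+```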
+
diff --git a/docs/CKS/about.md b/docs/CKS/about.md
new file mode 100644
index 00000000..c012f8ab
--- /dev/null
+++ b/docs/CKS/about.md
@@ -0,0 +1,52 @@
+---
+id: about_cks
+title: About
+description: About CKS section
+slug: /CKS/about
+sidebar_position: 1
+custom_edit_url: null
+---
+
+This section contains labs and mock exams to help you prepare for the CKS certification.
+
+- The platform uses **aws** to create the following resources: **vpc**, **subnets**, **security groups**, **ec2** (spot/on-demand), **s3**
+- After you launch a scenario, the platform creates all the necessary resources and gives you access to the k8s clusters.
+- To create clusters, the platform uses **kubeadm**
+- You can easily add your own scenario using the existing terraform module
+- The platform supports the following versions:
+
+```text
+k8s version : [ 1.21 , 1.29 ] https://kubernetes.io/releases/
+Runtime :
+ docker [1.21 , 1.23]
+ cri-o [1.21 , 1.29]
+ containerd [1.21 , 1.30]
+ containerd_gvizor [1.21 , 1.30]
+OS for nodes :
+ ubuntu : 20.04 LTS , 22.04 LTS # cks default 20.04 LTS
+CNI : calico
+```
+
+Labs:
+
+- [01 - Kubectl contexts](./Labs/01.md)
+- [02 - Falco, SysDig](./Labs/02.md)
+- [03 - Access kube-api via nodePort](./Labs/03.md)
+- [04 - Pod Security Standard](./Labs/04.md)
+- [05 - CIS Benchmark](./Labs/05.md)
+- [08 - Open Policy Agent](./Labs/08.md)
+- [09 - AppArmor](./Labs/09.md)
+- [10 - Container Runtime Sandbox gVisor](./Labs/10.md)
+- [11 - Secrets in ETCD](./Labs/11.md)
+- [17 - Enable audit log](./Labs/17.md)
+- [19 - Fix Dockerfile](./Labs/19.md)
+- [20 - Update Kubernetes cluster](./Labs/20.md)
+- [21 - Image vulnerability scanning](./Labs/21.md)
+- [22 - Network policy](./Labs/22.md)
+- [23 - Set TLS version and allowed ciphers for etcd, kube-api](./Labs/23.md)
+- [24 - Encrypt secrets in ETCD](./Labs/24.md)
+- [25 - Image policy webhook](./Labs/25.md)
+
+Exams:
+
+- [01](./Mock%20exams/01.md)
diff --git a/docs/LFCS/Mock exams/01.md b/docs/LFCS/Mock exams/01.md
new file mode 100644
index 00000000..cf7c0d77
--- /dev/null
+++ b/docs/LFCS/Mock exams/01.md
@@ -0,0 +1,262 @@
+# 01 - Tasks
+
+## Allowed resources
+
+**Linux Foundation Certified System Administrator (LFCS):**
+
+- Man pages
+- Documents installed by the distribution (i.e. /usr/share and its subdirectories)
+- Packages that are part of the distribution (may also be installed by Candidate if not available by default)
+- If you decide to install packages (not required to complete tasks) to your exam environment, you will want to be familiar with standard package managers (apt, dpkg, dnf, and yum).
+
+## Questions
+
+### 01
+
+| **1** | **Create hard and soft links to the file `file1`** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | - Create a hard link from file `file1` in your home directory to `/opt/file1` - Create a soft link from `file1` in your home directory to `/opt/softlinkfile`. - The soft link should point to the absolute path |
+| Acceptance criteria | - Hard and soft links are created? |
+
+---
+
+### 02
+
+| **2** | **Perform the following actions on the file `file2` in the home directory** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | - Change owner of this file to uid `750` and gid `750` - Apply the following permissions to this file: - Group members should be able to write and read - Others only should be able to read. - Enable the SUID (set user id) special permission flag on `file2`. |
+| Acceptance criteria | - File owner changed? - File permissions and SUID set? |
+
+---
+
+### 03
+
+| **3** | **Perform the following actions on the files `file31`,`file32`,`file33`** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | - Create directory `/opt/newdir` - Move `file31` to this directory - Copy `file32` to `/opt/newdir` directory - Remove `file33` |
+| Acceptance criteria | - Created directory? - Moved `file31`? - Copied `file32`? - Removed `file33` file? |
+
+---
+
+### 04
+
+| **4** | **Enable the sticky bit permissions on the directory** |
+| :-----------------: | :--------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | - Enable the sticky bit special permission on the following directory: `/opt/stickydir/` |
+| Acceptance criteria | - "sticky bit" is set on `/opt/stickydir` directory? |
+
+---
+
+### 05
+
+| **5** | **Filtering out specific files in the folder** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Task | In the `/opt/05/task` directory, you will find `500` files. - Filter out files that have the executable permission for the user. Save the output to `/opt/05/result/execuser`. - Find all files that have the SETUID permission enabled and copy them to the folder `/opt/05/result/setuid`. - Find any file that is larger than 1KB and copy it to the `/opt/05/result/05kb` directory. |
+| Acceptance criteria | - Filtered out files with executable permission for the user? - Copied all files with SETUID permission? - Copied files larger than 1KB? |
+
+---
+
+### 06
+
+| **6** | **Find special file in the directory** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | - In `/opt/06/task` there is a tree-based hierarchy with a bunch of files. Some of them contain the word `findme`. Copy these files to the `/opt/06/result` folder |
+| Acceptance criteria | - Files containing the special word were copied to the specified folder? |
+
+---
+
+### 07
+
+| **7** | **Work with config file** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | - Add a new line to the end of the file `/etc/config.conf` with the following content: `system71=enabled` - Write a simple bash script that filters out all `enabled` parameters. Make this script executable and place it at `/opt/07/filter.sh` - Enable all `disabled` parameters by changing them to `enabled`. Be careful when applying the last subtask; you can make a backup of the file before applying changes to it. |
+| Acceptance criteria | - Added a new line at the end of the file? - Wrote a simple script to filter out enabled parameters? - Updated all disabled parameters to be enabled? |
+
+---
+
+### 08
+
+| **8** | **Work with archives** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Task | Create the following archives from the files in the `/opt/08/files/` directory: - Create a simple *TAR* archive from the files inside the folder. Store this archive in `/opt/08/results/mytar.tar` - Compress the entire `/opt/08/files/` directory into a *GZIP* archive. Save it at `/opt/08/results/mytargz.tar.gz` - Compress the entire `/opt/08/files/` directory into a *BZ* archive. Save it at `/opt/08/results/mybz.tar.bz2` - Compress the entire `/opt/08/files/` directory into a *ZIP* archive. Save it at `/opt/08/results/myzip.zip` |
+| Acceptance criteria | - `tar` archive is created? - `gzip` archive is created? - `bz` archive is created? - `zip` archive is created? |
+
+---
+
+### 09
+
+| **9** | **Extracting content** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | There are two archives in the `/opt/09/task` folder: - Extract the content of `backup.tar.gz` to `/opt/09/solution/tarbackup` - Extract the content of `backup.zip` to `/opt/09/solution/zipbackup` |
+| Acceptance criteria | `backup.tar.gz` is extracted? `backup.zip` is extracted? |
+
+---
+
+### 10
+
+| **10** | **Installing the service** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | - Install the nginx service using the package manager - Make this service start automatically after rebooting - Run the service |
+| Acceptance criteria | - nginx is installed? - nginx is enabled? - nginx is running? |
+
+---
+
+### 11
+
+| **11** | **Adding a new user** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | Add a new admin user with the following requirements: - with the name `cooluser` - with a password `superstrongpassword` - Set the default shell for this user to `/bin/zsh` - as an admin user, `cooluser` should be able to run commands with sudo |
+| Acceptance criteria | - user `cooluser` with password is created? - default shell for this user is `zsh`? - This user is able to perform sudo? |
+
+---
+
+### 12
+
+| **12** | **Locking and unlocking users** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | There are two users in the system, `spiderman` and `batman`. In this task you need to lock/unlock the passwords for these users: - `spiderman` cannot log in to the system because his password was locked; we need to unlock this user - `batman` is unlocked, so we need to lock him |
+| Acceptance criteria | - user `spiderman` is unlocked? - user `batman` is locked? |
+
+---
+
+### 13
+
+| **13** | **Set a limit for the users** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | There is a user `phoenix` in the system. Set a limit for this user so that it can run no more than `20` processes. This should be a hard limit. |
+| Acceptance criteria | - hard limit is set for user `phoenix` processes? |
+
+---
+
+### 14
+
+| **14** | **Set up a skeleton directory for new users** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | Edit the so-called skeleton directory so that whenever a new user is created on this system, a file called `IMPORTANT_NOTES` is copied to his/her home directory. |
+| Acceptance criteria | - Make sure a file called `IMPORTANT_NOTES` is copied to the new user's home directory |
+
+---
+
+### 15
+
+| **15** | **Revoke sudo privileges** |
+| :-----------------: | :------------------------------------------------------------------------------------------- |
+| Task weight | ?% |
+| Task | There is a user `jackson` in the system. This user should not have sudo permissions anymore. |
+| Acceptance criteria | - Make sure that a user `jackson` is not able to perform commands with sudo |
+
+---
+
+### 16
+
+| **16** | **Redirect filtering output** |
+| :-----------------: | :-------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 1% |
+| Task | Display all the lines in the `/etc/services` file that start out with the text `net`. Redirect the output to `/opt/16/result.txt` |
+| Acceptance criteria | - Filtered output redirected to the file |
+
+---
+
+### 17
+
+| **17** | **Check the difference between files and folders** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | - There are 2 files, `/opt/17/file1` and `/opt/17/file2`. The files are almost the same, but one line exists in one file and not in the other. Find that line and save the difference to `/opt/17/results/text_difference`. - `/opt/17/dir1/` and `/opt/17/dir2/` contain almost identical files. Find out which files exist only in `/opt/17/dir2/` but not in `/opt/17/dir1/` and save the output to the `/opt/17/results/folder_difference` file. |
+| Acceptance criteria | - The difference between 2 files was found? - The difference between 2 folders was found? |
+
+---
+
+### 18
+
+| **18** | **Perform docker operations** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------ |
+| Task weight | 1% |
+| Task | - Run the docker container `ubuntu/apache2` with the name `webserv`. - Remove all docker images except `apache2` |
+| Acceptance criteria | - Container is running? - Removed all images except `apache2`? |
+
+---
+
+### 19
+
+| **19** | **Analyze networking information** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 2% |
+| Task | - Check the IP address of the `ens5` network interface and save it to the `/opt/19/result/ip` file. - Print out the route table and save the output to the `/opt/19/result/routes` file. - Check the PID of the service that uses port 22 and save the pid to the `/opt/19/result/pid` file |
+| Acceptance criteria | - IP address was saved to the file? - Route table was written to the file? - PID of the service was saved to the file? |
+
+---
+
+### 20
+
+| **20** | **Networking settings** |
+| :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| Task weight | 3% |
+| Task | SSH to node02 and perform the following actions: - Add an extra DNS resolver (nameserver) on this system: `1.1.1.1` - Add a static DNS resolution so that the host `database.local` resolves to `10.10.20.5`; the DNS resolver should respond with this IP for the `database.local` hostname - Configure the route table of this host to route all traffic through the node01 host. |
+| Acceptance criteria | - DNS resolver was configured? - Static host entry for `database.local` was added? - Static route was configured properly? |
+
+---
+
+### 21
+
+| **21** | **Create a bash script** |
+| :-----------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | Write a bash script and put it at `/opt/21/result/script.sh`. The script should: - Recursively copy the `/opt/21/task/` directory into the `/opt/21/task-backup/` directory. - Create an empty file called `empty_file` at this location: `/opt/21/result/` Then make this script run automatically every day at 2AM. |
+| Acceptance criteria | - Script was created, made executable and placed as required? - Tested the script? - Made sure that this script was added to cron? |
+
+---
+
+### 22
+
+| **22** | **Work with advanced file permissions and attributes** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 2% |
+| Task | - In the folder `/opt/22/tasks` you will find a file `aclfile`. Currently this file can only be read by `user0`. Add a new ACL permission so that `user22` can also read this. `user22` should have only read permissions. - Next, in the `/opt/22/tasks` directory you will find a file named `frozenfile`. This currently has the immutable attribute set on it. Remove the immutable attribute from this file. |
+| Acceptance criteria | - ACL permissions are set? - `frozenfile` file is no longer immutable? |
+
+---
+
+### 23
+
+| **23** | **Send signal to a process** |
+| :-----------------: | :----------------------------------------------- |
+| Task weight | 1% |
+| Task | - Send the SIGHUP signal to the `redis` process. |
+| Acceptance criteria | - SIGHUP sent to the `redis` service? |
+
+---
+
+### 24
+
+| **24** | **Perform disk operations** |
+| :-----------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 3% |
+| Task | You will find a disk `/dev/nvme2n1` in the system. We need to perform the following actions: - This disk has unpartitioned space. Create two partitions, each exactly 1GB in size. - Mount the 1st partition (`nvme2n1p1`) to the `/drive` folder. It should stay mounted even after rebooting the system - Format the 2nd partition (`nvme2n1p2`) with the `xfs` file system |
+| Acceptance criteria | - Verify the created partitions? - Verify that the required partition was mounted? - Partition is mounted automatically even after rebooting the instance? |
+
+---
+
+### 25
+
+| **25** | **Perform LVM operations** |
+| :-----------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Task weight | 4% |
+| Task | - Add these two physical volumes to lvm: `/dev/nvme1n1` and `/dev/nvme3n1` - Create a volume group called `volgroup1` out of these two physical volumes, `/dev/nvme1n1` and `/dev/nvme3n1` - Create a logical volume of 1GB on the volume group `volgroup1`. The name of this logical volume should be `logvolume1`. |
+| Acceptance criteria | - Verify the LVM - Volume Group (VG) named `volgroup1` has been created? - `logvolume1` LV has been created? |
+
+---
diff --git a/docs/LFCS/Mock exams/Solutions/01.md b/docs/LFCS/Mock exams/Solutions/01.md
new file mode 100644
index 00000000..4986020e
--- /dev/null
+++ b/docs/LFCS/Mock exams/Solutions/01.md
@@ -0,0 +1,431 @@
+# 01
+
+Solutions for LFCS Mock exam #01
+
+## 01
+
+```bash
+ln ~/file1 /opt/file1
+ln -s ~/file1 /opt/softlinkfile
+```
+
+## 02
+
+```bash
+sudo chown 750:750 /home/ubuntu/file2
+
+# Option 1
+sudo chmod g+wr,o+r /home/ubuntu/file2
+# Option 2
+sudo chmod 664 /home/ubuntu/file2
+
+sudo chmod u+s /home/ubuntu/file2
+```
+
+## 03
+
+```bash
+mkdir /opt/newdir
+mv /home/ubuntu/file31 /opt/newdir/
+cp /home/ubuntu/file32 /opt/newdir/
+rm /home/ubuntu/file33
+```
+
+## 04
+
+```bash
+chmod +t /opt/stickydir/
+# OR
+chmod 1777 /opt/stickydir/
+```
+
+## 05
+
+```sh
+find "/opt/05/task" -type f -perm -u=x > /opt/05/result/execuser;
+find "/opt/05/task" -type f -perm -4000 -exec cp {} /opt/05/result/setuid/ \;
+find "/opt/05/task" -type f -size +1k -exec cp {} "/opt/05/result/05kb" \;
+```
+
+## 06
+
+```sh
+find /opt/06/task -type f -exec grep -q 'findme' {} \; -exec cp {} /opt/06/result \;
+```
+
+## 07
+
+```bash
+# Append to the end of the file
+echo "system71=enabled" >> /etc/config.conf
+
+# Write a script to filter out enable parameters
+cat <<'EOF' > /opt/07/filter.sh
+#! /bin/bash
+
+grep "enabled" /etc/config.conf
+EOF
+
+chmod +x /opt/07/filter.sh
+
+# Make a backup
+sudo cp /etc/config.conf /etc/config.conf.back
+
+# Replace all disabled parameters (to enabled) with enabled using sed.
+sudo sed -i 's/disabled/enabled/g' /etc/config.conf
+```
+
+## 08
+
+```bash
+# tar
+tar -cf /opt/08/results/mytar.tar -C /opt/08/files/ .
+
+# gzip
+tar -czf /opt/08/results/mytargz.tar.gz -C /opt/08/files/ .
+
+# bz2
+tar -cjf /opt/08/results/mybz.tar.bz2 -C /opt/08/files/ .
+
+# zip
+cd /opt/08/files && zip -r /opt/08/results/myzip.zip * && cd -
+```
+
+## 09
+
+```bash
+# untar
+tar -xzf /opt/09/task/backup.tar.gz -C /opt/09/solution/tarbackup
+# unzip
+unzip -o /opt/09/task/backup.zip -d /opt/09/solution/zipbackup/
+```
+
+## 10
+
+```bash
+sudo apt install -y nginx
+
+sudo systemctl enable --now nginx
+#or
+sudo systemctl start nginx
+sudo systemctl enable nginx
+```
+
+## 11
+
+```bash
+#Option 1
+sudo useradd cooluser --shell /bin/zsh
+sudo passwd cooluser
+sudo usermod -aG sudo cooluser
+
+#Option 2
+sudo useradd cooluser -p $(echo "superstrongpassword" | openssl passwd -1 -stdin) --shell /bin/zsh -G sudo
+
+#Option 3
+sudo adduser --shell /bin/zsh cooluser
+# Adding user `cooluser' ...
+# Adding new group `cooluser' (1000) ...
+# Adding new user `cooluser' (1000) with group `cooluser' ...
+# Creating home directory `/home/cooluser' ...
+# Copying files from `/etc/skel' ...
+# New password:
+# Retype new password:
+# passwd: password updated successfully
+# Changing the user information for cooluser
+# Enter the new value, or press ENTER for the default
+# Full Name []:
+# Room Number []:
+# Work Phone []:
+# Home Phone []:
+# Other []:
+# Is the information correct? [Y/n]
+sudo usermod -aG sudo cooluser
+```
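+
+To verify the result (the names below come from the task), check the default shell in `/etc/passwd` and the sudo group membership:
+
+```sh
+getent passwd cooluser   # should end with /bin/zsh
+groups cooluser          # should include sudo
+```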
+
+## 12
+
+```bash
+# To unlock spiderman
+sudo usermod -U spiderman
+# OR
+sudo passwd -u spiderman
+
+# To lock batman
+sudo usermod -L batman
+# OR
+sudo passwd -l batman
+```
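+
+`passwd -S` prints the password status, so you can confirm both changes (`P` = usable password, `L` = locked):
+
+```sh
+sudo passwd -S spiderman   # expect P
+sudo passwd -S batman      # expect L
+```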
+
+## 13
+
+```bash
+# Open /etc/security/limits.conf and add the following line
+phoenix hard nproc 20
+
+# or do that with echo
+sudo bash -c 'echo "phoenix hard nproc 20" >> /etc/security/limits.conf'
+```
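+
+A quick check, assuming `phoenix` has a POSIX shell: open a login shell as the user and print the hard limit for processes.
+
+```sh
+sudo su - phoenix -c 'ulimit -Hu'
+# 20
+```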
+
+## 14
+
+```bash
+sudo touch /etc/skel/IMPORTANT_NOTES
+```
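+
+To verify, you can create a throwaway user (the name `skeltest` is just an example) and check its home directory:
+
+```sh
+sudo useradd -m skeltest
+sudo ls /home/skeltest/   # should list IMPORTANT_NOTES
+sudo userdel -r skeltest
+```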
+
+## 15
+
+```sh
+sudo deluser jackson sudo
+```
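+
+To confirm the privileges are gone:
+
+```sh
+groups jackson       # sudo should no longer be listed
+sudo -l -U jackson   # should report no sudo entries
+```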
+
+## 16
+
+```bash
+mkdir /opt/16
+grep -E "^net.*" /etc/services > /opt/16/result.txt
+```
+
+## 17
+
+```bash
+mkdir -p /opt/17/results
+diff /opt/17/file1 /opt/17/file2 > /opt/17/results/text_difference
+diff -rq /opt/17/dir1/ /opt/17/dir2/ > /opt/17/results/folder_difference
+```
+
+## 18
+
+```bash
+docker run --name webserv -d ubuntu/apache2
+
+docker image prune -a
+```
+
+## 19
+
+```bash
+ip addr show ens5 # and find IP addr here
+echo "X.X.X.X" > /opt/19/result/ip
+
+ip route > /opt/19/result/routes
+
+sudo netstat -tulpn | grep 22 # find the pid and put it into /opt/19/result/pid
+# or using
+sudo ss -tulpn | grep 22
+#or using lsof
+sudo lsof -i :22 -t | head -n1 > /opt/19/result/pid
+```
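+
+If you want to save the IP without copying it by hand, a non-interactive variant (assuming a single IPv4 address on `ens5`):
+
+```sh
+ip -br addr show ens5 | awk '{print $3}' | cut -d/ -f1 > /opt/19/result/ip
+```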
+
+## 20
+
+```bash
+ssh node02
+
+# Edit /etc/resolv.conf and add "nameserver 1.1.1.1"
+sudo bash -c 'echo "nameserver 1.1.1.1" >> /etc/resolv.conf'
+
+#Add
+sudo bash -c "echo '10.10.20.5 database.local' >> /etc/hosts"
+
+# You can use nslookup to check that database.local resolves properly
+$ nslookup database.local
+Server: 127.0.0.53
+Address: 127.0.0.53#53
+
+Non-authoritative answer:
+Name: database.local
+Address: 10.10.20.5
+
+# You can use dig to confirm that the task was done correctly.
+$ dig +short database.local
+10.10.20.5
+
+# Check the IP address of node01. Then, exit the SSH session, check the IP address of node01 again, and SSH back into node02.
+exit
+ip addr
+ssh node02
+# Or, without exiting, you can run w and grab the IP here if you're connected from node01.
+w
+
+sudo ip route add default via $node01_ip_address
+```
+
+To check this task, you can run traceroute or tracepath, for example, to google.com.
+
+- *Without the applied route, it looks like this:*
+
+```bash
+$ traceroute google.com
+traceroute to google.com (142.250.74.46), 30 hops max, 60 byte packets
+ 1 244.5.0.111 (244.5.0.111) 7.094 ms * *
+ 2 240.0.20.14 (240.0.20.14) 0.215 ms 0.265 ms 240.0.20.13 (240.0.20.13) 0.197 ms
+ 3 240.0.20.19 (240.0.20.19) 0.185 ms 240.0.20.27 (240.0.20.27) 0.174 ms 240.0.20.16 (240.0.20.16) 0.223 ms
+ 4 242.0.132.113 (242.0.132.113) 1.107 ms 242.0.133.113 (242.0.133.113) 1.105 ms 1.086 ms
+...
+```
+
+- *After applying the rule, pay attention to the first two records (these are the IP addresses of node01).*
+
+```bash
+$ traceroute google.com
+traceroute to google.com (142.250.74.46), 30 hops max, 60 byte packets
+ 1 ip-10-2-26-184.eu-north-1.compute.internal (10.2.26.184) 0.136 ms 0.124 ms 0.116 ms
+ 2 ec2-13-53-0-197.eu-north-1.compute.amazonaws.com (13.53.0.197) 4.337 ms * *
+ 3 240.0.20.13 (240.0.20.13) 0.300 ms 240.0.20.14 (240.0.20.14) 0.363 ms 240.0.20.12 (240.0.20.12) 0.351 ms
+ 4 240.0.20.29 (240.0.20.29) 0.344 ms 240.0.20.19 (240.0.20.19) 0.336 ms 240.0.20.21 (240.0.20.21) 0.262 ms
+ ...
+```
+
+## 21
+
+```bash
+cat <<'EOF' > /opt/21/result/script.sh
+#!/bin/bash
+
+mkdir -p /opt/21/task-backup
+cp -r /opt/21/task/* /opt/21/task-backup/
+touch /opt/21/result/empty_file
+EOF
+
+chmod +x /opt/21/result/script.sh
+
+# Edit crontab
+crontab -e
+
+# and add this line
+0 2 * * * /opt/21/result/script.sh
+
+# OR
+
+sudo bash -c 'echo "0 2 * * * /opt/21/result/script.sh" > /etc/cron.d/21-script'
+```
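+
+To test the script once and confirm the cron entry:
+
+```sh
+/opt/21/result/script.sh
+ls /opt/21/result/empty_file /opt/21/task-backup/
+crontab -l | grep script.sh
+```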
+
+## 22
+
+```bash
+# Set ACL permissions for aclfile
+setfacl -m u:user22:r /opt/22/tasks/aclfile
+
+# Check the ACL permissions
+getfacl /opt/22/tasks/aclfile
+
+# Remove the immutable attribute from frozenfile
+sudo chattr -i /opt/22/tasks/frozenfile
+
+# Check the attributes of frozenfile
+lsattr /opt/22/tasks/frozenfile
+```
+
+## 23
+
+```bash
+sudo kill -HUP $(pidof redis-server)
+# OR
+ps aux | grep redis-server
+# take pid of it and send HUP signal
+sudo kill -HUP $REDIS_PID
+```
+
+## 24
+
+1. Run fdisk utility
+
+```bash
+# to create partitions run fdisk for disk /dev/nvme2n1
+sudo fdisk /dev/nvme2n1
+```
+
+Enter the required values:
+
+```sh
+Changes will remain in memory only, until you decide to write them.
+Be careful before using the write command.
+
+Device does not contain a recognized partition table.
+Created a new DOS disklabel with disk identifier 0x034ec39a.
+
+Command (m for help): n <- PUT n here to create new partition
+Partition type
+ p primary (0 primary, 0 extended, 4 free)
+ e extended (container for logical partitions)
+Select (default p): p <- select p for primary partition
+Partition number (1-4, default 1): <- Keep default
+First sector (2048-4194303, default 2048): <--Keep default
+Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4194303, default 4194303): +1G <-- Set 1G here
+
+Created a new partition 1 of type 'Linux' and of size 1 GiB.
+
+Command (m for help): n
+Partition type
+ p primary (1 primary, 0 extended, 3 free)
+ e extended (container for logical partitions)
+Select (default p): p <-- Set p
+
+Using default response p.
+Partition number (2-4, default 2): <-- Keep default
+First sector (2099200-6291455, default 2099200): <-- Keep default
+Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-6291455, default 6291455): +1G
+
+Created a new partition 2 of type 'Linux' and of size 1 GiB.
+
+Command (m for help): w <-- write table to disk and exit
+```
+
+To check that the partitions were successfully created, run (pay attention to `nvme2n1p1` and `nvme2n1p2`):
+
+```bash
+$ lsblk /dev/nvme2n1
+NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+nvme2n1 259:1 0 3G 0 disk
+├─nvme2n1p1 259:6 0 1G 0 part
+└─nvme2n1p2 259:7 0 1G 0 part
+```
+
+2. Mount the Partition
+
+To mount the newly created partition:
+```bash
+sudo mkdir /drive
+
+sudo bash -c "echo '/dev/nvme2n1p1 /drive ext4 defaults 0 0' >> /etc/fstab"
+sudo mkfs.ext4 /dev/nvme2n1p1
+
+sudo mount -a
+```
+
+To verify that the disk was properly mounted, run:
+
+```bash
+lsblk /dev/nvme2n1p1 --output FSTYPE,MOUNTPOINT
+#or
+findmnt -n /dev/nvme2n1p1
+```
+
+3. Create XFS File System on the Second Partition
+
+To create an XFS file system on the second partition:
+
+```bash
+sudo mkfs.xfs /dev/nvme2n1p2
+```
+
+To verify that the file system was properly created, run:
+
+```bash
+lsblk /dev/nvme2n1p2 --output FSTYPE
+# you should see xfs here
+```
+
+## 25
+
+```bash
+# Initialize the physical volumes
+sudo pvcreate /dev/nvme1n1 /dev/nvme3n1
+
+# Create a volume group
+sudo vgcreate volgroup1 /dev/nvme1n1 /dev/nvme3n1
+
+# Create a logical volume
+sudo lvcreate -L 1G -n logvolume1 volgroup1
+```
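+
+To verify the LVM objects:
+
+```sh
+sudo pvs             # both disks listed
+sudo vgs volgroup1   # volume group exists
+sudo lvs volgroup1   # logvolume1, 1.00g
+```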
diff --git a/docs/LFCS/about.md b/docs/LFCS/about.md
new file mode 100644
index 00000000..7703d034
--- /dev/null
+++ b/docs/LFCS/about.md
@@ -0,0 +1,9 @@
+This section contains labs and mock exams to help you prepare for the LFCS certification.
+
+- The platform uses **aws** to create the following resources: **vpc**, **subnets**, **security groups**, **ec2** (spot/on-demand), **s3**
+- After you launch a scenario, the platform creates all the necessary resources and gives you access to an ec2 instance from which you can perform the tasks
+- You can easily add your own scenario using the existing terraform module
+
+Exams:
+
+- [01](./Mock%20exams/01.md)
diff --git a/docs/configuration.md b/docs/configuration.md
new file mode 100644
index 00000000..a545fb02
--- /dev/null
+++ b/docs/configuration.md
@@ -0,0 +1,218 @@
+## Run platform via docker
+
+![run via docker](./images/run_via_docker.gif)
+
+- Change **backend_bucket** (**region**, **backend_region** optional) in [terraform/environments/terragrunt.hcl](https://github.com/ViktorUJ/cks/blob/master/terraform/environments/terragrunt.hcl#L4).
+
+## Command
+
+Every command should be run from the project's root directory.
+
+### CMDB
+
+- ``make cmdb_get_env_all`` - get a list of all resources in CMDB
+- ``USER_ID='myuser' ENV_ID='01' make cmdb_get_user_env_data`` - show all created resources of user **myuser** in environment **01**
+- ``USER_ID='myuser' ENV_ID='01' make cmdb_get_user_env_lock`` - show all lock resources of user **myuser** in environment **01**
+- ``USER_ID='myuser' make cmdb_get_user_env_lock`` - show all lock resources of user **myuser** in **all** environments
+- ``USER_ID='myuser' make cmdb_get_user_env_data`` - show all data resources of user **myuser** in **all** environments
+- ``CMDB_ITEM='CMDB_data_myuser_02_k8s_cluster1' make cmdb_get_item`` - getting detailed information about **CMDB_data_myuser_02_k8s_cluster1** resource.
+
+### CKA
+
+#### CKA task
+
+- ``TASK=01 make run_cka_task`` - create cka lab number 01
+- ``TASK=01 make delete_cka_task`` - delete cka hands-on labs
+- ``TASK=01 make run_cka_task_clean`` - run cka_task with clean terragrunt cache for cka_task
+- ``make output_cka_task`` - show **outputs** from **cka_task**
+
+#### CKA mock
+
+- ``TASK=01 make run_cka_mock`` - create mock CKA exam number 1
+- ``make delete_cka_mock`` - delete mock CKA exam
+- ``TASK=01 make run_cka_mock_clean`` - create mock CKA exam with clean terragrunt cache
+- ``make output_cka_mock`` - show **outputs** from **cka_mock**
+
+### CKAD
+
+#### CKAD mock
+
+- ``TASK=01 make run_ckad_mock`` - create mock CKAD exam number 01
+- ``make delete_ckad_mock`` - delete mock CKAD exam
+- ``TASK=01 make run_ckad_mock_clean`` - create mock CKAD exam number 01 with clean terragrunt cache
+- ``make output_ckad_mock`` - show **outputs** from **ckad_mock**
+
+### CKS
+
+#### CKS task
+
+- ``TASK=10 make run_cks_task`` - create cks lab number 10
+- ``TASK=10 make delete_cks_task`` - delete cks hands-on labs
+- ``TASK=10 make run_cks_task_clean`` - run cks_task with clean terragrunt cache for cks_task
+- ``make output_cks_task`` - show **outputs** from **cks_task**
+
+#### CKS mock
+
+- ``TASK=01 make run_cks_mock`` - create mock CKS exam number 01
+- ``make delete_cks_mock`` - delete mock CKS exam
+- ``TASK=01 make run_cks_mock_clean`` - create mock CKS exam number 01 with clean terragrunt cache
+- ``make output_cks_mock`` - show **outputs** from **cks_mock**
+
+### LFCS
+
+#### lfcs mock
+
+- ``TASK=01 make run_lfcs_mock`` - create mock LFCS exam number 01
+- ``make delete_lfcs_mock`` - delete mock LFCS exam
+- ``TASK=01 make run_lfcs_mock_clean`` - create mock LFCS exam number 01 with clean terragrunt cache
+- ``make output_lfcs_mock`` - show **outputs** from **lfcs_mock**
+
+### HR
+
+- ``TASK=01 make run_hr_mock`` - create mock hr exam number 01
+- ``make delete_hr_mock`` - delete mock hr exam
+- ``TASK=01 make run_hr_mock_clean`` - create mock HR exam number 01 with clean terragrunt cache
+- ``make output_hr_mock`` - show **outputs** from **hr_mock**
+
+### EKS
+
+- ``TASK={lab_number} make run_eks_task`` - create hands-on lab
+- ``make delete_eks_task`` - delete eks lab cluster
+
+### DEV
+
+- ``make lint`` run linter on the project
+
+## Usage scenarios
+
+### CKA hands-on lab
+
+- choose [a hands-on lab](./CKA/Labs/01.md) number
+- create cka lab cluster ``TASK={lab_number} make run_cka_task``
+- find `{master_external_ip}` in terraform output
+- log in to master node via ssh ``ssh ubuntu@{master_external_ip} -i {key}``
+- check init logs `` tail -f /var/log/cloud-init-output.log ``
+- read lab descriptions in ``{lab_number}/README.MD``
+- check solution in ``{lab_number}/SOLUTION.MD``
+- delete cka lab cluster ``make delete_cka_task``
+- clean cka lab cluster ``.terraform`` folder ``make clean_cka_task``
+
+### Mock CKA exam
+
+[Video instruction for launching **CKA mock exam**](https://www.youtube.com/watch?v=P-YYX4CTWIg)
+
+- choose [a mock exam](./CKA/Mock%20exams/01.md) number
+- change instance type from ``spot`` to ``ondemand`` in ``{mock_number}/env.hcl`` if you need
+- create mock CKA exam ``TASK={mock_number} make run_cka_mock``
+- find ``worker_pc_ip`` in ``terraform output``
+- connect to ``worker_pc_ip`` with your ssh key and user ``ubuntu``
+- open questions list ``{mock_number}/README.MD`` and do tasks
+- use ``ssh {kubernetes_nodename}`` from work pc to connect to node
+- run ``time_left`` on work pc to check time
+- run ``check_result`` on work pc to check result
+- delete mock CKA exam `make delete_cka_mock`
+- find exam solutions in ``{mock_number}/worker/files/solutions`` and [Video](https://youtu.be/IZsqAPpbBxM) for [mock 01](./CKA/Mock%20exams/01.md)
+- find exam tests in ``{mock_number}/worker/files/tests.bats``
+
+### CKS hands-on lab
+
+- choose [CKS lab](./CKS/Labs/01.md) number
+- change **ami_id** in ``{lab_number}/scripts/terragrunt.hcl`` if you changed **region**
+- create cks lab cluster ``TASK={lab_number} make run_cks_task``
+- find `{master_external_ip}` in terraform output
+- log in to master node via ssh ``ssh ubuntu@{master_external_ip} -i {key}``
+- check init logs `` tail -f /var/log/cloud-init-output.log ``
+- read lab descriptions in ``{lab_number}/README.MD``
+- check solution in ``{lab_number}/SOLUTION.MD``
+- delete cks lab cluster ``make delete_cks_task``
+- clean cks lab cluster ``.terraform`` folder ``make clean_cks_task``
+
+### Mock CKS exam
+
+[Video instruction for launching **CKS mock exam**](https://youtu.be/_GbsBOMaJ9Q)
+
+- choose [a mock exam](./CKS/Mock%20exams/01.md) number
+- change **ubuntu_version** in ``{mock_number}/env.hcl`` if you need
+- change instance type from ``spot`` to ``ondemand`` in ``{mock_number}/env.hcl`` if you need
+- create mock CKS exam ``TASK={mock_number} make run_cks_mock`` or ``TASK={mock_number} make run_cks_mock_clean`` if you'd like to run with **clean** terragrunt cache
+- find ``worker_pc_ip`` in ``terraform output``
+- connect to ``worker_pc_ip`` with your ssh key and user ``ubuntu``
+- open questions list ``{mock_number}/README.MD`` and do tasks
+- use ``ssh {kubernetes_nodename}`` from work pc to connect to node
+- run ``time_left`` on work pc to check time
+- run ``check_result`` on work pc to check result
+- delete mock CKS exam `make delete_cks_mock`
+- find exam solutions in ``{mock_number}/worker/files/solutions`` [Mock 1 solutions](./CKS/Mock%20exams/Solutions/01.md) and [video](https://youtu.be/I8CPwcGbrG8)
+- find exam tests in ``{mock_number}/worker/files/tests.bats``
+
+### Mock CKAD exam
+
+[Video instruction for launching **CKAD mock exam**](https://youtu.be/7X4Y9QhbTsk)
+
+- choose [a mock exam](./CKAD/Mock%20exams/01.md) number
+- change **ubuntu_version** in ``{mock_number}/env.hcl`` if you need
+- change instance type from ``spot`` to ``ondemand`` in ``{mock_number}/env.hcl`` if you need
+- create mock CKAD exam ``TASK={mock_number} make run_ckad_mock`` or ``TASK={mock_number} make run_ckad_mock_clean`` if you'd like to run with **clean** terragrunt cache
+- find ``worker_pc_ip`` in ``terraform output``
+- connect to ``worker_pc_ip`` with your ssh key and user ``ubuntu``
+- open questions list ``{mock_number}/README.MD`` and do tasks
+- use ``ssh {kubernetes_nodename}`` from work pc to connect to node
+- run ``time_left`` on work pc to check time
+- run ``check_result`` on work pc to check result
+- delete mock CKAD exam `make delete_ckad_mock`
+- find exam solutions in ``{mock_number}/worker/files/solutions`` mock 1 solutions and [video](https://youtu.be/yQK7Ca8d-yw)
+- find exam tests in ``{mock_number}/worker/files/tests.bats``
+
+### Mock HR exam
+
+[Video instruction for launching **HR mock exam**](https://youtu.be/4CTC1jl8lxE)
+
+- choose mock number [tasks/hr/mock](https://github.com/ViktorUJ/cks/tree/master/tasks/hr/mock/01)
+- change **ubuntu_version** in ``{mock_number}/env.hcl`` if you need
+- change instance type from ``spot`` to ``ondemand`` in ``{mock_number}/env.hcl`` if you need
+- create mock HR exam ``TASK={mock_number} make run_hr_mock`` or ``TASK={mock_number} make run_hr_mock_clean`` if you'd like to run with **clean** terragrunt cache
+- find ``worker_pc_ip`` in ``terraform output``
+- connect to ``worker_pc_ip`` with your ssh key and user ``ubuntu``
+- open questions list ``{mock_number}/README.MD`` and do tasks
+- use ``ssh {kubernetes_nodename}`` from work pc to connect to node
+- run ``time_left`` on work pc to check time
+- run ``check_result`` on work pc to check result
+- delete mock HR exam `make delete_hr_mock`
+- find exam solutions in ``{mock_number}/worker/files/solutions`` and [video](https://youtu.be/4CTC1jl8lxE)
+- find exam tests in ``{mock_number}/worker/files/tests.bats``
+
+### EKS hands-on lab
+
+- choose [labs](https://github.com/ViktorUJ/cks/tree/master/tasks/eks/labs/) number
+- create hands-on lab `` TASK={lab_number} make run_eks_task ``
+- find ``worker_pc_ip`` in ``terraform output``
+- log in to worker_pc node via ssh ``ssh ubuntu@{worker_pc_ip} -i {key}``
+- read lab descriptions in ``{lab_number}/README.MD``
+- check solution in ``{lab_number}/SOLUTION.MD``
+- delete eks lab cluster ``make delete_eks_task``
+
+## Multiple users environments
+
+### Why is it needed?
+
+- To create many identical, independent environments, e.g. for a group of students.
+
+- To create several independent environments for one student with different tasks.
+
+To create an independent environment, set the additional variables `USER_ID='myuser' ENV_ID='01'` before running the make command.
+
+[for example](https://youtu.be/3H0RMLXGmgg):
+
+- `USER_ID='myuser' ENV_ID='3' TASK=01 make run_ckad_mock` - create environment **3** for user **myuser** with task set **01** ckad mock
+- `USER_ID='myuser' ENV_ID='3' TASK=01 make delete_ckad_mock` - delete environment **3** for user **myuser** with task set **01** ckad mock
+
+- ``make cmdb_get_env_all`` - get a list of all resources in CMDB
+- ``USER_ID='myuser' ENV_ID='01' make cmdb_get_user_env_data`` - show all created resources of user **myuser** in environment **01**
+- ``USER_ID='myuser' ENV_ID='01' make cmdb_get_user_env_lock`` - show all lock resources of user **myuser** in environment **01**
+- ``USER_ID='myuser' make cmdb_get_user_env_lock`` - show all lock resources of user **myuser** in **all** environments
+- ``USER_ID='myuser' make cmdb_get_user_env_data`` - show all data resources of user **myuser** in **all** environments
+- ``CMDB_ITEM='CMDB_data_myuser_01_k8s_cluster1' make cmdb_get_item`` - getting detailed information about **CMDB_data_myuser_01_k8s_cluster1** resource.
\ No newline at end of file
diff --git a/docs/getting_started.md b/docs/getting_started.md
new file mode 100644
index 00000000..5afac04c
--- /dev/null
+++ b/docs/getting_started.md
@@ -0,0 +1,90 @@
+## Welcome to SRE learning platform
+
+The **SRE Learning Platform** is an open-source hub designed to help IT engineers effectively prepare for the **CKA (Certified Kubernetes Administrator)**, **CKS (Certified Kubernetes Security Specialist)**, **CKAD (Certified Kubernetes Application Developer)**, and **LFCS (Linux Foundation Certified System Administrator)** exams. Additionally, this platform offers invaluable hands-on experience with **AWS EKS (Elastic Kubernetes Service)**, equipping users with practical insights for real-world applications. Whether you're aiming to validate your skills, boost your career prospects in Kubernetes administration, security, application development, or delve into AWS EKS, this platform provides hands-on labs, practice tests, and expert guidance to ensure certification success.
+
+- Prepare for the **CKA**: [Certified Kubernetes Administrator Exam](https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/)
+- Enhance your skills for the **CKS**: [Certified Kubernetes Security Specialist Exam](https://training.linuxfoundation.org/certification/certified-kubernetes-security-specialist/)
+- Excel in the **CKAD**: [Certified Kubernetes Application Developer Exam](https://training.linuxfoundation.org/certification/certified-kubernetes-application-developer-ckad/)
+- Prepare for the **LFCS**: [Linux Foundation Certified System Administrator](https://training.linuxfoundation.org/certification/linux-foundation-certified-sysadmin-lfcs/)
+
+Master Kubernetes concepts, gain practical experience, and excel in the CKA, CKS, CKAD, and LFCS exams with the **SRE Learning Platform**.
+
+[![video instruction](../static/img/run_via_docker.gif)](https://youtu.be/Xh6sWzafBmw "run via docker")
+
+## Run platform via docker
+
+We have prepared a docker image that includes all the necessary dependencies and utilities.
+
+You can use it to run exams or labs by following the instructions below, or use the [video instructions](https://youtu.be/Xh6sWzafBmw).
+
+### Run the docker container
+
+```sh
+sudo docker run -it viktoruj/runner
+```
+
+### Clone the git repo
+
+```sh
+git clone https://github.com/ViktorUJ/cks.git
+
+cd cks
+```
+
+### Update S3 bucket
+
+```hcl
+#vim terraform/environments/terragrunt.hcl
+
+locals {
+ region = "eu-north-1"
+ backend_region = "eu-north-1"
+ backend_bucket = "sre-learning-platform-state-backet" # update to your own name
+ backend_dynamodb_table = "${local.backend_bucket}-lock"
+}
+```
+
+### Set the aws key
+
+```sh
+export AWS_ACCESS_KEY_ID=Your_Access_Key
+export AWS_SECRET_ACCESS_KEY=Your_Secret_Access_Key
+```
+
+### Run your scenario
+
+#### For single environment
+
+```sh
+TASK=01 make run_cka_mock
+```
+
+#### For multiple users or multiple environments
+
+```sh
+USER_ID='user1' ENV_ID='01' TASK=01 make run_cka_mock
+```
+
+### Delete your scenario
+
+#### For single environment
+
+```sh
+TASK=01 make delete_cka_mock
+```
+
+#### For multiple users or multiple environments
+
+```sh
+USER_ID='user1' ENV_ID='01' TASK=01 make delete_cka_mock
+```
+
+## Requirements
+
+- [GNU Make](https://www.gnu.org/software/make/) >= 4.2.1
+- [terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) >= v1.6.6
+- [terragrunt](https://terragrunt.gruntwork.io/docs/getting-started/install/) >= v0.54.8
+- [jq](https://jqlang.github.io/jq/download/) >= 1.6
+- [aws IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) + [Access key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) (or [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) ) with [Admin privilege for VPC, EC2, IAM, EKS](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html)
+- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-version.html) > 2.2.30
+- [aws profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html)
diff --git a/docs/labs.MD b/docs/labs.MD
deleted file mode 100644
index de2d5b2b..00000000
--- a/docs/labs.MD
+++ /dev/null
@@ -1,74 +0,0 @@
-## CKS labs
-- ``TASK=01 make run_cks_task`` - create cks lab number 01
-- ``TASK=01 make delete_cks_task`` - delete cks hands-on labs
-- ``TASK=01 make run_cks_task_clean`` - run cks_task with clean terragrunt cache for cks_task
-- ``make output_cks_task `` - show **outputs** from **cks_task**
-
-
-
-| Task | Description | Solution |
-|--------|------------------------------------------------------|------------------------------|
-| **01** | [kubectl contexts](..%2Ftasks%2Fcks%2Flabs%2F01%2FREADME.MD)| [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F01%2FSOLUTION.MD) |
-| **02** | [Falco, sysdig](..%2Ftasks%2Fcks%2Flabs%2F02%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F02%2FSOLUTION.MD) |
-| **03** | [Kube-api. disable access via nodePort](..%2Ftasks%2Fcks%2Flabs%2F03%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F03%2FSOLUTION.MD) |
-| **04** | [Pod Security Standard](..%2Ftasks%2Fcks%2Flabs%2F04%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F04%2FSOLUTION.MD) |
-| **05** | [CIS Benchmark](..%2Ftasks%2Fcks%2Flabs%2F05%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F05%2FSOLUTION.MD) |
-| **07** | [Open Policy Agent - Blacklist Images](..%2Ftasks%2Fcks%2Flabs%2F07%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F07%2FSOLUTION.MD) |
-| **09** | [AppArmor](..%2Ftasks%2Fcks%2Flabs%2F09%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F09%2FSOLUTION.MD) |
-| **10** | [Container Runtime Sandbox gVisor](..%2Ftasks%2Fcks%2Flabs%2F10%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F10%2FSOLUTION.MD) |
-| **11** | [Read the complete Secret content directly from ETCD](..%2Ftasks%2Fcks%2Flabs%2F11%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F11%2FSOLUTION.MD) |
-| **17** | [Enable audit log](..%2Ftasks%2Fcks%2Flabs%2F17%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F17%2FSOLUTION.MD) |
-| **19** | [Fix Dockerfile](..%2Ftasks%2Fcks%2Flabs%2F19%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F19%2FSOLUTION.MD) |
-| **20** | [Update Kubernetes cluster](..%2Ftasks%2Fcks%2Flabs%2F20%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F20%2FSOLUTION.MD) |
-| **21** | [Image Vulnerability Scanning](..%2Ftasks%2Fcks%2Flabs%2F21%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F21%2FSOLUTION.MD) |
-| **22** | [Network policy](..%2Ftasks%2Fcks%2Flabs%2F22%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F22%2FSOLUTION.MD) |
-| **23** | [Set tls version and allowed ciphers for etcd, kube-api](..%2Ftasks%2Fcks%2Flabs%2F23%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F23%2FSOLUTION.MD) |
-| **24** | [Encrypt secrets in ETCD](..%2Ftasks%2Fcks%2Flabs%2F24%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F24%2FSOLUTION.MD) |
-| **25** | [Image policy webhook](..%2Ftasks%2Fcks%2Flabs%2F25%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcks%2Flabs%2F25%2FSOLUTION.MD) |
-
-
-
-## CKA labs
-
-- ``TASK=01 make run_cka_task`` - create cka lab number 01
-- ``TASK=01 make delete_cka_task`` - delete cka hands-on labs
-- ``TASK=01 make run_cka_task_clean`` - run cka_task with clean terragrunt cache for cka_task
-- ``make output_cka_task `` - show **outputs** from **cka_task**
-
-
-| Task | Description | Solution |
-|--------|----------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------|
-| **01** | [Fix problem with kube-api ](..%2Ftasks%2Fcka%2Flabs%2F01%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F01%2Fworker%2Ffiles%2Fsolutions%2F1.MD) [VIDEO SOLUTION](https://youtu.be/OFHiI_XAXNU) |
-| **02** | [Horizontal Pod Autoscaling .CPU ](..%2Ftasks%2Fcka%2Flabs%2F02%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F02%2Fworker%2Ffiles%2Fsolutions%2F1.MD) |
-| **03** | [Nginx ingress. Routing by header ](..%2Ftasks%2Fcka%2Flabs%2F03%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F03%2Fworker%2Ffiles%2Fsolutions%2F1.MD) [VIDEO SOLUTION](https://youtu.be/1-qA7RjSx4A) |
-| **04** | [Nginx ingress. Routing 30% of requests to new version of app.](..%2Ftasks%2Fcka%2Flabs%2F04%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F04%2Fworker%2Ffiles%2Fsolutions%2F1.MD) [VIDEO SOLUTION](https://youtu.be/IC_0FeQtgwA) |
-| **05** | [PriorityClass.](..%2Ftasks%2Fcka%2Flabs%2F05%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F05%2Fworker%2Ffiles%2Fsolutions%2F1.MD) [VIDEO SOLUTION](https://youtu.be/7MhXfbiMfOM) |
-| **06** | [Create general resources (Namespace, Deployment, Service)](..%2Ftasks%2Fcka%2Flabs%2F06%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F06%2Fworker%2Ffiles%2Fsolutions%2F1.MD) [VIDEO SOLUTION](https://youtu.be/vqs_SUjKee8) |
-| **07** | [CPU throttle](..%2Ftasks%2Fcka%2Flabs%2F07%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F07%2Fworker%2Ffiles%2Fsolutions%2F1.MD)|
-
-## CKAD labs
-
-- ``TASK=01 make run_ckad_task`` - create ckad lab number 01
-- ``TASK=01 make delete_ckad_task`` - delete ckad hands-on labs
-- ``TASK=01 make run_ckad_task_clean`` - run cka_task with clean terragrunt cache for ckad_task
-- ``make output_ckad_task `` - show **outputs** from **ckad_task**
-
-
-| Task | Description | Solution |
-|--------|---------------------------------------------------|------------------------------|
-| **01** | [test ](..%2Ftasks%2Fcka%2Flabs%2F02%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F02%2Fworker%2Ffiles%2Fsolutions%2F1.MD) |
-
-
-
-
-## EKS labs
-
-- ``TASK=01 make run_eks_task`` - create ckad lab number 01
-- ``TASK=01 make delete_eks_task`` - delete ckad hands-on labs
-- ``TASK=01 make run_eks_task_clean`` - run cka_task with clean terragrunt cache for ckad_task
-- ``make output_eks_task `` - show **outputs** from **ckad_task**
-
-
-| Task | Description | Solution |
-|--------|---------------------------------------------------|------------------------------|
-| **01** | [test ](..%2Ftasks%2Fcka%2Flabs%2F02%2FREADME.MD) | [SOLUTION](..%2Ftasks%2Fcka%2Flabs%2F02%2Fworker%2Ffiles%2Fsolutions%2F1.MD) |
diff --git a/docs/links.MD b/docs/links.MD
deleted file mode 100644
index d3a27b8f..00000000
--- a/docs/links.MD
+++ /dev/null
@@ -1,18 +0,0 @@
-| Related | Type | Links |
-|:---------------:|:-----------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------:|
-| CKA,CKS,CKAD | list of tools | [github kubetools](https://github.com/collabnix/kubetools) |
-| CKA,CKS,CKAD | community-faq | [kodekloudhub](https://github.com/kodekloudhub/community-faq) |
-| CKA,CKS,CKAD | about the Kubernetes certifications | [bakavetsGitHub](https://gist.github.com//05681473ca617579156de033ba40ee7a) |
-| CKA,CKS,CKAD | video course | [bakavets](https://www.youtube.com/watch?v=Amkkr4_nsyc&list=PL3SzV1_k2H1VDePbSWUqERqlBXIk02wCQ) |
-| CKA | learn path | [github vedmichv](https://github.com/vedmichv/CKA-learn-path/) |
-| CKA | course | [CKA Course](https://github.com/kodekloudhub/certified-kubernetes-administrator-course) |
-| CKS | video course | [ killer.sh CKS ](https://www.youtube.com/watch?v=d9xfB5qaOfg) |
-| CKS | examples github | [killer.sh github cks-course](https://github.com/killer-sh/cks-course-environment) |
-| CKS | falco-101 course | [falco-101](https://learn.sysdig.com/falco-101) |
-| CKS | exam questions | [github ramanagali](https://github.com/ramanagali/Interview_Guide/blob/main/CKS_Preparation_Guide.md) |
-| CKS | exam questions | [github walidshaari](https://github.com/walidshaari/Certified-Kubernetes-Security-Specialist) |
-| CKS | sysdig cli | [sysdig-platform-cli](https://sysdiglabs.github.io/sysdig-platform-cli/) |
-| CKS | checklist | [container-security-checklist](https://github.com/krol3/container-security-checklist#secure-the-container-registry) |
-| CKS | publication | [The Path to Passing the CKS Exam](https://hackernoon.com/the-path-to-passing-the-cks-exam-from-challenges-to-creating-a-simulator) |
-| CKS | rules library for OPA Gatekeeper | [rules library](https://cloud.google.com/anthos-config-management/docs/latest/reference/constraint-template-library) |
-| CKS | video course | [Learn with GVR](https://www.youtube.com/watch?v=jvmShTBSBoA&list=PLFkEchqXDZx6Bw3B2NRVc499j1TavjOvm) |
diff --git a/docs/multiple_users_envs.MD b/docs/multiple_users_envs.MD
deleted file mode 100644
index 3b474e98..00000000
--- a/docs/multiple_users_envs.MD
+++ /dev/null
@@ -1,21 +0,0 @@
-# Why is it needed?
-
-- to create many identical, independent environments, e.g. for a group of students.
-
-- to create several independent environments for one student with different tasks.
-
-To create an independent environment, set the additional variables USER_ID='myuser' ENV_ID='01' before running the make command.
-
-[for example](https://youtu.be/3H0RMLXGmgg) :
-
-- `USER_ID='myuser' ENV_ID='3' TASK=01 make run_ckad_mock` - create environment **3** for user **myuser** with task set **01** ckad mock
-- `USER_ID='myuser' ENV_ID='3' TASK=01 make delete_ckad_mock` - delete environment **3** for user **myuser** with task set **01** ckad mock
-
-
-- ``make cmdb_get_env_all`` - get a list of all resources in CMDB
-- ``USER_ID='myuser' ENV_ID='01' make cmdb_get_user_env_data`` - show all created resources of user **myuser** in environment **01**
-- ``USER_ID='myuser' ENV_ID='01' make cmdb_get_user_env_lock`` - show all lock resources of user **myuser** in environment **01**
-- ``USER_ID='myuser' make cmdb_get_user_env_lock`` - show all lock resources of user **myuser** in **all** environments
-- ``USER_ID='myuser' make cmdb_get_user_env_data`` - show all data resources of user **myuser** in **all** environments
-- ``CMDB_ITEM='CMDB_data_myuser_01_k8s_cluster1' make cmdb_get_item`` - get detailed information about the **CMDB_data_myuser_01_k8s_cluster1** resource.
diff --git a/docs/run_from_docker.MD b/docs/run_from_docker.MD
deleted file mode 100644
index 1506006a..00000000
--- a/docs/run_from_docker.MD
+++ /dev/null
@@ -1,58 +0,0 @@
-# Run platform via docker
-
-We have prepared a Docker image that includes all the necessary dependencies and utilities.
-
-You can use it to run exams or labs by following the instructions below, or use the [video instructions](https://youtu.be/Xh6sWzafBmw).
-
-### Run the Docker container
-```
-sudo docker run -it viktoruj/runner
-```
-### Clone the git repo
-```
-git clone https://github.com/ViktorUJ/cks.git
-
-cd cks
-```
-### Update S3 bucket
-```
-#vim terraform/environments/terragrunt.hcl
-
-
-locals {
- region = "eu-north-1"
- backend_region = "eu-north-1"
- backend_bucket = "sre-learning-platform-state-backet" # update to your own name
- backend_dynamodb_table = "${local.backend_bucket}-lock"
-}
-
-```
-### Set the AWS keys
-```
-export AWS_ACCESS_KEY_ID=Your_Access_Key
-export AWS_SECRET_ACCESS_KEY=Your_Secret_Access_Key
-```
-
-### Run your scenario
-
-#### For single environment
-```
-TASK=01 make run_cka_mock
-```
-#### For multiple users or multiple environments
-
-```
-USER_ID='user1' ENV_ID='01' TASK=01 make run_cka_mock
-```
-
-### Delete your scenario
-
-#### For single environment
-```
-TASK=01 make delete_cka_mock
-```
-#### For multiple users or multiple environments
-```
-USER_ID='user1' ENV_ID='01' TASK=01 make delete_cka_mock
-```
diff --git a/docs/tips_and_tricks.MD b/docs/tips_and_tricks.MD
index 0b16d3ad..6a06f62b 100644
--- a/docs/tips_and_tricks.MD
+++ b/docs/tips_and_tricks.MD
@@ -23,8 +23,11 @@ The Vim editor is used by default, which means you should be aware how to use sh
## aliases
-`export do="--dry-run=client -o yaml" `
-````
+```sh
+export do="--dry-run=client -o yaml"
+```
+
+```sh
 # usage for creating a pod template
k run test --image nginx $do
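+# the $do alias also works with the other kubectl generators;
+# a minimal sketch - the `web` and `web-ns` names below are just example resource names
+k create deploy web --image nginx $do > web.yaml
+k create ns web-ns $do > web-ns.yaml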
diff --git a/docs/useful_links.md b/docs/useful_links.md
new file mode 100644
index 00000000..1a6c6a4e
--- /dev/null
+++ b/docs/useful_links.md
@@ -0,0 +1,20 @@
+## Resources
+
+| Exam(s) | Type | Links |
+| :----------: | :---------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------: |
+| CKA,CKS,CKAD | list of Kubernetes tools | [github kubetools](https://github.com/collabnix/kubetools) |
+| CKA,CKS,CKAD | community FAQ | [kodekloudhub](https://github.com/kodekloudhub/community-faq) |
+| CKA,CKS,CKAD | about the Kubernetes certifications | [bakavetsGitHub](https://gist.github.com//05681473ca617579156de033ba40ee7a) |
+| CKA,CKS,CKAD | video course | [bakavets](https://www.youtube.com/watch?v=Amkkr4_nsyc&list=PL3SzV1_k2H1VDePbSWUqERqlBXIk02wCQ) |
+| CKA | learning path | [github vedmichv](https://github.com/vedmichv/CKA-learn-path/) |
+| CKA | course | [CKA Course](https://github.com/kodekloudhub/certified-kubernetes-administrator-course) |
+| CKS | video course | [killer.sh CKS](https://www.youtube.com/watch?v=d9xfB5qaOfg) |
+| CKS | course environment on GitHub | [killer.sh github cks-course](https://github.com/killer-sh/cks-course-environment) |
+| CKS | falco-101 course | [falco-101](https://learn.sysdig.com/falco-101) |
+| CKS | exam questions | [github ramanagali](https://github.com/ramanagali/Interview_Guide/blob/main/CKS_Preparation_Guide.md) |
+| CKS | exam questions | [github walidshaari](https://github.com/walidshaari/Certified-Kubernetes-Security-Specialist) |
+| CKS | Sysdig CLI | [sysdig-platform-cli](https://sysdiglabs.github.io/sysdig-platform-cli/) |
+| CKS | checklist | [container-security-checklist](https://github.com/krol3/container-security-checklist#secure-the-container-registry) |
+| CKS | publication | [The Path to Passing the CKS Exam](https://hackernoon.com/the-path-to-passing-the-cks-exam-from-challenges-to-creating-a-simulator) |
+| CKS | rules library for OPA Gatekeeper | [rules library](https://cloud.google.com/anthos-config-management/docs/latest/reference/constraint-template-library) |
+| CKS | video course | [Learn with GVR](https://www.youtube.com/watch?v=jvmShTBSBoA&list=PLFkEchqXDZx6Bw3B2NRVc499j1TavjOvm) |