Replies: 3 comments
-
You didn't include any info on what you're using for your datastore, but since you've got a 3-node cluster I'm guessing you're using managed etcd? Etcd is far more demanding of your storage than SQLite or an external SQL database, as it triggers a full fsync to disk every time there is a write to the datastore. Since Kubernetes is constantly updating internal status for things like keepalives, health checks, component leases, etc., you will see a fairly high baseline IO load even with nothing going on in the cluster. For more information, see:
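If you want to see this directly, k3s' managed etcd exposes Prometheus metrics, and the WAL fsync counters show how often etcd is hitting the disk. A minimal sketch, assuming the default embedded-etcd metrics port (2381) on a server node; verify the port on your own setup:

```shell
# On a k3s server node: count and total duration of WAL fsyncs performed
# by embedded etcd. Port 2381 is assumed; adjust if your config differs.
curl -s http://127.0.0.1:2381/metrics \
  | grep -E 'etcd_disk_wal_fsync_duration_seconds_(count|sum)'
```

Watching that counter for a minute on an otherwise idle cluster gives a feel for the baseline write rate.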
-
Your assumption is 100% correct: I'm using managed etcd.
Now I could consider a different setup, i.e. High Availability with an external DB. Would an external DB reduce the IO load?
-
In general, IO for SQL backends is lower than for etcd. Every write to etcd needs to be written to disk on each node, so if you have three etcd servers then that's three nodes writing a copy of every change. If you have a single database server, then the write only needs to be performed once. Database servers are also usually a bit less eager to write things to disk, and won't force a full fsync for every insert.
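As a sketch of what that switch looks like: k3s takes the external datastore connection string via the `--datastore-endpoint` flag. The host, credentials, and database name below are placeholders, not real values:

```shell
# Install k3s as a server backed by an external MySQL-compatible database
# instead of embedded etcd. Replace the endpoint placeholders with your own.
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://username:password@tcp(db.example.com:3306)/k3s"
```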
-
Hello,
I have made this observation on my host with a k3s setup running 3 master nodes; each master node runs in a KVM guest running Alpine Linux.
There's a constant IO load on the disk caused by the relevant k3s-server-<n> process; the IO load is low, but it's noticeable. This is the output of iotop:
Question:
Is it normal for a k3s cluster that these processes cause IO load even though no "productive" pods are running?
THX
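(For reference, a batch-mode iotop run like the one below captures a comparable snapshot; the flags are standard iotop options, not necessarily the exact invocation used above:)

```shell
# Show only processes actually doing IO, in batch mode: 3 samples, 5s apart.
# Requires root. Flags: -o only-active, -b batch, -n iterations, -d delay.
sudo iotop -o -b -n 3 -d 5
```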