diff --git a/src/components/App.js b/src/components/App.js
index 20021e18..de343307 100644
--- a/src/components/App.js
+++ b/src/components/App.js
@@ -218,7 +218,7 @@ cat install.sh | sudo bash -s airgap
{installerData && installerData.spec.flannel && installerData.spec.flannel.version &&
- TCP ports 2379, 2380, 6443, 10250, 10251 and 10252 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 10250, 10257 and 10259 open between cluster nodes
}
{installerData && installerData.spec.flannel && installerData.spec.flannel.version &&
@@ -226,7 +226,7 @@ cat install.sh | sudo bash -s airgap
}
{installerData && installerData.spec.weave && installerData.spec.weave.version &&
- TCP ports 2379, 2380, 6443, 6783, 10250, 10251 and 10252 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 6783, 10250, 10257 and 10259 open between cluster nodes
}
{installerData && installerData.spec.weave && installerData.spec.weave.version &&
@@ -236,10 +236,10 @@ cat install.sh | sudo bash -s airgap
{installerData.spec.antrea.isEncryptionDisabled ?
- TCP ports 2379, 2380, 6443, 8091, 10250, 10251 and 10252 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 8091, 10250, 10257 and 10259 open between cluster nodes
:
- TCP ports 2379, 2380, 6443, 8091, 10250, 10251, 10252 and 51820 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 8091, 10250, 10257, 10259 and 51820 open between cluster nodes
}
}
{installerData && installerData.spec.antrea && installerData.spec.antrea.version &&
diff --git a/src/markdown-pages/add-ons/rook.md b/src/markdown-pages/add-ons/rook.md
index e3385492..71c15b46 100644
--- a/src/markdown-pages/add-ons/rook.md
+++ b/src/markdown-pages/add-ons/rook.md
@@ -28,6 +28,16 @@ spec:
flags-table
+## System Requirements
+
+The following ports must be open between nodes for multi-node clusters:
+
+| Protocol | Direction | Port Range | Purpose | Used By |
+| ------- | --------- | ---------- | ----------------------- | ------- |
+| TCP | Inbound | 9090 | CSI RBD Plugin Metrics | All |
+
+The `/var/lib/rook/` directory requires at least 10 GB of space available for Ceph monitor metadata.
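+
+As a quick pre-install sanity check (a sketch; `firewall-cmd` assumes a firewalld-based host, and the volume backing `/var/lib/rook/` may differ on your systems):
+
+```bash
+# Confirm TCP 9090 is allowed between nodes on firewalld-based hosts
+sudo firewall-cmd --list-ports
+
+# Confirm the volume that will back /var/lib/rook/ has at least 10 GB free
+df -h /var/lib
+```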
+
## Block Storage
Rook versions 1.4.3 and later require a dedicated block device attached to each node in the cluster.
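+
+One minimal way to confirm that a suitable device is attached to a node (illustrative; the device name `/dev/sdb` is hypothetical):
+
+```bash
+# The dedicated disk (for example /dev/sdb) should show no FSTYPE and no mount point
+lsblk -f
+```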
@@ -60,7 +70,7 @@ Additionally, `blockDeviceFilter` instructs Rook to use only block devices that
For more information about the available options, see [Advanced Install Options](#advanced-install-options) above.
The Rook add-on waits for the dedicated disk that you attached to your node before continuing with installation.
-If you attached a disk to your node, but the installer is waiting at the Rook add-on installation step, see [OSD pods are not created on my devices](https://rook.io/docs/rook/v1.0/ceph-common-issues.html#osd-pods-are-not-created-on-my-devices) in the Rook documentation for troubleshooting information.
+If you attached a disk to your node, but the installer is waiting at the Rook add-on installation step, see [OSD pods are not created on my devices](https://rook.io/docs/rook/v1.10/Troubleshooting/ceph-common-issues/#osd-pods-are-not-created-on-my-devices) in the Rook documentation for troubleshooting information.
## Filesystem Storage
@@ -68,21 +78,17 @@ By default, for Rook versions earlier than 1.4.3, the cluster uses the filesyste
However, block storage is recommended for Rook in production clusters.
For more information, see [Block Storage](#block-storage) above.
-When using the filesystem for storage, each node in the cluster has a single OSD backed by a directory in `/opt/replicated/rook`.
-Nodes with a Ceph Monitor also use `/var/lib/rook`.
-
-Sufficient disk space must be available to `/var/lib/rook` for the Ceph Monitors and other configs. For disk requirements, see [Add-on Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements/#add-on-directory-disk-space-requirements).
-
-We recommend a separate partition to prevent a disruption in Ceph's operation as a result of `/var` or the root partition running out of space.
+When using the filesystem for storage, each node in the cluster has a single OSD backed by a directory in `/opt/replicated/rook/`.
+We recommend a separate disk or partition at `/opt/replicated/rook/` to prevent a disruption in Ceph's operation as a result of the root partition running out of space.
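+
+For illustration only (the device name `/dev/sdb` is hypothetical; adapt the filesystem and device to your environment), a dedicated disk could be mounted at this path before installation:
+
+```bash
+# Format a spare disk and mount it at Rook's filesystem-storage directory
+sudo mkfs.ext4 /dev/sdb
+sudo mkdir -p /opt/replicated/rook
+sudo mount /dev/sdb /opt/replicated/rook
+echo '/dev/sdb /opt/replicated/rook ext4 defaults 0 0' | sudo tee -a /etc/fstab
+```
+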
**Note**: All disks used for storage in the cluster should be of similar size.
A cluster with large discrepancies in disk size may fail to replicate data to all available nodes.
## Shared Filesystem
-The [Ceph filesystem](https://rook.io/docs/rook/v1.4/ceph-filesystem.html) is supported with version 1.4.3+.
+The [Ceph filesystem](https://rook.io/docs/rook/v1.10/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/) is supported with version 1.4.3+.
This allows the use of PersistentVolumeClaims with access mode `ReadWriteMany`.
-Set the storage class to `rook-cephfs` in the pvc spec to use this feature.
+Set the storage class to `rook-cephfs` in the PVC spec to use this feature.
```yaml
apiVersion: v1
@@ -98,14 +104,6 @@ spec:
storageClassName: rook-cephfs
```
-## System Requirements
-
-The following additional ports must be open between nodes for multi-node clusters:
-
-| Protocol | Direction | Port Range | Purpose | Used By |
-| ------- | --------- | ---------- | ----------------------- | ------- |
-| TCP | Inbound | 9090 | CSI RBD Plugin Metrics | All |
-
## Upgrades
It is now possible to upgrade multiple minor versions of the Rook add-on at once.
@@ -121,10 +119,10 @@ For example:
curl https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s rook-upgrade to-version=1.10
```
-Rook upgrades from 1.0.x migrate data off of any hostpath-based OSDs in favor of block device-based OSDs.
+Rook upgrades from 1.0.x migrate data off of any filesystem-based OSDs in favor of block device-based OSDs.
The upstream Rook project introduced a requirement for block storage in versions 1.3.x and later.
-## Monitor Rook Ceph
+## Monitoring
For Rook version 1.9.12 and later, when you install with both the Rook add-on and the Prometheus add-on, kURL enables Ceph metrics collection and creates a Ceph cluster statistics Grafana dashboard.
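+
+One way to view the dashboard is to port-forward Grafana locally (a sketch; it assumes the Prometheus add-on's default `monitoring` namespace and a Service named `grafana`, which may differ in your cluster):
+
+```bash
+kubectl -n monitoring port-forward svc/grafana 3000
+# Then open http://localhost:3000 and look for the Ceph cluster dashboard
+```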
diff --git a/src/markdown-pages/install-with-kurl/managing-nodes.md b/src/markdown-pages/install-with-kurl/managing-nodes.md
index b0981e85..30e4b3b5 100644
--- a/src/markdown-pages/install-with-kurl/managing-nodes.md
+++ b/src/markdown-pages/install-with-kurl/managing-nodes.md
@@ -104,7 +104,7 @@ Complete the following prerequisites before you remove one or more nodes from a
* Upgrade Rook Ceph to v1.4 or later.
- The two latest minor releases of Rook Ceph are actively maintained. It is recommended to upgrade to the latest stable release available. For more information, see [Release Cycle](https://rook.io/docs/rook/latest/Getting-Started/release-cycle/) in the Rook Ceph documentation.
+ The two latest minor releases of Rook Ceph are actively maintained. It is recommended to upgrade to the latest stable release available. For more information, see [Release Cycle](https://rook.io/docs/rook/v1.10/Getting-Started/release-cycle/) in the Rook Ceph documentation.
Attempting to remove a node from a cluster that uses a Rook Ceph version earlier than v1.4 can cause Ceph to enter an unhealthy state. For example, see [Rook Ceph v1.0.4 is Unhealthy with Mon Pods Not Rescheduled](#rook-ceph-v104-is-unhealthy-with-mon-pods-not-rescheduled) under _Troubleshoot Node Removal_ below.
@@ -115,7 +115,7 @@ Complete the following prerequisites before you remove one or more nodes from a
* (Recommended) Use the `rook-ceph-tools` Pod to access the ceph CLI.
Use the same version of the Rook toolbox as the version of Rook Ceph that is installed in the cluster.
By default, the `rook-ceph-tools` Pod is included on kURL clusters with Rook Ceph v1.4 and later.
- For more information about `rook-ceph-tools` Pods, see [Rook Toolbox](https://rook.io/docs/rook/v1.5/ceph-toolbox.html) in the Rook Ceph documentation.
+ For more information about `rook-ceph-tools` Pods, see [Rook Toolbox](https://rook.io/docs/rook/v1.10/Troubleshooting/ceph-toolbox/) in the Rook Ceph documentation.
* Use `kubectl exec` to enter the `rook-ceph-operator` Pod, where the ceph CLI is available.
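+
+For example, to check Ceph health from the toolbox (a sketch, assuming the default `rook-ceph` namespace):
+
+```bash
+kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
+```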
diff --git a/src/markdown-pages/install-with-kurl/system-requirements.md b/src/markdown-pages/install-with-kurl/system-requirements.md
index fef1151b..64fc9fd6 100644
--- a/src/markdown-pages/install-with-kurl/system-requirements.md
+++ b/src/markdown-pages/install-with-kurl/system-requirements.md
@@ -16,43 +16,71 @@ title: "System Requirements"
* Oracle Linux 7.4\*, 7.5\*, 7.6\*, 7.7\*, 7.8\*, 7.9, 8.0\*, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7 (OL 8.x requires Containerd)
* Amazon Linux 2
-*: This version is deprecated since it is no longer supported by its creator. We continue to support it, but support will be removed in the future.
+** This version is deprecated because it is no longer supported by its vendor. We continue to support it, but this support will be removed in the future.*
## Minimum System Requirements
* 4 AMD64 CPUs or equivalent per machine
* 8 GB of RAM per machine
-* 40 GB of Disk Space per machine
-* The Rook add-on version 1.4.3 and later requires a dedicated block device on each node in the cluster.
- For more information about how to enable block storage for Rook, see [Block Storage](/docs/add-ons/rook#block-storage) in _Rook Add-On_.
-* TCP ports 2379, 2380, 6443, 10250, 10251 and 10252 open between cluster nodes
- * **Note**: When [Flannel](/docs/add-ons/flannel) is enabled, UDP port 8472 open between cluster nodes
- * **Note**: When [Weave](/docs/add-ons/weave) is enabled, TCP port 6783 and UDP port 6783 and 6784 open between cluster nodes
+* 100 GB of Disk Space per machine
+ *(For more specific requirements, see [Disk Space Requirements](#disk-space-requirements) below)*
+* TCP ports 2379, 2380, 6443, 10250, 10257, and 10259, as well as UDP port 8472 (Flannel VXLAN), open between cluster nodes (see the firewall example below)
+ *(For more specific add-on requirements, see [Networking Requirements](#networking-requirements) below)*
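+
+On firewalld-based distributions, for example, these ports could be opened with something like the following (illustrative; adjust zones and rules for your environment and CNI):
+
+```bash
+sudo firewall-cmd --permanent --add-port={2379,2380,6443,10250,10257,10259}/tcp
+sudo firewall-cmd --permanent --add-port=8472/udp  # Flannel VXLAN
+sudo firewall-cmd --reload
+```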
-## Core Directory Disk Space Requirements
+## Disk Space Requirements
+
+### Core Requirements
The following table lists information about the core directory requirements.
-| Name | Location | Requirements |
-| ------------| ------------------- | -------------------------------------------------- |
-| etcd | /var/lib/etcd/ | This directory has a high I/O requirement. See [Cloud Disk Performance](/docs/install-with-kurl/system-requirements#cloud-disk-performance). |
-| kURL | /var/lib/kurl/ | 5 GB <br/> kURL installs additional dependencies in the directory /var/lib/kurl/, including utilities, system packages, and container images. This directory must be writeable by the kURL installer and must have sufficient disk space. <br/> This directory can be overridden with the flag `kurl-install-directory`. See [kURL Advanced Install Options](/docs/install-with-kurl/advanced-options). |
-| kubelet | /var/lib/kubelet/ | 30 GiB and less than 80% full. See [Host Preflights](/docs/install-with-kurl/host-preflights). |
+| Name | Location | Requirements | Description |
+| -------------- | -------------------- | ------------------ | ----------- |
+| etcd | /var/lib/etcd/ | 2 GB | Kubernetes etcd cluster data directory. See the [etcd documentation](https://etcd.io/docs/v3.5/op-guide/hardware/#disks) and [Cloud Disk Performance](#cloud-disk-performance) for more information and recommendations. |
+| kubelet | /var/lib/kubelet/ | *30 GB ** | Used for local disk volumes, emptyDir, log storage, and more. See the Kubernetes [Resource Management for Pods and Containers documentation](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage) for more information. |
+| containerd | /var/lib/containerd/ | *30 GB ** | Snapshots, content, and metadata for containers and images, as well as any plugin data, are kept in this location. See the [containerd documentation](https://github.com/containerd/containerd/blob/main/docs/ops.md#base-configuration) for more information. |
+| kube-apiserver | /var/log/apiserver/ | 1 GB | Kubernetes audit logs. See the Kubernetes [Auditing documentation](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/) for more information. |
+| kURL | /var/lib/kurl/ | *10 GB *** | kURL data directory used to store utilities, system packages, and container images. This directory can be overridden with the flag `kurl-install-directory` (see [kURL Advanced Install Options](/docs/install-with-kurl/advanced-options)) |
+| Root Disk | / | 100 GB | Based on the aggregate requirements above and the fact that Kubernetes starts to reclaim disk space once a disk is 85% full, the minimum recommended root partition is 100 GB. See details above for each component. |
+
+** This requirement depends on the size of the container images and the amount of ephemeral data used by your application containers.*
+
+*** This requirement can vary depending on your choice of kURL add-ons and can grow over time.*
-## Add-on Directory Disk Space Requirements
+In addition to the storage requirements, the Kubernetes [garbage collection](https://kubernetes.io/docs/concepts/architecture/garbage-collection/) process attempts to ensure that the Node and Image filesystems do not reach their minimum available disk space [thresholds](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#hard-eviction-thresholds) of 10% and 15%, respectively.
+For this reason, kURL recommends an additional 20% overhead on top of these disk space requirements for the volume or volumes containing the directories /var/lib/kubelet/ and /var/lib/containerd/.
+For more information, see the Kubernetes [Reclaiming node level resources](https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/#reclaim-node-resources) documentation.
+
+### Add-on Requirements
The following table lists the add-on directory locations and disk space requirements, if applicable. For any additional requirements, see the specific topic for the add-on.
-| Name | Location | Requirements |
-| --------------| ------------------- | ------------------------------|
-| Containerd | /var/lib/containerd/ | N/A |
-| Docker | /var/lib/docker/ <br/> /var/lib/dockershim/ | Docker: 30 GB and less than 80% full <br/> Dockershim: N/A <br/> See [Docker Add-on](/docs/add-ons/docker). |
-| Longhorn | /var/lib/longhorn/ | This directory should have enough space to hold a complete copy of every PersistentVolumeClaim that will be in the cluster. See [Longhorn Add-on](/docs/add-ons/longhorn). <br/> For host preflights, it should have 50GiB total space and be less than 80% full. See [Host Preflights](/docs/install-with-kurl/host-preflights). |
-| OpenEBS | /var/openebs/ | N/A |
-| Rook | Versions earlier than 1.0.4-x: /opt/replicated/rook <br/> Versions 1.0.4-x and later: /var/lib/rook/ | /opt/replicated/rook requires a minimum of 10GB and less than 80% full. <br/> /var/lib/rook/ requires a 10 GB block device. <br/> See [Rook Add-on](/docs/add-ons/rook). |
-| Weave | /var/lib/cni/ <br/> /var/lib/weave/ | N/A |
+| Name | Location | Requirements | Description |
+| -------- | -------------------- | ------------- | ----------- |
+| Docker | /var/lib/docker/ | *30 GB ** | Images, containers, volumes, and more are kept in this location. See the [Docker Storage documentation](https://docs.docker.com/storage/) for more information. When using the Docker runtime, /var/lib/containerd/ is not required. |
+| Docker | /var/lib/dockershim/ | N/A | Kubernetes dockershim data directory |
+| Weave | /var/lib/cni/ | N/A | Container networking data directory |
+| Weave | /var/lib/weave/ | N/A | Weave data directory |
+| Rook | /var/lib/rook/ | 10 GB | Ceph monitor metadata directory. See the [ceph-mon Minimum Hardware Recommendations](https://docs.ceph.com/en/quincy/start/hardware-recommendations/#minimum-hardware-recommendations) for more information. |
+| Registry | *PVC *** | N/A | Stores container images in airgapped clusters only. Data is stored in Persistent Volumes. |
+| Velero | *PVC *** | N/A | Stores snapshot data. Data is stored in Persistent Volumes. |
+
+** This requirement depends on the size of the container images and the amount of ephemeral data used by your application containers.*
+
+*** Data will be stored in Persistent Volumes. Requirements depend on the provisioner of choice. See [Persistent Volume Requirements](#persistent-volume-requirements) for more information.*
+
+### Persistent Volume Requirements
+
+Depending on the amount of persistent data stored by your application, you must allocate sufficient disk space at the following locations, according to your choice of PVC provisioner or provisioners (see the example after the table).
+
+| Name | Location | Description |
+| -------------------- | --------------------- | ----------- |
+| OpenEBS | /var/openebs/local/ | OpenEBS Local PV Hostpath volumes will be created under this directory. See the [OpenEBS Add-on](/docs/add-ons/openebs) documentation for more information. |
+| Rook (Block Storage) | | Rook add-on version 1.4.3 and later requires an unformatted storage device on each node in the cluster for Ceph volumes. See the [Rook Block Storage](/docs/add-ons/rook#block-storage) documentation for more information. |
+| Rook (version 1.0.x) | /opt/replicated/rook/ | Rook Filesystem volumes will be created under this directory. See the [Rook Filesystem Storage](/docs/add-ons/rook#filesystem-storage) documentation for more information. |
+| Longhorn | /var/lib/longhorn/ | Longhorn volumes will be created under this directory. See the [Longhorn Add-on](/docs/add-ons/longhorn) documentation for more information. |
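+
+To see which provisioners and volumes an existing cluster is using (illustrative):
+
+```bash
+kubectl get storageclass
+kubectl get pvc --all-namespaces
+```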
## Networking Requirements
+
### Firewall Openings for Online Installations
The following domains need to be accessible from servers performing online kURL installs.
@@ -74,7 +102,6 @@ See [Advanced Options](/docs/install-with-kurl/advanced-options) for installer f
The following ports must be open between nodes for multi-node clusters:
-
#### Primary Nodes:
| Protocol | Direction | Port Range | Purpose | Used By |
@@ -122,7 +149,7 @@ In addition to the networking requirements described in the previous section, op
### Control Plane HA
To operate the Kubernetes control plane in HA mode, it is recommended to have a minimum of 3 primary nodes.
-In the event that one of these nodes becomes unavailable, the remaining two will still be able to function with an etcd quorom.
+In the event that one of these nodes becomes unavailable, the remaining two will still be able to function with an etcd quorum.
As the cluster scales, dedicating these primary nodes to control-plane only workloads using the `noSchedule` taint should be considered.
This will affect the number of nodes that need to be provisioned.
@@ -146,7 +173,7 @@ graph TB
Highly available cluster setups that do not leverage EKCO's [internal load balancing capability](/docs/add-ons/ekco#internal-load-balancer) require a load balancer to route requests to healthy nodes.
The following requirements need to be met for load balancers used on the control plane (primary nodes):
1. The load balancer must be able to route TCP traffic, as opposed to Layer 7/HTTP traffic.
-1. The load balancer must support hairpinning, i.e. nodes referring to eachother through the load balancer IP.
+1. The load balancer must support hairpinning, i.e. nodes referring to each other through the load balancer IP (see the hairpin check below).
* **Note**: On AWS, only internet-facing Network Load Balancers (NLBs) and internal AWS NLBs **using IP targets** (not Instance targets) support this.
1. Load balancer health checks should be configured using TCP probes of port 6443 on each primary node.
1. The load balancer should target each primary node on port 6443.
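+
+A quick hairpin check from any primary node (a sketch; the load balancer address is a placeholder):
+
+```bash
+# Even an HTTP 401/403 response proves TCP connectivity through the load balancer
+curl -k https://<load-balancer-address>:6443/healthz
+```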
@@ -163,7 +190,7 @@ Load balancer requirements for application workloads vary depending on workload.
The following example cloud VM instance/disk combinations are known to provide sufficient performance for etcd and will pass the write latency preflight.
-* AWS m4.xlarge with 80 GB standard EBS root device
+* AWS m4.xlarge with 100 GB standard EBS root device
* Azure D4ds_v4 with 8 GB ultra disk mounted at /var/lib/etcd provisioned with 2400 IOPS and 128 MB/s throughput
-* Google Cloud Platform n1-standard-4 with 50 GB pd-ssd boot disk
+* Google Cloud Platform n1-standard-4 with 100 GB pd-ssd boot disk
* Google Cloud Platform n1-standard-4 with 500 GB pd-standard boot disk
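+
+To gauge write latency on your own hardware (a sketch using fio with the commonly cited etcd benchmark parameters; ensure the target directory exists and has roughly 25 MB free):
+
+```bash
+fio --name=etcd-bench --directory=/var/lib/etcd --rw=write \
+    --ioengine=sync --fdatasync=1 --bs=2300 --size=22m
+```
+
+Look at the reported `fdatasync` percentiles; etcd's guidance is a 99th percentile under 10 ms.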