Merge pull request #937 from replicatedhq/emosbaugh/sc-66405/improve-disk-space-requirements-documentation

chore: improve disk space requirements docs
emosbaugh authored Feb 8, 2023
2 parents c808bf2 + 595fd85 commit 5480ff9
Showing 4 changed files with 77 additions and 52 deletions.
8 changes: 4 additions & 4 deletions src/components/App.js
@@ -218,15 +218,15 @@ cat install.sh | sudo bash -s airgap
</li>
{installerData && installerData.spec.flannel && installerData.spec.flannel.version &&
<li className="u-fontSize--small u-color--dustyGray u-fontWeight--medium u-lineHeight--normal">
- TCP ports 2379, 2380, 6443, 10250, 10251 and 10252 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 10250, 10257 and 10259 open between cluster nodes
</li>}
{installerData && installerData.spec.flannel && installerData.spec.flannel.version &&
<li className="u-fontSize--small u-color--dustyGray u-fontWeight--medium u-lineHeight--normal">
UDP port 8472 open between cluster nodes
</li>}
{installerData && installerData.spec.weave && installerData.spec.weave.version &&
<li className="u-fontSize--small u-color--dustyGray u-fontWeight--medium u-lineHeight--normal">
- TCP ports 2379, 2380, 6443, 6783, 10250, 10251 and 10252 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 6783, 10250, 10257 and 10259 open between cluster nodes
</li>}
{installerData && installerData.spec.weave && installerData.spec.weave.version &&
<li className="u-fontSize--small u-color--dustyGray u-fontWeight--medium u-lineHeight--normal">
@@ -236,10 +236,10 @@ cat install.sh | sudo bash -s airgap
<li className="u-fontSize--small u-color--dustyGray u-fontWeight--medium u-lineHeight--normal">
{installerData.spec.antrea.isEncryptionDisabled ?
<span>
- TCP ports 2379, 2380, 6443, 8091, 10250, 10251 and 10252 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 8091, 10250, 10257 and 10259 open between cluster nodes
</span> :
<span>
- TCP ports 2379, 2380, 6443, 8091, 10250, 10251, 10252 and 51820 open between cluster nodes
+ TCP ports 2379, 2380, 6443, 8091, 10250, 10257, 10259 and 51820 open between cluster nodes
</span>}
</li>}
{installerData && installerData.spec.antrea && installerData.spec.antrea.version &&
36 changes: 17 additions & 19 deletions src/markdown-pages/add-ons/rook.md
@@ -28,6 +28,16 @@ spec:
flags-table
+ ## System Requirements
+ The following ports must be open between nodes for multi-node clusters:
+ | Protocol | Direction | Port Range | Purpose | Used By |
+ | ------- | --------- | ---------- | ----------------------- | ------- |
+ | TCP | Inbound | 9090 | CSI RBD Plugin Metrics | All |
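If the hosts use firewalld, a rule along the following lines would open that port between nodes (a sketch, assuming firewalld; adapt it to whatever host firewall is in use):

```bash
# Open the CSI RBD plugin metrics port (9090/tcp) and make the rule persistent
sudo firewall-cmd --permanent --add-port=9090/tcp
sudo firewall-cmd --reload
```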
+ The `/var/lib/rook/` directory requires at least 10 GB of space available for Ceph monitor metadata.
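One way to verify this before installing is to check the free space on the filesystem backing `/var/lib/rook/` (a sketch; on a fresh node the directory may not exist yet, so the fallback checks its parent):

```bash
# Show available space on the filesystem that backs /var/lib/rook/
df -h /var/lib/rook/ 2>/dev/null || df -h /var/lib
```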

## Block Storage

Rook versions 1.4.3 and later require a dedicated block device attached to each node in the cluster.
@@ -60,29 +70,25 @@ Additionally, `blockDeviceFilter` instructs Rook to use only block devices that
For more information about the available options, see [Advanced Install Options](#advanced-install-options) above.

The Rook add-on waits for the dedicated disk that you attached to your node before continuing with installation.
- If you attached a disk to your node, but the installer is waiting at the Rook add-on installation step, see [OSD pods are not created on my devices](https://rook.io/docs/rook/v1.0/ceph-common-issues.html#osd-pods-are-not-created-on-my-devices) in the Rook documentation for troubleshooting information.
+ If you attached a disk to your node, but the installer is waiting at the Rook add-on installation step, see [OSD pods are not created on my devices](https://rook.io/docs/rook/v1.10/Troubleshooting/ceph-common-issues/#osd-pods-are-not-created-on-my-devices) in the Rook documentation for troubleshooting information.

## Filesystem Storage

By default, for Rook versions earlier than 1.4.3, the cluster uses the filesystem for Rook storage.
However, block storage is recommended for Rook in production clusters.
For more information, see [Block Storage](#block-storage) above.

- When using the filesystem for storage, each node in the cluster has a single OSD backed by a directory in `/opt/replicated/rook`.
- Nodes with a Ceph Monitor also use `/var/lib/rook`.

- Sufficient disk space must be available to `/var/lib/rook` for the Ceph Monitors and other configs. For disk requirements, see [Add-on Directory Disk Space Requirements](/docs/install-with-kurl/system-requirements/#add-on-directory-disk-space-requirements).

- We recommend a separate partition to prevent a disruption in Ceph's operation as a result of `/var` or the root partition running out of space.
+ When using the filesystem for storage, each node in the cluster has a single OSD backed by a directory in `/opt/replicated/rook/`.
+ We recommend a separate disk or partition at `/opt/replicated/rook/` to prevent a disruption in Ceph's operation as a result of the root partition running out of space.

**Note**: All disks used for storage in the cluster should be of similar size.
A cluster with large discrepancies in disk size may fail to replicate data to all available nodes.
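A quick way to compare the disks attached to each node is `lsblk` (a sketch; run it on every node and compare the sizes):

```bash
# List whole block devices with their sizes on this node
lsblk -d -o NAME,SIZE,TYPE
```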

## Shared Filesystem

- The [Ceph filesystem](https://rook.io/docs/rook/v1.4/ceph-filesystem.html) is supported with version 1.4.3+.
+ The [Ceph filesystem](https://rook.io/docs/rook/v1.10/Storage-Configuration/Shared-Filesystem-CephFS/filesystem-storage/) is supported with version 1.4.3+.
This allows the use of PersistentVolumeClaims with access mode `ReadWriteMany`.
- Set the storage class to `rook-cephfs` in the pvc spec to use this feature.
+ Set the storage class to `rook-cephfs` in the PVC spec to use this feature.

```yaml
apiVersion: v1
@@ -98,14 +104,6 @@ spec:
storageClassName: rook-cephfs
```
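As an illustration, a complete claim using this class could be created directly with kubectl (a sketch; the claim name, namespace, and requested size are placeholders):

```bash
# Create a ReadWriteMany claim backed by the rook-cephfs storage class
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data        # placeholder name
  namespace: default       # placeholder namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi        # placeholder size
  storageClassName: rook-cephfs
EOF
```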

- ## System Requirements

- The following additional ports must be open between nodes for multi-node clusters:

- | Protocol | Direction | Port Range | Purpose | Used By |
- | ------- | --------- | ---------- | ----------------------- | ------- |
- | TCP | Inbound | 9090 | CSI RBD Plugin Metrics | All |

## Upgrades

It is now possible to upgrade multiple minor versions of the Rook add-on at once.
@@ -121,10 +119,10 @@ For example:
```
curl https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s rook-upgrade to-version=1.10
```

- Rook upgrades from 1.0.x migrate data off of any hostpath-based OSDs in favor of block device-based OSDs.
+ Rook upgrades from 1.0.x migrate data off of any filesystem-based OSDs in favor of block device-based OSDs.
The upstream Rook project introduced a requirement for block storage in versions 1.3.x and later.

- ## Monitor Rook Ceph
+ ## Monitoring

For Rook version 1.9.12 and later, when you install with both the Rook add-on and the Prometheus add-on, kURL enables Ceph metrics collection and creates a Ceph cluster statistics Grafana dashboard.
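One way to confirm that metrics collection is wired up is to look for the ServiceMonitor objects Rook creates when monitoring is enabled (a sketch; it assumes the default `rook-ceph` namespace and that the Prometheus operator CRDs are installed):

```bash
# List ServiceMonitor objects used to scrape Ceph metrics
kubectl -n rook-ceph get servicemonitors
```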

4 changes: 2 additions & 2 deletions src/markdown-pages/install-with-kurl/managing-nodes.md
@@ -104,7 +104,7 @@ Complete the following prerequisites before you remove one or more nodes from a

* Upgrade Rook Ceph to v1.4 or later.

- The two latest minor releases of Rook Ceph are actively maintained. It is recommended to upgrade to the latest stable release available. For more information, see [Release Cycle](https://rook.io/docs/rook/latest/Getting-Started/release-cycle/) in the Rook Ceph documentation.
+ The two latest minor releases of Rook Ceph are actively maintained. It is recommended to upgrade to the latest stable release available. For more information, see [Release Cycle](https://rook.io/docs/rook/v1.10/Getting-Started/release-cycle/) in the Rook Ceph documentation.

Attempting to remove a node from a cluster that uses a Rook Ceph version earlier than v1.4 can cause Ceph to enter an unhealthy state. For example, see [Rook Ceph v1.0.4 is Unhealthy with Mon Pods Not Rescheduled](#rook-ceph-v104-is-unhealthy-with-mon-pods-not-rescheduled) under _Troubleshoot Node Removal_ below.
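The kURL `rook-upgrade` task shown elsewhere in these docs can perform this upgrade, stepping through intermediate minor versions, for example (a sketch; the target version is a placeholder):

```bash
# Upgrade the Rook add-on in place before removing nodes
curl https://k8s.kurl.sh/latest/tasks.sh | sudo bash -s rook-upgrade to-version=1.10
```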

@@ -115,7 +115,7 @@ Complete the following prerequisites before you remove one or more nodes from a
* (Recommended) Use the `rook-ceph-tools` Pod to access the ceph CLI.
Use the same version of the Rook toolbox as the version of Rook Ceph that is installed in the cluster.
By default, the `rook-ceph-tools` Pod is included on kURL clusters with Rook Ceph v1.4 and later.
- For more information about `rook-ceph-tools` Pods, see [Rook Toolbox](https://rook.io/docs/rook/v1.5/ceph-toolbox.html) in the Rook Ceph documentation.
+ For more information about `rook-ceph-tools` Pods, see [Rook Toolbox](https://rook.io/docs/rook/v1.10/Troubleshooting/ceph-toolbox/) in the Rook Ceph documentation.

* Use `kubectl exec` to enter the `rook-ceph-operator` Pod, where the ceph CLI is available.
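For example, a quick cluster health check from the toolbox might look like this (a sketch; it assumes the default `rook-ceph` namespace and the `rook-ceph-tools` Deployment that kURL creates):

```bash
# Run a one-off Ceph status check from the rook-ceph-tools Pod
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```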


2 comments on commit 5480ff9

@github-actions

@github-actions

🎉 Published on https://kurlsh.netlify.app as production
🚀 Deployed on https://63e4106a2cf8cb27c9bb324c--kurlsh.netlify.app
