Commit 332f804

Merge pull request rook#13355 from rook/mergify/bp/release-1.13/pr-13335

docs: update upgrade docs for v1.13 release (backport rook#13335)

mergify[bot] authored Dec 8, 2023
2 parents 941a623 + abf7495 commit 332f804
Showing 20 changed files with 71 additions and 63 deletions.
2 changes: 1 addition & 1 deletion .commitlintrc.json
@@ -21,7 +21,7 @@
"external",
"file",
"helm",
"k8sutil",
"k8sutil",
"manifest",
"mds",
"mgr",
11 changes: 7 additions & 4 deletions Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -107,7 +107,7 @@ Official releases of Ceph Container images are available from [Docker Hub](https
These are general-purpose Ceph containers with all necessary daemons and dependencies installed.

| TAG | MEANING |
|----------------------|-----------------------------------------------------------|
| -------------------- | --------------------------------------------------------- |
| vRELNUM | Latest release in this series (e.g., *v17* = Quincy) |
| vRELNUM.Y | Latest stable release in this stable series (e.g., v17.2) |
| vRELNUM.Y.Z | A specific release (e.g., v17.2.6) |
@@ -217,7 +217,7 @@ Configure the network that will be enabled for the cluster and services.
If this setting is enabled, CephFS volumes also require setting `CSI_CEPHFS_KERNEL_MOUNT_OPTIONS` to `"ms_mode=secure"` in operator.yaml.
* `compression`:
* `enabled`: Whether to compress the data in transit across the wire. The default is false.
Requires Ceph Quincy (v17) or newer. Also see the kernel requirements above for encryption.
See the kernel requirements above for encryption.
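For orientation, both settings live under `network.connections` in the `CephCluster` spec; a minimal sketch using the same defaults as the example cluster.yaml later in this commit:

```yaml
network:
  connections:
    # in-transit encryption via msgr2 (see the kernel requirements above)
    encryption:
      enabled: false
    # on-the-wire compression
    compression:
      enabled: false
```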

!!! caution
Changing networking configuration after a Ceph cluster has been deployed is NOT
@@ -490,7 +490,7 @@ The following storage selection settings are specific to Ceph and do not apply t
Allowed configurations are:

| block device type | host-based cluster | PVC-based cluster |
|:------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| :---------------- | :------------------------------------------------------------------------------------------------ | :------------------------------------------------------------------------------ |
| disk | | |
| part | `encryptedDevice` must be `false` | `encrypted` must be `false` |
| lvm | `metadataDevice` must be `""`, `osdsPerDevice` must be `1`, and `encryptedDevice` must be `false` | `metadata.name` must not be `metadata` or `wal` and `encrypted` must be `false` |
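The table's options are per-device settings for host-based clusters. A rough sketch of where they appear under `storage` in a `CephCluster` spec (node and device names are placeholders):

```yaml
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: "node-a"                  # placeholder node name
      devices:
        - name: "sdb"                 # raw disk: no restrictions from the table
        - name: "sdc"                 # partition or lvm device
          config:
            metadataDevice: ""        # must stay "" when the device is an lvm volume
            osdsPerDevice: "1"        # must be 1 for lvm devices
            encryptedDevice: "false"  # must be false for part and lvm devices
```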
@@ -949,7 +949,10 @@ set the `allowUninstallWithVolumes` to true under `spec.CleanupPolicy`.
!!! attention
This feature is experimental.

The Ceph config options are applied after the MONs are all in quorum and running. To set Ceph config options, you can add them to your `CephCluster` spec like this:
The Ceph config options are applied after the MONs are all in quorum and running.
To set Ceph config options, you can add them to your `CephCluster` spec as shown below.
See the [Ceph config reference](https://docs.ceph.com/en/latest/rados/configuration/general-config-ref/)
for detailed information about how to configure Ceph.

```yaml
spec:
  cephConfig:
    global:
      osd_pool_default_size: "3"  # illustrative option; values must be quoted strings
```
2 changes: 1 addition & 1 deletion Documentation/CRDs/Object-Storage/ceph-object-store-crd.md
@@ -234,7 +234,7 @@ vault write -f transit/keys/<mybucketkey> exportable=true # transit engine

* TLS authentication with custom certificates between Vault and CephObjectStore RGWs is supported from Ceph v16.2.6 onwards
* `tokenSecretName` can be (and often will be) the same for both kms and s3 configurations.
* `AWS-SSE:S3` requires Ceph Quincy (v17.2.3) and later.
* `AWS-SSE:S3` requires Ceph Quincy v17.2.3 or later.
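For context, the `kms` and `s3` (AWS-SSE:S3) settings referenced above sit under `security` in the CephObjectStore spec. The sketch below is illustrative only: the Vault address and secret names are placeholders, and the field names should be verified against the CRD reference:

```yaml
security:
  kms:
    connectionDetails:
      KMS_PROVIDER: vault
      VAULT_ADDR: https://vault.example:8200   # placeholder address
    tokenSecretName: rgw-vault-token           # hypothetical secret name
  s3:
    connectionDetails:
      KMS_PROVIDER: vault
      VAULT_ADDR: https://vault.example:8200
    tokenSecretName: rgw-vault-token           # may be the same secret as for kms
```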

## Deleting a CephObjectStore

5 changes: 2 additions & 3 deletions Documentation/CRDs/specification.md
@@ -4649,7 +4649,7 @@ bool
<td>
<em>(Optional)</em>
<p>Whether to compress the data in transit across the wire.
The default is not set. Requires Ceph Quincy (v17) or newer.</p>
The default is not set.</p>
</td>
</tr>
</tbody>
@@ -6754,8 +6754,7 @@ bool
</td>
<td>
<em>(Optional)</em>
<p>Send the notifications with the CloudEvents header: <a href="https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md">https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md</a>
Supported for Ceph Quincy (v17) or newer.</p>
<p>Send the notifications with the CloudEvents header: <a href="https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md">https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md</a></p>
</td>
</tr>
</tbody>
8 changes: 4 additions & 4 deletions Documentation/Upgrade/ceph-upgrade.md
@@ -63,7 +63,7 @@ Official Ceph container images can be found on [Quay](https://quay.io/repository

These images are tagged in a few ways:

* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v17.2.6-20230410`).
* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v17.2.6-20231027`).
These tags are recommended for production clusters, as there is no possibility for the cluster to
be heterogeneous with respect to the version of Ceph running in containers.
* Ceph major version tags (e.g., `v17`) are useful for development and test clusters so that the
@@ -80,7 +80,7 @@ CephCluster CRD (`spec.cephVersion.image`).

```console
ROOK_CLUSTER_NAMESPACE=rook-ceph
NEW_CEPH_IMAGE='quay.io/ceph/ceph:v17.2.6-20230410'
NEW_CEPH_IMAGE='quay.io/ceph/ceph:v17.2.6-20231027'
kubectl -n $ROOK_CLUSTER_NAMESPACE patch CephCluster $ROOK_CLUSTER_NAMESPACE --type=merge -p "{\"spec\": {\"cephVersion\": {\"image\": \"$NEW_CEPH_IMAGE\"}}}"
```

@@ -92,7 +92,7 @@ employed by the new Rook operator release. Employing an outdated Ceph version wi
in unexpected behaviour.

```console
kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v17.2.6-20230410
kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v17.2.6-20231027
```

#### **3. Wait for the pod updates**
@@ -109,7 +109,7 @@ Confirm the upgrade is completed when the versions are all on the desired Ceph v
```console
kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
ceph-version=15.2.13-0
ceph-version=v16.2.14-0
ceph-version=v17.2.6-0
This cluster is finished:
ceph-version=v17.2.6-0
```
58 changes: 31 additions & 27 deletions Documentation/Upgrade/rook-upgrade.md
@@ -14,14 +14,15 @@ We welcome feedback and opening issues!

## Supported Versions

This guide is for upgrading from **Rook v1.11.x to Rook v1.12.x**.
This guide is for upgrading from **Rook v1.12.x to Rook v1.13.x**.

Please refer to the upgrade guides from previous releases for supported upgrade paths.
Rook upgrades are only supported between official releases.

For a guide to upgrade previous versions of Rook, please refer to the version of documentation for
those releases.

* [Upgrade 1.11 to 1.12](https://rook.io/docs/rook/v1.12/Upgrade/rook-upgrade/)
* [Upgrade 1.10 to 1.11](https://rook.io/docs/rook/v1.11/Upgrade/rook-upgrade/)
* [Upgrade 1.9 to 1.10](https://rook.io/docs/rook/v1.10/Upgrade/rook-upgrade/)
* [Upgrade 1.8 to 1.9](https://rook.io/docs/rook/v1.9/Upgrade/rook-upgrade/)
@@ -49,16 +50,19 @@ those releases.
## Breaking changes in v1.13

* The minimum supported version of Kubernetes is v1.23.
* Support for the admission controller/webhooks has been removed. If the admission controller/webhooks are currently enabled, disable them by setting
  `ROOK_DISABLE_ADMISSION_CONTROLLER: "true"` in operator.yaml before upgrading to Rook v1.13 (see the example below). CRD validation is now enabled with [Common Expression Language](https://kubernetes.io/docs/reference/using-api/cel/), which requires Kubernetes version 1.25 or higher.
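As a reference point only, here is a minimal sketch of how that variable is typically set on the operator Deployment in operator.yaml; the surrounding structure is abbreviated and assumed, so adjust it to match the manifest actually in use:

```yaml
# operator.yaml excerpt (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
spec:
  template:
    spec:
      containers:
        - name: rook-ceph-operator
          env:
            # "true" disables the admission controller/webhooks before the v1.13 upgrade
            - name: ROOK_DISABLE_ADMISSION_CONTROLLER
              value: "true"
```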

## Breaking changes in v1.12

* The minimum supported version of Kubernetes is v1.22.
Upgrade to Kubernetes v1.23 or higher before upgrading Rook.
* The minimum supported version of Ceph is v17.2.0.
If a lower version is currently deployed, [Upgrade Ceph](./ceph-upgrade.md) before upgrading Rook.
* CephCSI CephFS driver introduced a breaking change in v3.9.0. If any existing CephFS storageclass in
the cluster has `MountOptions` parameter set, follow the steps mentioned in the
[CephCSI upgrade guide](https://github.com/ceph/ceph-csi/blob/v3.9.0/docs/ceph-csi-upgrade.md/#upgrading-cephfs)
to ensure a smooth upgrade.
This became the default CSI version in Rook v1.12.1, and may have already been resolved.
* Support for the admission controller has been removed. CRD validation is now enabled with
[Validating Admission Policies](https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/).
Validating Admission Policy rules are ignored in Kubernetes v1.24 and lower.
If the admission controller is enabled, it is advised to upgrade to Kubernetes v1.25 or higher before upgrading Rook.
For more info, see https://github.com/rook/rook/pull/11532.

## Considerations

@@ -74,23 +78,23 @@ With this upgrade guide, there are a few notes to consider:

Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to
another are as simple as updating the common resources and the image of the Rook operator. For
example, when Rook v1.12.1 is released, the process of updating from v1.12.0 is as simple as running
example, when Rook v1.13.1 is released, the process of updating from v1.13.0 is as simple as running
the following:

```console
git clone --single-branch --depth=1 --branch v1.12.1 https://github.com/rook/rook.git
git clone --single-branch --depth=1 --branch v1.13.1 https://github.com/rook/rook.git
cd rook/deploy/examples
```

If the Rook Operator or CephCluster are deployed into a different namespace than
`rook-ceph`, see the [Update common resources and CRDs](#1-update-common-resources-and-crds)
section for instructions on how to change the default namespaces in `common.yaml`.
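If a custom namespace is used, updating `common.yaml` typically comes down to a search-and-replace before applying it. A sketch along those lines, assuming the example manifests carry `# namespace:operator` and `# namespace:cluster` markers (namespace names below are placeholders):

```console
export ROOK_OPERATOR_NAMESPACE=my-rook-operator
export ROOK_CLUSTER_NAMESPACE=my-rook-cluster
sed -i.bak \
  -e "s/\(.*\):.*# namespace:operator/\1: $ROOK_OPERATOR_NAMESPACE # namespace:operator/g" \
  -e "s/\(.*\):.*# namespace:cluster/\1: $ROOK_CLUSTER_NAMESPACE # namespace:cluster/g" \
  common.yaml
```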

Then, apply the latest changes from v1.12, and update the Rook Operator image.
Then, apply the latest changes from v1.13, and update the Rook Operator image.

```console
kubectl apply -f common.yaml -f crds.yaml
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.12.1
kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.13.1
```

As exemplified above, it is a good practice to update Rook common resources from the example
@@ -122,9 +126,9 @@ In order to successfully upgrade a Rook cluster, the following prerequisites mus

## Rook Operator Upgrade

The examples given in this guide upgrade a live Rook cluster running `v1.11.7` to
the version `v1.12.0`. This upgrade should work from any official patch release of Rook v1.11 to any
official patch release of v1.12.
The examples given in this guide upgrade a live Rook cluster running `v1.12.9` to
the version `v1.13.0`. This upgrade should work from any official patch release of Rook v1.12 to any
official patch release of v1.13.

Let's get started!

@@ -185,7 +189,7 @@ kubectl apply -f deploy/examples/monitoring/rbac.yaml
!!! hint
The operator is automatically updated when using Helm charts.

The largest portion of the upgrade is triggered when the operator's image is updated to `v1.12.x`.
The largest portion of the upgrade is triggered when the operator's image is updated to `v1.13.x`.
When the operator is updated, it will proceed to update all of the Ceph daemons.

```console
@@ -219,18 +223,18 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
```

As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
availability and `rook-version=v1.12.0`, the Ceph cluster's core components are fully updated.
availability and `rook-version=v1.13.0`, the Ceph cluster's core components are fully updated.

```console
Every 2.0s: kubectl -n rook-ceph get deployment -o j...

rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.12.0
rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.12.0
rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.12.0
rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.12.0
rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.12.0
rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.11.7
rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.11.7
rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.13.0
rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.13.0
rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.13.0
rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.13.0
rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.13.0
rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.12.9
rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.12.9
```

An easy check to see if the upgrade is totally finished is to check that there is only one
@@ -239,14 +243,14 @@ An easy check to see if the upgrade is totally finished is to check that there i
```console
# kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
rook-version=v1.11.7
rook-version=v1.12.0
rook-version=v1.12.9
rook-version=v1.13.0
This cluster is finished:
rook-version=v1.12.0
rook-version=v1.13.0
```

### **5. Verify the updated cluster**

At this point, the Rook operator should be running version `rook/ceph:v1.12.0`.
At this point, the Rook operator should be running version `rook/ceph:v1.13.0`.

Verify the CephCluster health using the [health verification doc](health-verification.md).
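As a quick spot check on top of the health verification doc, the toolbox pod (if the example toolbox deployment is running) can report overall cluster status:

```console
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```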
11 changes: 7 additions & 4 deletions PendingReleaseNotes.md
@@ -2,11 +2,14 @@

## Breaking Changes

- Removed official support for Kubernetes v1.22
- Removed support for Ceph Pacific (v16)
- Support for the admission controller/webhooks has been removed. If admission controller/webhooks is enabled, disable by changing
`ROOK_DISABLE_ADMISSION_CONTROLLER: "true"` in operator.yaml before upgrading to rook v1.13. CRD validation is now enabled with [Common Expression Language](https://kubernetes.io/docs/reference/using-api/cel/). This requires Kubernetes version 1.25 or higher.
- Support for the admission controller has been removed. See the
[Rook upgrade guide](./Documentation/Upgrade/rook-upgrade.md#breaking-changes-in-v113) for more details.

## Features

- Added `cephConfig:` to CephCluster to allow setting Ceph config options in the Ceph MON config store via the CRD
- CephCSI v3.10.0 is now the default CSI driver version. Refer to [Ceph-CSI v3.10.0 Release Notes](https://github.com/ceph/ceph-csi/releases/tag/v3.10.0) for more details.
- Added official support for Kubernetes v1.28
- Added experimental `cephConfig` to CephCluster to allow setting Ceph config options in the Ceph MON config store via the CRD
- CephCSI v3.10.0 is now the default CSI driver version.
Refer to [Ceph-CSI v3.10.0 Release Notes](https://github.com/ceph/ceph-csi/releases/tag/v3.10.0) for more details.
4 changes: 2 additions & 2 deletions deploy/charts/rook-ceph/templates/resources.yaml
@@ -676,7 +676,7 @@ spec:
description: Indicate whether the server certificate is validated by the client or not
type: boolean
sendCloudEvents:
description: 'Send the notifications with the CloudEvents header: https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md Supported for Ceph Quincy (v17) or newer.'
description: 'Send the notifications with the CloudEvents header: https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md'
type: boolean
uri:
description: The URI of the HTTP endpoint to push notification to
@@ -2207,7 +2207,7 @@ spec:
nullable: true
properties:
enabled:
description: Whether to compress the data in transit across the wire. The default is not set. Requires Ceph Quincy (v17) or newer.
description: Whether to compress the data in transit across the wire. The default is not set.
type: boolean
type: object
encryption:
4 changes: 2 additions & 2 deletions deploy/examples/bucket-topic.yaml
@@ -11,9 +11,9 @@ spec:
endpoint:
http:
uri: http://my-notification-endpoint:8080
# uri: https://my-notification-endpoint:8443
# uri: https://my-notification-endpoint:8443
disableVerifySSL: true
sendCloudEvents: false # supported only in Ceph Quincy (v17) or newer
sendCloudEvents: false
# amqp:
# uri: amqp://my-rabbitmq-service:5672/vhost1
# uri: amqps://my-rabbitmq-service:5671/vhost1
2 changes: 1 addition & 1 deletion deploy/examples/cluster-external-management.yaml
@@ -19,4 +19,4 @@ spec:
dataDirHostPath: /var/lib/rook
# providing an image is required, if you want to create other CRs (rgw, mds, nfs)
cephVersion:
image: quay.io/ceph/ceph:v17.2.6 # Should match external cluster version
image: quay.io/ceph/ceph:v18.2.0 # Should match external cluster version
2 changes: 1 addition & 1 deletion deploy/examples/cluster-multus-test.yaml
@@ -15,7 +15,7 @@ metadata:
spec:
dataDirHostPath: /var/lib/rook
cephVersion:
image: quay.io/ceph/ceph:v17
image: quay.io/ceph/ceph:v18
allowUnsupported: true
mon:
count: 1
2 changes: 1 addition & 1 deletion deploy/examples/cluster-on-local-pvc.yaml
@@ -173,7 +173,7 @@ spec:
requests:
storage: 10Gi
cephVersion:
image: quay.io/ceph/ceph:v17.2.6
image: quay.io/ceph/ceph:v18.2.0
allowUnsupported: false
skipUpgradeChecks: false
continueUpgradeAfterChecksEvenIfNotHealthy: false
2 changes: 1 addition & 1 deletion deploy/examples/cluster-on-pvc.yaml
@@ -33,7 +33,7 @@ spec:
requests:
storage: 10Gi
cephVersion:
image: quay.io/ceph/ceph:v17.2.6
image: quay.io/ceph/ceph:v18.2.0
allowUnsupported: false
skipUpgradeChecks: false
continueUpgradeAfterChecksEvenIfNotHealthy: false
2 changes: 1 addition & 1 deletion deploy/examples/cluster-stretched-aws.yaml
@@ -44,7 +44,7 @@ spec:
mgr:
count: 2
cephVersion:
image: quay.io/ceph/ceph:v17.2.6
image: quay.io/ceph/ceph:v18.2.0
allowUnsupported: true
skipUpgradeChecks: false
continueUpgradeAfterChecksEvenIfNotHealthy: false
2 changes: 1 addition & 1 deletion deploy/examples/cluster-stretched.yaml
@@ -38,7 +38,7 @@ spec:
mgr:
count: 2
cephVersion:
image: quay.io/ceph/ceph:v17.2.6
image: quay.io/ceph/ceph:v18.2.0
allowUnsupported: true
skipUpgradeChecks: false
continueUpgradeAfterChecksEvenIfNotHealthy: false
6 changes: 3 additions & 3 deletions deploy/examples/cluster.yaml
@@ -19,9 +19,9 @@ spec:
# v17 is Quincy, v18 is Reef.
# RECOMMENDATION: In production, use a specific version tag instead of the general v17 flag, which pulls the latest release and could result in different
# versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
# If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v17.2.6-20230410
# If you want to be more precise, you can always use a timestamp tag such as quay.io/ceph/ceph:v17.2.6-20231027
# This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities
image: quay.io/ceph/ceph:v17.2.6
image: quay.io/ceph/ceph:v18.2.0
# Whether to allow unsupported versions of Ceph. Currently `quincy` and `reef` are supported.
# Future versions such as `squid` (v19) would require this to be set to `true`.
# Do not set to true in production.
@@ -92,7 +92,7 @@ spec:
encryption:
enabled: false
# Whether to compress the data in transit across the wire. The default is false.
# Requires Ceph Quincy (v17) or newer. Also see the kernel requirements above for encryption.
# See the kernel requirements above for encryption.
compression:
enabled: false
# Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled