Merge pull request #684 from red-hat-storage/sync_us--master
Syncing latest changes from upstream master for rook
travisn authored Jul 25, 2024
2 parents f470cff + f0a5538 commit c4f7e0c
Showing 66 changed files with 949 additions and 719 deletions.
6 changes: 6 additions & 0 deletions .github/workflows/auto-assign.yaml
@@ -3,8 +3,14 @@ on:
issue_comment:
types: [created, edited]

permissions:
contents: read

jobs:
assign:
permissions:
# write permissions are needed to assign the issue.
contents: write
name: Run self assign job
runs-on: ubuntu-latest
steps:
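Most of the workflow hunks in this commit apply the same least-privilege pattern: a workflow-level `permissions` block drops the default `GITHUB_TOKEN` to read-only, and only a job that needs more (like the self-assign job above) declares a job-level override. As a hedged illustration only — this is not a file from the PR — the pattern looks like this:

```yaml
# Illustrative sketch, not a file in this PR. The workflow-level block makes the
# GITHUB_TOKEN read-only by default; a job-level block overrides it for one job.
name: least-privilege-example
on:
  issue_comment:
    types: [created]

permissions:
  contents: read # read-only default for every job in this workflow

jobs:
  assign:
    permissions:
      contents: write # job-level override, mirroring the auto-assign job above
    runs-on: ubuntu-latest
    steps:
      - run: echo "only this job runs with write access to repository contents"
```

The remaining workflow files below add only the workflow-level `contents: read` block.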
3 changes: 3 additions & 0 deletions .github/workflows/canary-integration-suite.yml
@@ -24,6 +24,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
canary-tests:
uses: ./.github/workflows/canary-integration-test.yml
42 changes: 0 additions & 42 deletions .github/workflows/create-tag.yaml

This file was deleted.

3 changes: 3 additions & 0 deletions .github/workflows/daily-nightly-jobs.yml
@@ -9,6 +9,9 @@ defaults:
# reference: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#using-a-specific-shell
shell: bash --noprofile --norc -eo pipefail -x {0}

permissions:
contents: read

jobs:
canary-arm64:
runs-on: [self-hosted, ubuntu-20.04-arm64, ARM64]
3 changes: 3 additions & 0 deletions .github/workflows/integration-test-helm-suite.yaml
@@ -18,6 +18,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
TestCephHelmSuite:
if: ${{ github.event_name == 'pull_request' && github.ref != 'refs/heads/master' && !contains(github.event.pull_request.labels.*.name, 'skip-ci') }}
3 changes: 3 additions & 0 deletions .github/workflows/integration-test-mgr-suite.yaml
@@ -17,6 +17,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
TestCephMgrSuite:
runs-on: ubuntu-22.04
3 changes: 3 additions & 0 deletions .github/workflows/integration-test-multi-cluster-suite.yaml
@@ -18,6 +18,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
TestCephMultiClusterDeploySuite:
if: ${{ github.event_name == 'pull_request' && github.ref != 'refs/heads/master' && !contains(github.event.pull_request.labels.*.name, 'skip-ci') }}
3 changes: 3 additions & 0 deletions .github/workflows/integration-test-object-suite.yaml
@@ -18,6 +18,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
TestCephObjectSuite:
if: ${{ github.event_name == 'pull_request' && github.ref != 'refs/heads/master' && !contains(github.event.pull_request.labels.*.name, 'skip-ci') }}
3 changes: 3 additions & 0 deletions .github/workflows/integration-test-smoke-suite.yaml
@@ -18,6 +18,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
TestCephSmokeSuite:
if: ${{ github.event_name == 'pull_request' && github.ref != 'refs/heads/master' && !contains(github.event.pull_request.labels.*.name, 'skip-ci') }}
3 changes: 3 additions & 0 deletions .github/workflows/integration-test-upgrade-suite.yaml
@@ -18,6 +18,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
TestCephUpgradeSuite:
if: ${{ github.event_name == 'pull_request' && github.ref != 'refs/heads/master' && !contains(github.event.pull_request.labels.*.name, 'skip-ci') }}
3 changes: 3 additions & 0 deletions .github/workflows/integration-tests-on-release.yaml
@@ -12,6 +12,9 @@ defaults:
# reference: https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#using-a-specific-shell
shell: bash --noprofile --norc -eo pipefail -x {0}

permissions:
contents: read

jobs:
TestCephHelmSuite:
runs-on: ubuntu-22.04
3 changes: 3 additions & 0 deletions .github/workflows/multus.yaml
@@ -22,6 +22,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event_name == 'pull_request' && github.head_ref || github.sha }}
cancel-in-progress: true

permissions:
contents: read

jobs:
test-validation-tool:
runs-on: ubuntu-latest
16 changes: 9 additions & 7 deletions Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -26,7 +26,7 @@ Settings can be specified at the global level to apply to the cluster as a whole
* `external`:
* `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled **all** the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See [external cluster configuration](external-cluster/external-cluster.md). If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs.
* `cephVersion`: The version information for launching the ceph daemons.
* `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.2`. For more details read the [container images section](#ceph-container-images).
* `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.4`. For more details read the [container images section](#ceph-container-images).
For the latest ceph images, see the [Ceph DockerHub](https://hub.docker.com/r/ceph/ceph/tags/).
To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version.
Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v17` will be updated each time a new Quincy build is released.
@@ -80,6 +80,8 @@ For more details on the mons and when to choose a number other than `3`, see the
`useAllNodes` must be set to `false` to use specific nodes and their config.
See [node settings](#node-settings) below.
* `config`: Config settings applied to all OSDs on the node unless overridden by `devices`. See the [config settings](#osd-configuration-settings) below.
* `allowDeviceClassUpdate`: Whether to allow changing the device class of an OSD after it is created. The default is false
to prevent unintentional data movement or CRUSH changes if the device class is changed accidentally.
* [storage selection settings](#storage-selection-settings)
* [Storage Class Device Sets](#storage-class-device-sets)
* `onlyApplyOSDPlacement`: Whether the placement specific for OSDs is merged with the `all` placement. If `false`, the OSD placement will be merged with the `all` placement. If `true`, only the OSD placement will be applied and the `all` placement will be ignored. The placement for OSDs is computed from several different places depending on the type of OSD:
@@ -114,8 +116,8 @@ These are general purpose Ceph container with all necessary daemons and dependen
| -------------------- | --------------------------------------------------------- |
| vRELNUM | Latest release in this series (e.g., **v17** = Quincy) |
| vRELNUM.Y | Latest stable release in this stable series (e.g., v17.2) |
| vRELNUM.Y.Z | A specific release (e.g., v18.2.2) |
| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v18.2.2-20240311) |
| vRELNUM.Y.Z | A specific release (e.g., v18.2.4) |
| vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v18.2.4-20240724) |

A specific tag will contain a specific release of Ceph as well as security fixes from the Operating System.

@@ -340,7 +342,7 @@ The following storage selection settings are specific to Ceph and do not apply t
* `metadataDevice`: Name of a device, [partition](#limitations-of-metadata-device) or lvm to use for the metadata of OSDs on each node. Performance can be improved by using a low latency device (such as SSD or NVMe) as the metadata device, while other spinning platter (HDD) devices on a node are used to store data. Provisioning will fail if the user specifies a `metadataDevice` but that device is not used as a metadata device by Ceph. Notably, `ceph-volume` will not use a device of the same device class (HDD, SSD, NVMe) as OSD devices for metadata, resulting in this failure.
* `databaseSizeMB`: The size in MB of a bluestore database. Include quotes around the size.
* `walSizeMB`: The size in MB of a bluestore write ahead log (WAL). Include quotes around the size.
* `deviceClass`: The [CRUSH device class](https://ceph.io/community/new-luminous-crush-device-classes/) to use for this selection of storage devices. (By default, if a device's class has not already been set, OSDs will automatically set a device's class to either `hdd`, `ssd`, or `nvme` based on the hardware properties exposed by the Linux kernel.) These storage classes can then be used to select the devices backing a storage pool by specifying them as the value of [the pool spec's `deviceClass` field](../Block-Storage/ceph-block-pool-crd.md#spec).
* `deviceClass`: The [CRUSH device class](https://ceph.io/community/new-luminous-crush-device-classes/) to use for this selection of storage devices. (By default, if a device's class has not already been set, OSDs will automatically set a device's class to either `hdd`, `ssd`, or `nvme` based on the hardware properties exposed by the Linux kernel.) These storage classes can then be used to select the devices backing a storage pool by specifying them as the value of [the pool spec's `deviceClass` field](../Block-Storage/ceph-block-pool-crd.md#spec). If updating the device class of an OSD after the OSD is already created, `allowDeviceClassUpdate: true` must be set. Otherwise updates to this `deviceClass` will be ignored.
* `initialWeight`: The initial OSD weight in TiB units. By default, this value is derived from OSD's capacity.
* `primaryAffinity`: The [primary-affinity](https://docs.ceph.com/en/latest/rados/operations/crush-map/#primary-affinity) value of an OSD, within range `[0, 1]` (default: `1`).
* `osdsPerDevice`: The number of OSDs to create on each device. High performance devices such as NVMe can handle running multiple OSDs. If desired, this can be overridden for each node and each device.
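The new `allowDeviceClassUpdate` field added in the earlier hunk works together with the `deviceClass` setting described here: unless the flag is set, later edits to `deviceClass` are ignored for OSDs that already exist. The following is a hedged sketch, not part of this diff, of how the two might be combined in a `CephCluster` spec; the exact field placement is an assumption based on the documentation above and the `storage.allowDeviceClassUpdate` schema entry added at the end of this commit.

```yaml
# Hedged sketch, not part of this diff: opting in to device class updates and then
# steering the selected OSDs to a new CRUSH class. Field placement is an assumption
# based on the docs above and the CRD schema change in this commit.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.4
  dataDirHostPath: /var/lib/rook
  storage:
    useAllNodes: true
    useAllDevices: true
    allowDeviceClassUpdate: true # without this opt-in, edits to deviceClass are ignored
    config:
      deviceClass: ssd # hypothetical CRUSH class to apply to the selected devices
```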
@@ -420,7 +422,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
@@ -526,7 +528,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
@@ -654,7 +656,7 @@ kubectl -n rook-ceph get CephCluster -o yaml
deviceClasses:
- name: hdd
version:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
version: 16.2.6-0
conditions:
- lastHeartbeatTime: "2021-03-02T21:22:11Z"
6 changes: 3 additions & 3 deletions Documentation/CRDs/Cluster/host-cluster.md
@@ -22,7 +22,7 @@ metadata:
spec:
cephVersion:
# see the "Cluster Settings" section below for more details on which image of ceph to run
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
@@ -49,7 +49,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
@@ -101,7 +101,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
6 changes: 3 additions & 3 deletions Documentation/CRDs/Cluster/pvc-cluster.md
@@ -18,7 +18,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
@@ -72,7 +72,7 @@ spec:
requests:
storage: 10Gi
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
allowUnsupported: false
dashboard:
enabled: true
@@ -128,7 +128,7 @@ metadata:
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
dataDirHostPath: /var/lib/rook
mon:
count: 3
2 changes: 1 addition & 1 deletion Documentation/CRDs/Cluster/stretch-cluster.md
@@ -34,7 +34,7 @@ spec:
- name: b
- name: c
cephVersion:
image: quay.io/ceph/ceph:v18.2.2
image: quay.io/ceph/ceph:v18.2.4
allowUnsupported: true
# Either storageClassDeviceSets or the storage section can be specified for creating OSDs.
# This example uses all devices for simplicity.
14 changes: 13 additions & 1 deletion Documentation/CRDs/specification.md
@@ -10255,7 +10255,7 @@ the triple <key,value,effect> using the matching operator <operator></p>
</td>
<td>
<em>(Optional)</em>
<p>TopologySpreadConstraint specifies how to spread matching pods among the given topology</p>
<p>TopologySpreadConstraints specifies how to spread matching pods among the given topology</p>
</td>
</tr>
</tbody>
@@ -12218,6 +12218,18 @@ float64
<p>BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90.</p>
</td>
</tr>
<tr>
<td>
<code>allowDeviceClassUpdate</code><br/>
<em>
bool
</em>
</td>
<td>
<em>(Optional)</em>
<p>Whether to allow updating the device class after the OSD is initially provisioned</p>
</td>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.StoreType">StoreType
10 changes: 5 additions & 5 deletions Documentation/Upgrade/ceph-upgrade.md
@@ -50,7 +50,7 @@ Official Ceph container images can be found on [Quay](https://quay.io/repository

These images are tagged in a few ways:

* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v18.2.2-20240311`).
* The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v18.2.4-20240724`).
These tags are recommended for production clusters, as there is no possibility for the cluster to
be heterogeneous with respect to the version of Ceph running in containers.
* Ceph major version tags (e.g., `v18`) are useful for development and test clusters so that the
@@ -67,7 +67,7 @@ CephCluster CRD (`spec.cephVersion.image`).

```console
ROOK_CLUSTER_NAMESPACE=rook-ceph
NEW_CEPH_IMAGE='quay.io/ceph/ceph:v18.2.2-20240311'
NEW_CEPH_IMAGE='quay.io/ceph/ceph:v18.2.4-20240724'
kubectl -n $ROOK_CLUSTER_NAMESPACE patch CephCluster $ROOK_CLUSTER_NAMESPACE --type=merge -p "{\"spec\": {\"cephVersion\": {\"image\": \"$NEW_CEPH_IMAGE\"}}}"
```
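As a hedged aside, not part of the upstream page, the patched value can be read back from the CR to confirm the change was applied before watching the daemons restart:

```console
kubectl -n $ROOK_CLUSTER_NAMESPACE get cephcluster $ROOK_CLUSTER_NAMESPACE -o jsonpath='{.spec.cephVersion.image}'
```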

@@ -79,7 +79,7 @@ employed by the new Rook operator release. Employing an outdated Ceph version wi
in unexpected behaviour.

```console
kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v18.2.2-20240311
kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v18.2.4-20240724
```

#### **3. Wait for the pod updates**
@@ -97,9 +97,9 @@ Confirm the upgrade is completed when the versions are all on the desired Ceph v
kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"ceph-version="}{.metadata.labels.ceph-version}{"\n"}{end}' | sort | uniq
This cluster is not yet finished:
ceph-version=v17.2.7-0
ceph-version=v18.2.2-0
ceph-version=v18.2.4-0
This cluster is finished:
ceph-version=v18.2.2-0
ceph-version=v18.2.4-0
```

#### **4. Verify cluster health**
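The body of this step is collapsed in this view. As a hedged example of the kind of check it describes — not the upstream wording — overall health can be queried through the toolbox deployment updated in step 2:

```console
kubectl -n $ROOK_CLUSTER_NAMESPACE exec deploy/rook-ceph-tools -- ceph status
```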
1 change: 1 addition & 0 deletions PendingReleaseNotes.md
@@ -9,3 +9,4 @@ please update the `BucketClass` and `BucketAccessClass` for resolving refer [her
## Features

- Added support for Ceph Squid (v19)
- Allow updating the device class of OSDs, if `allowDeviceClassUpdate: true` is set
2 changes: 2 additions & 0 deletions build/csv/ceph/ceph.rook.io_cephclusters.yaml
@@ -1551,6 +1551,8 @@ spec:
storage:
nullable: true
properties:
allowDeviceClassUpdate:
type: boolean
backfillFullRatio:
maximum: 1
minimum: 0
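Once a build containing this schema change is installed, the new field should be visible directly from the CRD. A hedged verification example, assuming the updated CRD has already been applied to the cluster:

```console
kubectl explain cephcluster.spec.storage.allowDeviceClassUpdate
```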