diff --git a/Documentation/CRDs/Cluster/ceph-cluster-crd.md b/Documentation/CRDs/Cluster/ceph-cluster-crd.md index f4b067195294..af35aa3eccdf 100755 --- a/Documentation/CRDs/Cluster/ceph-cluster-crd.md +++ b/Documentation/CRDs/Cluster/ceph-cluster-crd.md @@ -82,6 +82,9 @@ For more details on the mons and when to choose a number other than `3`, see the * `config`: Config settings applied to all OSDs on the node unless overridden by `devices`. See the [config settings](#osd-configuration-settings) below. * `allowDeviceClassUpdate`: Whether to allow changing the device class of an OSD after it is created. The default is false to prevent unintentional data movement or CRUSH changes if the device class is changed accidentally. + * `allowOsdCrushWeightUpdate`: Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. + This allows cluster data to be rebalanced to make the most effective use of the new OSD space. + The default is false since data rebalancing can cause temporary cluster slowdown. * [storage selection settings](#storage-selection-settings) * [Storage Class Device Sets](#storage-class-device-sets) * `onlyApplyOSDPlacement`: Whether the placement specific for OSDs is merged with the `all` placement. If `false`, the OSD placement will be merged with the `all` placement. If `true`, the OSD placement will be applied and the `all` placement will be ignored. The placement for OSDs is computed from several different places depending on the type of OSD: @@ -322,7 +325,7 @@ The following are the settings for Storage Class Device Sets which can be config * `preparePlacement`: The placement criteria for the preparation of the OSD devices. Creating OSDs is a two-step process and the prepare job may require different placement than the OSD daemons. If the `preparePlacement` is not specified, the `placement` will instead be applied for consistent placement for the OSD prepare jobs and OSD deployments. 
The `preparePlacement` is only useful for `portable` OSDs in the device sets. OSDs that are not portable will be tied to the host where the OSD prepare job initially runs. * For example, provisioning may require topology spread constraints across zones, but the OSD daemons may require constraints across hosts within the zones. * `portable`: If `true`, the OSDs will be allowed to move between nodes during failover. This requires a storage class that supports portability (e.g. `aws-ebs`, but not the local storage provisioner). If `false`, the OSDs will be assigned to a node permanently. Rook will configure Ceph's CRUSH map to support the portability. -* `tuneDeviceClass`: For example, Ceph cannot detect AWS volumes as HDDs from the storage class "gp2", so you can improve Ceph performance by setting this to true. +* `tuneDeviceClass`: For example, Ceph cannot detect AWS volumes as HDDs from the storage class "gp2-csi", so you can improve Ceph performance by setting this to true. * `tuneFastDeviceClass`: For example, Ceph cannot detect Azure disks as SSDs from the storage class "managed-premium", so you can improve Ceph performance by setting this to true. * `volumeClaimTemplates`: A list of PVC templates to use for provisioning the underlying storage devices. * `metadata.name`: "data", "metadata", or "wal". If a single template is provided, the name must be "data". If the name is "metadata" or "wal", the devices are used to store the Ceph metadata or WAL respectively. In both cases, the devices must be raw devices or LVM logical volumes. diff --git a/Documentation/CRDs/Cluster/pvc-cluster.md b/Documentation/CRDs/Cluster/pvc-cluster.md index 231ff7dcaa11..fc3efe4322e5 100644 --- a/Documentation/CRDs/Cluster/pvc-cluster.md +++ b/Documentation/CRDs/Cluster/pvc-cluster.md @@ -8,7 +8,7 @@ in clusters where a local PV provisioner is available. ## AWS Storage Example -In this example, the mon and OSD volumes are provisioned from the AWS `gp2` storage class. 
This storage class can be replaced by any storage class that provides `file` mode (for mons) and `block` mode (for OSDs). +In this example, the mon and OSD volumes are provisioned from the AWS `gp2-csi` storage class. This storage class can be replaced by any storage class that provides `file` mode (for mons) and `block` mode (for OSDs). ```yaml apiVersion: ceph.rook.io/v1 @@ -25,7 +25,7 @@ spec: allowMultiplePerNode: false volumeClaimTemplate: spec: - storageClassName: gp2 + storageClassName: gp2-csi resources: requests: storage: 10Gi @@ -42,8 +42,8 @@ spec: resources: requests: storage: 10Gi - # IMPORTANT: Change the storage class depending on your environment (e.g. local-storage, gp2) - storageClassName: gp2 + # IMPORTANT: Change the storage class depending on your environment (e.g. local-storage, gp2-csi) + storageClassName: gp2-csi volumeMode: Block accessModes: - ReadWriteOnce diff --git a/Documentation/CRDs/specification.md b/Documentation/CRDs/specification.md index fd4346f7344d..301b3c969429 100644 --- a/Documentation/CRDs/specification.md +++ b/Documentation/CRDs/specification.md @@ -12603,6 +12603,20 @@ bool

Whether to allow updating the device class after the OSD is initially provisioned

+ + +allowOsdCrushWeightUpdate
+ +bool + + + +(Optional) +

Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. +This allows cluster data to be rebalanced to make the most effective use of the new OSD space. +The default is false since data rebalancing can cause temporary cluster slowdown.

+ +

StoreType @@ -13007,7 +13021,7 @@ If the resource referred to by volumeAttributesClass does not exist, this PersistentVolumeClaim will be set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ -(Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled.

+(Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default).

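For context on the `allowOsdCrushWeightUpdate` field this patch adds, the setting lives under the cluster's `storage` section per the CRD schema changes in this diff. A minimal illustrative sketch (cluster name and namespace are placeholders, not part of the patch):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    # Resize the OSD CRUSH weight when an OSD PVC is expanded,
    # so new capacity is rebalanced into the cluster.
    # Defaults to false, since rebalancing can temporarily slow the cluster.
    allowOsdCrushWeightUpdate: true
```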
diff --git a/Documentation/Contributing/development-flow.md b/Documentation/Contributing/development-flow.md index b3000a03a4e5..9b62dc9adbf5 100644 --- a/Documentation/Contributing/development-flow.md +++ b/Documentation/Contributing/development-flow.md @@ -3,7 +3,7 @@ title: Development Flow --- Thank you for your time and effort to help us improve Rook! Here are a few steps to get started. If you have any questions, -don't hesitate to reach out to us on our [Slack](https://Rook-io.slack.com) dev channel. +don't hesitate to reach out to us on our [Slack](https://Rook-io.slack.com) dev channel. Sign up for the Rook Slack [here](https://slack.rook.io). ## Prerequisites diff --git a/Documentation/Contributing/rook-test-framework.md b/Documentation/Contributing/rook-test-framework.md index fbb21250312d..bc0d5ace5fb5 100644 --- a/Documentation/Contributing/rook-test-framework.md +++ b/Documentation/Contributing/rook-test-framework.md @@ -95,7 +95,7 @@ go test -v -timeout 1800s -run CephSmokeSuite github.com/rook/rook/tests/integra ```console export TEST_ENV_NAME=openshift - export TEST_STORAGE_CLASS=gp2 + export TEST_STORAGE_CLASS=gp2-csi export TEST_BASE_DIR=/tmp ``` diff --git a/Documentation/Helm-Charts/operator-chart.md b/Documentation/Helm-Charts/operator-chart.md index 756980ef4f3d..2a7173c73949 100644 --- a/Documentation/Helm-Charts/operator-chart.md +++ b/Documentation/Helm-Charts/operator-chart.md @@ -54,7 +54,7 @@ The following table lists the configurable parameters of the rook-operator chart | `crds.enabled` | Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. **WARNING** Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see [the disaster recovery guide](https://rook.io/docs/rook/latest/Troubleshooting/disaster-recovery/#restoring-crds-after-deletion) to restore them. 
| `true` | | `csi.allowUnsupportedVersion` | Allow starting an unsupported ceph-csi image | `false` | | `csi.attacher.repository` | Kubernetes CSI Attacher image repository | `"registry.k8s.io/sig-storage/csi-attacher"` | -| `csi.attacher.tag` | Attacher image tag | `"v4.5.1"` | +| `csi.attacher.tag` | Attacher image tag | `"v4.6.1"` | | `csi.cephFSAttachRequired` | Whether to skip any attach operation altogether for CephFS PVCs. See more details [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object). If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. **WARNING** It's highly discouraged to use this for CephFS RWO volumes. Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details. | `true` | | `csi.cephFSFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `"File"` | | `csi.cephFSKernelMountOptions` | Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. 
Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR | `nil` | @@ -113,7 +113,7 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.pluginPriorityClassName` | PriorityClassName to be set on csi driver plugin pods | `"system-node-critical"` | | `csi.pluginTolerations` | Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet | `nil` | | `csi.provisioner.repository` | Kubernetes CSI provisioner image repository | `"registry.k8s.io/sig-storage/csi-provisioner"` | -| `csi.provisioner.tag` | Provisioner image tag | `"v4.0.1"` | +| `csi.provisioner.tag` | Provisioner image tag | `"v5.0.1"` | | `csi.provisionerNodeAffinity` | The node labels for affinity of the CSI provisioner deployment [^1] | `nil` | | `csi.provisionerPriorityClassName` | PriorityClassName to be set on csi driver provisioner pods | `"system-cluster-critical"` | | `csi.provisionerReplicas` | Set replicas for csi provisioner deployment | `2` | @@ -125,16 +125,16 @@ The following table lists the configurable parameters of the rook-operator chart | `csi.rbdPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. 
| `1` | | `csi.rbdPodLabels` | Labels to add to the CSI RBD Deployments and DaemonSets Pods | `nil` | | `csi.registrar.repository` | Kubernetes CSI registrar image repository | `"registry.k8s.io/sig-storage/csi-node-driver-registrar"` | -| `csi.registrar.tag` | Registrar image tag | `"v2.10.1"` | +| `csi.registrar.tag` | Registrar image tag | `"v2.11.1"` | | `csi.resizer.repository` | Kubernetes CSI resizer image repository | `"registry.k8s.io/sig-storage/csi-resizer"` | -| `csi.resizer.tag` | Resizer image tag | `"v1.10.1"` | +| `csi.resizer.tag` | Resizer image tag | `"v1.11.1"` | | `csi.serviceMonitor.enabled` | Enable ServiceMonitor for Ceph CSI drivers | `false` | | `csi.serviceMonitor.interval` | Service monitor scrape interval | `"10s"` | | `csi.serviceMonitor.labels` | ServiceMonitor additional labels | `{}` | | `csi.serviceMonitor.namespace` | Use a different namespace for the ServiceMonitor | `nil` | | `csi.sidecarLogLevel` | Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. 
| `0` | | `csi.snapshotter.repository` | Kubernetes CSI snapshotter image repository | `"registry.k8s.io/sig-storage/csi-snapshotter"` | -| `csi.snapshotter.tag` | Snapshotter image tag | `"v7.0.2"` | +| `csi.snapshotter.tag` | Snapshotter image tag | `"v8.0.1"` | | `csi.topology.domainLabels` | domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains | `nil` | | `csi.topology.enabled` | Enable topology based provisioning | `false` | | `currentNamespaceOnly` | Whether the operator should watch cluster CRD in its own namespace or not | `false` | @@ -151,7 +151,7 @@ The following table lists the configurable parameters of the rook-operator chart | `enableOBCWatchOperatorNamespace` | Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used | `true` | | `hostpathRequiresPrivileged` | Runs Ceph Pods as privileged to be able to write to `hostPaths` in OpenShift with SELinux restrictions. | `false` | | `image.pullPolicy` | Image pull policy | `"IfNotPresent"` | -| `image.repository` | Image | `"rook/ceph"` | +| `image.repository` | Image | `"docker.io/rook/ceph"` | | `image.tag` | Image tag | `master` | | `imagePullSecrets` | imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts. | `nil` | | `logLevel` | Global log level for the operator. 
Options: `ERROR`, `WARNING`, `INFO`, `DEBUG` | `"INFO"` | diff --git a/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md b/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md index 795c4fbab78a..59c0575c5a9a 100644 --- a/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md +++ b/Documentation/Storage-Configuration/Ceph-CSI/custom-images.md @@ -19,11 +19,11 @@ The default upstream images are included below, which you can change to your des ```yaml ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.12.0" -ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1" -ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.1" -ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.5.1" -ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.10.1" -ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2" +ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1" +ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1" +ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1" +ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1" +ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1" ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.9.0" ``` diff --git a/Makefile b/Makefile index de860b82fd99..5922c4b14c4a 100644 --- a/Makefile +++ b/Makefile @@ -32,7 +32,7 @@ all: build # Controller-gen version # f284e2e8... 
is master ahead of v0.5.0 which has ability to generate embedded objectmeta in CRDs -CONTROLLER_GEN_VERSION=v0.14.0 +CONTROLLER_GEN_VERSION=v0.16.1 # Set GOBIN ifeq (,$(shell go env GOBIN)) diff --git a/PendingReleaseNotes.md b/PendingReleaseNotes.md index 4f222865221b..2ef38449c60b 100644 --- a/PendingReleaseNotes.md +++ b/PendingReleaseNotes.md @@ -25,3 +25,4 @@ - CephObjectStore support for keystone authentication for S3 and Swift (see [#9088](https://github.com/rook/rook/issues/9088)). - Support K8s versions v1.26 through v1.31. +- Use fully-qualified image names (`docker.io/rook/ceph`) in operator manifests and helm charts diff --git a/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml b/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml index e90250cb6bea..24f47ea9d4a5 100644 --- a/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml +++ b/build/csv/ceph/ceph.rook.io_cephblockpoolradosnamespaces.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephblockpoolradosnamespaces.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephblockpools.yaml b/build/csv/ceph/ceph.rook.io_cephblockpools.yaml index 1a87863bcea0..f82d46db40e8 100644 --- a/build/csv/ceph/ceph.rook.io_cephblockpools.yaml +++ b/build/csv/ceph/ceph.rook.io_cephblockpools.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephblockpools.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml b/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml index d9a377fc83fb..f9d0fa2e81b6 100644 --- a/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml +++ 
b/build/csv/ceph/ceph.rook.io_cephbucketnotifications.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephbucketnotifications.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml b/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml index 18a2a43f6b32..e854f5f8d637 100644 --- a/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml +++ b/build/csv/ceph/ceph.rook.io_cephbuckettopics.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephbuckettopics.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephclients.yaml b/build/csv/ceph/ceph.rook.io_cephclients.yaml index 7869729e17cf..2a3ab74c3c8a 100644 --- a/build/csv/ceph/ceph.rook.io_cephclients.yaml +++ b/build/csv/ceph/ceph.rook.io_cephclients.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephclients.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephclusters.yaml b/build/csv/ceph/ceph.rook.io_cephclusters.yaml index 1c76ccee2c36..db2978259122 100644 --- a/build/csv/ceph/ceph.rook.io_cephclusters.yaml +++ b/build/csv/ceph/ceph.rook.io_cephclusters.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephclusters.ceph.rook.io spec: @@ -252,6 +252,7 @@ spec: format: int32 type: integer service: + default: "" type: string 
required: - port @@ -339,6 +340,7 @@ spec: format: int32 type: integer service: + default: "" type: string required: - port @@ -1495,6 +1497,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object @@ -1554,6 +1558,8 @@ spec: properties: allowDeviceClassUpdate: type: boolean + allowOsdCrushWeightUpdate: + type: boolean backfillFullRatio: maximum: 1 minimum: 0 @@ -1638,6 +1644,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object @@ -2830,6 +2838,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object diff --git a/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml b/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml index 6003df89d5af..cfa3949c3a84 100644 --- a/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml +++ b/build/csv/ceph/ceph.rook.io_cephcosidrivers.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephcosidrivers.ceph.rook.io spec: @@ -554,6 +554,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object diff --git a/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml b/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml index 939d93893790..37fdd7d7e653 100644 --- a/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml +++ b/build/csv/ceph/ceph.rook.io_cephfilesystemmirrors.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephfilesystemmirrors.ceph.rook.io spec: @@ -563,6 +563,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object diff --git 
a/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml b/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml index 277298c06ca4..fce67dc19fa6 100644 --- a/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml +++ b/build/csv/ceph/ceph.rook.io_cephfilesystems.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephfilesystems.ceph.rook.io spec: @@ -345,6 +345,7 @@ spec: format: int32 type: integer service: + default: "" type: string required: - port @@ -928,6 +929,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object @@ -976,6 +979,7 @@ spec: format: int32 type: integer service: + default: "" type: string required: - port diff --git a/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml b/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml index c0e7aebade7e..30e38a5a0ed9 100644 --- a/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml +++ b/build/csv/ceph/ceph.rook.io_cephfilesystemsubvolumegroups.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephfilesystemsubvolumegroups.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephnfses.yaml b/build/csv/ceph/ceph.rook.io_cephnfses.yaml index 2dfb9259624b..2c5a69d5b4d0 100644 --- a/build/csv/ceph/ceph.rook.io_cephnfses.yaml +++ b/build/csv/ceph/ceph.rook.io_cephnfses.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephnfses.ceph.rook.io spec: @@ -808,6 +808,8 @@ spec: properties: name: 
type: string + request: + type: string required: - name type: object @@ -1123,6 +1125,7 @@ spec: format: int32 type: integer service: + default: "" type: string required: - port @@ -1708,6 +1711,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object diff --git a/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml b/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml index 7f13f7a34010..7dd5ce07e00b 100644 --- a/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml +++ b/build/csv/ceph/ceph.rook.io_cephobjectrealms.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephobjectrealms.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml b/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml index c2ff4ad69e02..9bb9dee6b533 100644 --- a/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml +++ b/build/csv/ceph/ceph.rook.io_cephobjectstores.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephobjectstores.ceph.rook.io spec: @@ -767,6 +767,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object @@ -836,6 +838,7 @@ spec: format: int32 type: integer service: + default: "" type: string required: - port @@ -922,6 +925,7 @@ spec: format: int32 type: integer service: + default: "" type: string required: - port diff --git a/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml b/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml index ea3f4c6f0b53..31090140268f 100644 --- a/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml +++ b/build/csv/ceph/ceph.rook.io_cephobjectstoreusers.yaml @@ -2,7 +2,7 @@ 
apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephobjectstoreusers.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml b/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml index 87bc226619b0..bf0a5c6ab6c7 100644 --- a/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml +++ b/build/csv/ceph/ceph.rook.io_cephobjectzonegroups.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephobjectzonegroups.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml b/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml index ef6b78cf5f42..f5946e3142e8 100644 --- a/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml +++ b/build/csv/ceph/ceph.rook.io_cephobjectzones.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephobjectzones.ceph.rook.io spec: diff --git a/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml b/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml index fa690789cd27..58a375f8c369 100644 --- a/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml +++ b/build/csv/ceph/ceph.rook.io_cephrbdmirrors.yaml @@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 creationTimestamp: null name: cephrbdmirrors.ceph.rook.io spec: @@ -577,6 +577,8 @@ spec: properties: name: type: string + request: + type: string required: - name type: object diff --git 
a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml index 97aca838ea72..e74a4c21d98d 100644 --- a/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml +++ b/build/csv/ceph/rook-ceph-operator.clusterserviceversion.yaml @@ -2496,6 +2496,14 @@ spec: - get - list - watch + - apiGroups: + - storage.k8s.io + resources: + - csinodes + verbs: + - get + - list + - watch - apiGroups: - "" resources: @@ -2989,14 +2997,6 @@ spec: - get - list - watch - - apiGroups: - - storage.k8s.io - resources: - - csinodes - verbs: - - get - - list - - watch serviceAccountName: rook-csi-rbd-provisioner-sa - rules: - verbs: @@ -3157,7 +3157,7 @@ spec: fieldPath: metadata.namespace - name: ROOK_OBC_WATCH_OPERATOR_NAMESPACE value: "true" - image: docker.io/rook/ceph:v1.13.0.399.g9c0d795e2 + image: docker.io/rook/ceph:master name: rook-ceph-operator resources: {} securityContext: diff --git a/build/makelib/golang.mk b/build/makelib/golang.mk index 7b75ea3ba920..e1ae8468e0af 100644 --- a/build/makelib/golang.mk +++ b/build/makelib/golang.mk @@ -132,8 +132,8 @@ go.test.unit: @$(MAKE) $(GOJUNIT) @echo === go test unit-tests @mkdir -p $(GO_TEST_OUTPUT) - CGO_ENABLED=$(CGO_ENABLED_VALUE) $(GOHOST) test -v -cover $(GO_STATIC_FLAGS) $(GO_PACKAGES) - CGO_ENABLED=$(CGO_ENABLED_VALUE) $(GOHOST) test -v -cover $(GO_TEST_FLAGS) $(GO_STATIC_FLAGS) $(GO_PACKAGES) 2>&1 | tee $(GO_TEST_OUTPUT)/unit-tests.log + CGO_ENABLED=$(CGO_ENABLED_VALUE) $(GO) test -v -cover $(GO_STATIC_FLAGS) $(GO_PACKAGES) + CGO_ENABLED=$(CGO_ENABLED_VALUE) $(GO) test -v -cover $(GO_TEST_FLAGS) $(GO_STATIC_FLAGS) $(GO_PACKAGES) 2>&1 | tee $(GO_TEST_OUTPUT)/unit-tests.log @cat $(GO_TEST_OUTPUT)/unit-tests.log | $(GOJUNIT) -set-exit-code > $(GO_TEST_OUTPUT)/unit-tests.xml .PHONY: diff --git a/deploy/charts/rook-ceph/templates/clusterrole.yaml b/deploy/charts/rook-ceph/templates/clusterrole.yaml index fb1827699267..6b5af7536dc1 100644 --- 
a/deploy/charts/rook-ceph/templates/clusterrole.yaml +++ b/deploy/charts/rook-ceph/templates/clusterrole.yaml @@ -504,6 +504,9 @@ rules: - apiGroups: [""] resources: ["nodes"] verbs: ["get", "list", "watch"] + - apiGroups: ["storage.k8s.io"] + resources: ["csinodes"] + verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["persistentvolumes"] verbs: ["get", "list", "watch", "create", "update", "delete", "patch"] @@ -652,9 +655,6 @@ rules: - apiGroups: [""] resources: ["nodes"] verbs: ["get", "list", "watch"] - - apiGroups: ["storage.k8s.io"] - resources: ["csinodes"] - verbs: ["get", "list", "watch"] --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 diff --git a/deploy/charts/rook-ceph/templates/configmap.yaml b/deploy/charts/rook-ceph/templates/configmap.yaml index 13c8e96c9235..ea1c5230f107 100644 --- a/deploy/charts/rook-ceph/templates/configmap.yaml +++ b/deploy/charts/rook-ceph/templates/configmap.yaml @@ -9,6 +9,9 @@ data: ROOK_LOG_LEVEL: {{ .Values.logLevel | quote }} ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: {{ .Values.cephCommandsTimeoutSeconds | quote }} ROOK_OBC_WATCH_OPERATOR_NAMESPACE: {{ .Values.enableOBCWatchOperatorNamespace | quote }} +{{- if .Values.operatorMetricsBindAddress }} + ROOK_OPERATOR_METRICS_BIND_ADDRESS: {{ .Values.operatorMetricsBindAddress | quote }} +{{- end }} {{- if .Values.obcProvisionerNamePrefix }} ROOK_OBC_PROVISIONER_NAME_PREFIX: {{ .Values.obcProvisionerNamePrefix | quote }} {{- end }} diff --git a/deploy/charts/rook-ceph/templates/resources.yaml b/deploy/charts/rook-ceph/templates/resources.yaml index 263c0f27fbd2..568c976430f7 100644 --- a/deploy/charts/rook-ceph/templates/resources.yaml +++ b/deploy/charts/rook-ceph/templates/resources.yaml @@ -4,7 +4,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: 
cephblockpoolradosnamespaces.ceph.rook.io spec: @@ -95,7 +95,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephblockpools.ceph.rook.io spec: @@ -527,7 +527,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephbucketnotifications.ceph.rook.io spec: @@ -690,7 +690,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephbuckettopics.ceph.rook.io spec: @@ -850,7 +850,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephclients.ceph.rook.io spec: @@ -934,7 +934,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephclusters.ceph.rook.io spec: @@ -1284,11 +1284,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. 
type: string required: @@ -1432,11 +1432,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -1852,7 +1852,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -2078,7 +2078,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -2311,7 +2311,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -2373,7 +2373,6 @@ spec: the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. 
- TODO: this design is not final and this field is subject to change in the future. type: string kind: description: |- @@ -2444,7 +2443,6 @@ spec: description: |- An IPv4 or IPv6 network CIDR. - This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code. pattern: ^[0-9a-fA-F:.]{2,}\/[0-9]{1,3}$ @@ -2456,7 +2454,6 @@ spec: description: |- An IPv4 or IPv6 network CIDR. - This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code. pattern: ^[0-9a-fA-F:.]{2,}\/[0-9]{1,3}$ @@ -2552,15 +2549,12 @@ spec: networks when the "multus" network provider is used. This config section is not used for other network providers. - Valid keys are "public" and "cluster". Refer to Ceph networking documentation for more: https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/ - Refer to Multus network annotation documentation for help selecting values: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation - Rook will make a best-effort attempt to automatically detect CIDR address ranges for given network attachment definitions. Rook's methods are robust but may be imprecise for sufficiently complicated networks. Rook's auto-detection process obtains a new IP address @@ -2568,7 +2562,6 @@ spec: partially detects, or if underlying networks do not support reusing old IP addresses, it is best to use the 'addressRanges' config section to specify CIDR ranges for the Ceph cluster. 
- As a contrived example, one can use a theoretical Kubernetes-wide network for Ceph client traffic and a theoretical Rook-only network for Ceph replication traffic as shown: selectors: @@ -3115,11 +3108,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -3130,6 +3121,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -3208,6 +3205,12 @@ spec: allowDeviceClassUpdate: description: Whether to allow updating the device class after the OSD is initially provisioned type: boolean + allowOsdCrushWeightUpdate: + description: |- + Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. + This allows cluster data to be rebalanced to make most effective use of new OSD space. + The default is false since data rebalancing can cause temporary cluster slowdown. + type: boolean backfillFullRatio: description: BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90. maximum: 1 @@ -3312,11 +3315,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. 
@@ -3327,6 +3328,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -3574,7 +3581,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -4647,11 +4654,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -4662,6 +4667,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -4916,7 +4927,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). 
type: string volumeMode: description: |- @@ -5168,7 +5179,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -5372,7 +5383,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephcosidrivers.ceph.rook.io spec: @@ -5941,11 +5952,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -5956,6 +5965,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -6000,7 +6015,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephfilesystemmirrors.ceph.rook.io spec: @@ -6578,11 +6593,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
- This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -6593,6 +6606,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -6671,7 +6690,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephfilesystems.ceph.rook.io spec: @@ -7151,11 +7170,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -7779,11 +7798,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -7794,6 +7811,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. 
+ type: string required: - name type: object @@ -7867,11 +7890,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -8241,7 +8264,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephfilesystemsubvolumegroups.ceph.rook.io spec: @@ -8380,7 +8403,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephnfses.ceph.rook.io spec: @@ -8447,13 +8470,11 @@ spec: ConfigFiles defines where the Kerberos configuration should be sourced from. Config files will be placed into the `/etc/krb5.conf.rook/` directory. - If this is left empty, Rook will not add any files. This allows you to manage the files yourself however you wish. For example, you may build them into your custom Ceph container image or use the Vault agent injector to securely add the files via annotations on the CephNFS spec (passed to the NFS server pods). - Rook configures Kerberos to log to stderr. We suggest removing logging sections from config files to avoid consuming unnecessary disk space from logging to files. properties: @@ -9255,11 +9276,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. 
@@ -9270,6 +9289,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -9623,11 +9648,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -10254,11 +10279,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -10269,6 +10292,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. 
+ type: string required: - name type: object @@ -10354,7 +10383,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephobjectrealms.ceph.rook.io spec: @@ -10445,7 +10474,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephobjectstores.ceph.rook.io spec: @@ -11312,11 +11341,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -11327,6 +11354,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -11429,11 +11462,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -11575,11 +11608,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). 
- If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -12099,7 +12132,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephobjectstoreusers.ceph.rook.io spec: @@ -12342,7 +12375,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephobjectzonegroups.ceph.rook.io spec: @@ -12438,7 +12471,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephobjectzones.ceph.rook.io spec: @@ -12491,7 +12524,6 @@ spec: CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone. - If a CephObjectStore endpoint is omitted from this list, that object store's gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic). @@ -12938,7 +12970,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 helm.sh/resource-policy: keep name: cephrbdmirrors.ceph.rook.io spec: @@ -13533,11 +13565,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. 
@@ -13548,6 +13578,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object diff --git a/deploy/charts/rook-ceph/values.yaml b/deploy/charts/rook-ceph/values.yaml index f20417fa157f..f93548f38619 100644 --- a/deploy/charts/rook-ceph/values.yaml +++ b/deploy/charts/rook-ceph/values.yaml @@ -4,7 +4,7 @@ image: # -- Image - repository: rook/ceph + repository: docker.io/rook/ceph # -- Image tag # @default -- `master` tag: master @@ -493,31 +493,31 @@ csi: # -- Kubernetes CSI registrar image repository repository: registry.k8s.io/sig-storage/csi-node-driver-registrar # -- Registrar image tag - tag: v2.10.1 + tag: v2.11.1 provisioner: # -- Kubernetes CSI provisioner image repository repository: registry.k8s.io/sig-storage/csi-provisioner # -- Provisioner image tag - tag: v4.0.1 + tag: v5.0.1 snapshotter: # -- Kubernetes CSI snapshotter image repository repository: registry.k8s.io/sig-storage/csi-snapshotter # -- Snapshotter image tag - tag: v7.0.2 + tag: v8.0.1 attacher: # -- Kubernetes CSI Attacher image repository repository: registry.k8s.io/sig-storage/csi-attacher # -- Attacher image tag - tag: v4.5.1 + tag: v4.6.1 resizer: # -- Kubernetes CSI resizer image repository repository: registry.k8s.io/sig-storage/csi-resizer # -- Resizer image tag - tag: v1.10.1 + tag: v1.11.1 # -- Image pull policy imagePullPolicy: IfNotPresent diff --git a/deploy/examples/cluster-on-pvc.yaml b/deploy/examples/cluster-on-pvc.yaml index 2087394aa3cc..01017800ce48 100644 --- a/deploy/examples/cluster-on-pvc.yaml +++ b/deploy/examples/cluster-on-pvc.yaml @@ -28,7 +28,7 @@ spec: # size appropriate for monitor data will be used. 
volumeClaimTemplate: spec: - storageClassName: gp2 + storageClassName: gp2-csi resources: requests: storage: 10Gi @@ -54,6 +54,7 @@ spec: maxLogSize: 500M # SUFFIX may be 'M' or 'G'. Must be at least 1M. storage: allowDeviceClassUpdate: false # whether to allow changing the device class of an OSD after it is created + allowOsdCrushWeightUpdate: true # whether to allow resizing the OSD CRUSH weight after the OSD PVC size is increased storageClassDeviceSets: - name: set1 # The number of OSDs to create from this device set @@ -64,7 +65,7 @@ spec: portable: true # Certain storage class in the Cloud are slow # Rook can configure the OSD running on PVC to accommodate that by tuning some of the Ceph internal - # Currently, "gp2" has been identified as such + # Currently, "gp2-csi" has been identified as such tuneDeviceClass: true # Certain storage class in the Cloud are fast # Rook can configure the OSD running on PVC to accommodate that by tuning some of the Ceph internal @@ -132,7 +133,7 @@ spec: requests: storage: 10Gi # IMPORTANT: Change the storage class depending on your environment - storageClassName: gp2 + storageClassName: gp2-csi volumeMode: Block accessModes: - ReadWriteOnce diff --git a/deploy/examples/cluster-stretched-aws.yaml b/deploy/examples/cluster-stretched-aws.yaml index bf82efd38165..c286e7dcc943 100644 --- a/deploy/examples/cluster-stretched-aws.yaml +++ b/deploy/examples/cluster-stretched-aws.yaml @@ -37,7 +37,7 @@ spec: - name: us-east-2c volumeClaimTemplate: spec: - storageClassName: gp2 + storageClassName: gp2-csi resources: requests: storage: 10Gi @@ -85,7 +85,7 @@ spec: resources: requests: storage: 10Gi - storageClassName: gp2 + storageClassName: gp2-csi volumeMode: Block accessModes: - ReadWriteOnce @@ -118,7 +118,7 @@ spec: resources: requests: storage: 10Gi - storageClassName: gp2 + storageClassName: gp2-csi volumeMode: Block accessModes: - ReadWriteOnce diff --git a/deploy/examples/cluster-test.yaml b/deploy/examples/cluster-test.yaml index
7d33a8e93f3d..bde8e182e82b 100644 --- a/deploy/examples/cluster-test.yaml +++ b/deploy/examples/cluster-test.yaml @@ -34,6 +34,7 @@ spec: useAllNodes: true useAllDevices: true allowDeviceClassUpdate: true + allowOsdCrushWeightUpdate: false #deviceFilter: #config: # deviceClass: testclass diff --git a/deploy/examples/cluster.yaml b/deploy/examples/cluster.yaml index 6127c79d4fee..11860340376e 100644 --- a/deploy/examples/cluster.yaml +++ b/deploy/examples/cluster.yaml @@ -262,6 +262,7 @@ spec: # encryptedDevice: "true" # the default value for this option is "false" # deviceClass: "myclass" # specify a device class for OSDs in the cluster allowDeviceClassUpdate: false # whether to allow changing the device class of an OSD after it is created + allowOsdCrushWeightUpdate: false # whether to allow resizing the OSD CRUSH weight after the OSD PVC size is increased # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
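The `allowOsdCrushWeightUpdate` option added across the example clusters in this diff sits under `spec.storage` of a CephCluster. A minimal standalone sketch (field names taken from this diff; metadata values are the usual example defaults, not part of this change):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    allowDeviceClassUpdate: false
    # Opt in to Rook resizing the OSD CRUSH weight when an OSD PVC is expanded.
    # Defaults to false: the resulting data rebalancing can temporarily slow the cluster.
    allowOsdCrushWeightUpdate: true
```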
# nodes: diff --git a/deploy/examples/common.yaml b/deploy/examples/common.yaml index 0f5d57715158..bb7703389f8c 100644 --- a/deploy/examples/common.yaml +++ b/deploy/examples/common.yaml @@ -46,6 +46,9 @@ rules: - apiGroups: [""] resources: ["nodes"] verbs: ["get", "list", "watch"] + - apiGroups: ["storage.k8s.io"] + resources: ["csinodes"] + verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["persistentvolumes"] verbs: ["get", "list", "watch", "create", "update", "delete", "patch"] @@ -213,9 +216,6 @@ rules: - apiGroups: [""] resources: ["nodes"] verbs: ["get", "list", "watch"] - - apiGroups: ["storage.k8s.io"] - resources: ["csinodes"] - verbs: ["get", "list", "watch"] --- # The cluster role for managing all the cluster-specific resources in a namespace apiVersion: rbac.authorization.k8s.io/v1 diff --git a/deploy/examples/crds.yaml b/deploy/examples/crds.yaml index 5e87f7a14652..03d4288cdbb6 100644 --- a/deploy/examples/crds.yaml +++ b/deploy/examples/crds.yaml @@ -8,7 +8,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephblockpoolradosnamespaces.ceph.rook.io spec: group: ceph.rook.io @@ -98,7 +98,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephblockpools.ceph.rook.io spec: group: ceph.rook.io @@ -529,7 +529,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephbucketnotifications.ceph.rook.io spec: group: ceph.rook.io @@ -691,7 +691,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + 
controller-gen.kubebuilder.io/version: v0.16.1 name: cephbuckettopics.ceph.rook.io spec: group: ceph.rook.io @@ -850,7 +850,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephclients.ceph.rook.io spec: group: ceph.rook.io @@ -933,7 +933,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephclusters.ceph.rook.io spec: group: ceph.rook.io @@ -1282,11 +1282,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -1430,11 +1430,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -1850,7 +1850,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -2076,7 +2076,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. 
More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -2309,7 +2309,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -2371,7 +2371,6 @@ spec: the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. - TODO: this design is not final and this field is subject to change in the future. type: string kind: description: |- @@ -2442,7 +2441,6 @@ spec: description: |- An IPv4 or IPv6 network CIDR. - This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code. pattern: ^[0-9a-fA-F:.]{2,}\/[0-9]{1,3}$ @@ -2454,7 +2452,6 @@ spec: description: |- An IPv4 or IPv6 network CIDR. - This naive kubebuilder regex provides immediate feedback for some typos and for a common problem case where the range spec is forgotten (e.g., /24). Rook does in-depth validation in code. pattern: ^[0-9a-fA-F:.]{2,}\/[0-9]{1,3}$ @@ -2550,15 +2547,12 @@ spec: networks when the "multus" network provider is used. This config section is not used for other network providers. - Valid keys are "public" and "cluster". 
Refer to Ceph networking documentation for more: https://docs.ceph.com/en/reef/rados/configuration/network-config-ref/ - Refer to Multus network annotation documentation for help selecting values: https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/how-to-use.md#run-pod-with-network-annotation - Rook will make a best-effort attempt to automatically detect CIDR address ranges for given network attachment definitions. Rook's methods are robust but may be imprecise for sufficiently complicated networks. Rook's auto-detection process obtains a new IP address @@ -2566,7 +2560,6 @@ spec: partially detects, or if underlying networks do not support reusing old IP addresses, it is best to use the 'addressRanges' config section to specify CIDR ranges for the Ceph cluster. - As a contrived example, one can use a theoretical Kubernetes-wide network for Ceph client traffic and a theoretical Rook-only network for Ceph replication traffic as shown: selectors: @@ -3113,11 +3106,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -3128,6 +3119,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. 
+ type: string required: - name type: object @@ -3206,6 +3203,12 @@ spec: allowDeviceClassUpdate: description: Whether to allow updating the device class after the OSD is initially provisioned type: boolean + allowOsdCrushWeightUpdate: + description: |- + Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. + This allows cluster data to be rebalanced to make most effective use of new OSD space. + The default is false since data rebalancing can cause temporary cluster slowdown. + type: boolean backfillFullRatio: description: BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90. maximum: 1 @@ -3310,11 +3313,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -3325,6 +3326,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -3572,7 +3579,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). 
type: string volumeMode: description: |- @@ -4645,11 +4652,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -4660,6 +4665,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -4914,7 +4925,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). type: string volumeMode: description: |- @@ -5166,7 +5177,7 @@ spec: set to a Pending state, as reflected by the modifyVolumeStatus field, until such as a resource exists. More info: https://kubernetes.io/docs/concepts/storage/volume-attributes-classes/ - (Alpha) Using this field requires the VolumeAttributesClass feature gate to be enabled. + (Beta) Using this field requires the VolumeAttributesClass feature gate to be enabled (off by default). 
type: string volumeMode: description: |- @@ -5370,7 +5381,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephcosidrivers.ceph.rook.io spec: group: ceph.rook.io @@ -5938,11 +5949,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -5953,6 +5962,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -5997,7 +6012,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephfilesystemmirrors.ceph.rook.io spec: group: ceph.rook.io @@ -6574,11 +6589,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -6589,6 +6602,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. 
+ If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -6667,7 +6686,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephfilesystems.ceph.rook.io spec: group: ceph.rook.io @@ -7146,11 +7165,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -7774,11 +7793,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -7789,6 +7806,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -7862,11 +7885,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. 
type: string required: @@ -8236,7 +8259,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephfilesystemsubvolumegroups.ceph.rook.io spec: group: ceph.rook.io @@ -8374,7 +8397,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephnfses.ceph.rook.io spec: group: ceph.rook.io @@ -8440,13 +8463,11 @@ spec: ConfigFiles defines where the Kerberos configuration should be sourced from. Config files will be placed into the `/etc/krb5.conf.rook/` directory. - If this is left empty, Rook will not add any files. This allows you to manage the files yourself however you wish. For example, you may build them into your custom Ceph container image or use the Vault agent injector to securely add the files via annotations on the CephNFS spec (passed to the NFS server pods). - Rook configures Kerberos to log to stderr. We suggest removing logging sections from config files to avoid consuming unnecessary disk space from logging to files. properties: @@ -9248,11 +9269,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -9263,6 +9282,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. 
+ type: string required: - name type: object @@ -9616,11 +9641,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -10247,11 +10272,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -10262,6 +10285,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -10347,7 +10376,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephobjectrealms.ceph.rook.io spec: group: ceph.rook.io @@ -10437,7 +10466,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephobjectstores.ceph.rook.io spec: group: ceph.rook.io @@ -11303,11 +11332,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. 
items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -11318,6 +11345,12 @@ spec: the Pod where this field is used. It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object @@ -11420,11 +11453,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. type: string required: @@ -11566,11 +11599,11 @@ spec: format: int32 type: integer service: + default: "" description: |- Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). - If this is not specified, the default behavior is defined by gRPC. 
type: string required: @@ -12090,7 +12123,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephobjectstoreusers.ceph.rook.io spec: group: ceph.rook.io @@ -12332,7 +12365,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephobjectzonegroups.ceph.rook.io spec: group: ceph.rook.io @@ -12427,7 +12460,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephobjectzones.ceph.rook.io spec: group: ceph.rook.io @@ -12479,7 +12512,6 @@ spec: CephObjectStore associated with this CephObjectStoreZone reachable to peer clusters. The list can have one or more endpoints pointing to different RGW servers in the zone. - If a CephObjectStore endpoint is omitted from this list, that object store's gateways will not receive multisite replication data (see CephObjectStore.spec.gateway.disableMultisiteSyncTraffic). @@ -12926,7 +12958,7 @@ apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: annotations: - controller-gen.kubebuilder.io/version: v0.14.0 + controller-gen.kubebuilder.io/version: v0.16.1 name: cephrbdmirrors.ceph.rook.io spec: group: ceph.rook.io @@ -13520,11 +13552,9 @@ spec: Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. - This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. - This field is immutable. It can only be set for containers. items: description: ResourceClaim references one entry in PodSpec.ResourceClaims. @@ -13535,6 +13565,12 @@ spec: the Pod where this field is used. 
It makes that resource available inside a container. type: string + request: + description: |- + Request is the name chosen for a request in the referenced claim. + If empty, everything from the claim is made available, otherwise + only the result of this request. + type: string required: - name type: object diff --git a/deploy/examples/csi-operator.yaml b/deploy/examples/csi-operator.yaml index 6735f1462b5d..5eaaf127ccd6 100644 --- a/deploy/examples/csi-operator.yaml +++ b/deploy/examples/csi-operator.yaml @@ -1,12 +1,3 @@ -apiVersion: v1 -kind: Namespace -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - control-plane: controller-manager - name: ceph-csi-operator-system ---- apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: @@ -118,34 +109,24 @@ spec: spec: description: ClientProfileMappingSpec defines the desired state of ClientProfileMapping properties: - blockPoolMapping: + mappings: items: description: - BlockPoolMappingSpec define a mapiing between a local - and remote block pools + MappingsSpec defines a mapping between local and remote + profiles properties: - local: - description: - BlockPoolRefSpec identify a blockpool - client - profile pair - properties: - clientProfileName: - type: string - poolId: - minimum: 0 - type: integer - type: object - remote: - description: - BlockPoolRefSpec identify a blockpool - client - profile pair - properties: - clientProfileName: + blockPoolIdMapping: + items: + items: type: string - poolId: - minimum: 0 - type: integer - type: object + maxItems: 2 + minItems: 2 + type: array + type: array + localClientProfile: + type: string + remoteClientProfile: + type: string type: object type: array type: object @@ -3935,12 +3916,14 @@ spec: description: log rotation for csi pods properties: logHostPath: - description: - LogHostPath is the prefix directory path for - the csi log files + description: |- + LogHostPath is the prefix directory path
for the csi log files + Defaults to /var/lib/cephcsi type: string maxFiles: - description: MaxFiles is the number of logrtoate files + description: |- + MaxFiles is the number of logrotate files + Defaults to 7 type: integer maxLogSize: anyOf: @@ -3960,6 +3943,9 @@ spec: - monthly type: string type: object + x-kubernetes-validations: + - message: Either maxLogSize or periodicity must be set + rule: (has(self.maxLogSize)) || (has(self.periodicity)) verbosity: description: |- Log verbosity level for driver pods, @@ -11046,12 +11032,14 @@ spec: description: log rotation for csi pods properties: logHostPath: - description: - LogHostPath is the prefix directory path - for the csi log files + description: |- + LogHostPath is the prefix directory path for the csi log files + Defaults to /var/lib/cephcsi type: string maxFiles: - description: MaxFiles is the number of logrtoate files + description: |- + MaxFiles is the number of logrotate files + Defaults to 7 type: integer maxLogSize: anyOf: @@ -11073,6 +11061,9 @@ spec: - monthly type: string type: object + x-kubernetes-validations: + - message: Either maxLogSize or periodicity must be set + rule: (has(self.maxLogSize)) || (has(self.periodicity)) verbosity: description: |- Log verbosity level for driver pods, @@ -14483,20 +14474,80 @@ spec: --- apiVersion: v1 kind: ServiceAccount +metadata: + name: ceph-csi-cephfs-ctrlplugin-sa + namespace: rook-ceph +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: ceph-csi-cephfs-nodeplugin-sa + namespace: rook-ceph +--- +apiVersion: v1 +kind: ServiceAccount metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-controller-manager + name: ceph-csi-controller-manager + namespace: rook-ceph +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: ceph-csi-nfs-ctrlplugin-sa + namespace: rook-ceph +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: ceph-csi-nfs-nodeplugin-sa + namespace: rook-ceph
+--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: ceph-csi-rbd-ctrlplugin-sa + namespace: rook-ceph +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: ceph-csi-rbd-nodeplugin-sa namespace: rook-ceph --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role +metadata: + name: ceph-csi-cephfs-ctrlplugin-r + namespace: rook-ceph +rules: + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create + - apiGroups: + - csiaddons.openshift.io + resources: + - csiaddonsnodes + verbs: + - create +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-leader-election-role + name: ceph-csi-leader-election-role namespace: rook-ceph rules: - apiGroups: @@ -14532,12 +14583,49 @@ rules: - patch --- apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: ceph-csi-rbd-ctrlplugin-r + namespace: rook-ceph +rules: + - apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - get + - watch + - list + - delete + - update + - create + - apiGroups: + - csiaddons.openshift.io + resources: + - csiaddonsnodes + verbs: + - create +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: ceph-csi-rbd-nodeplugin-r + namespace: rook-ceph +rules: + - apiGroups: + - csiaddons.openshift.io + resources: + - csiaddonsnodes + verbs: + - create +--- +apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-cephconnection-viewer-role + name: ceph-csi-cephconnection-viewer-role rules: - apiGroups: - csi.ceph.io @@ -14560,7 +14648,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-cephconnections-editor-role + name: 
ceph-csi-cephconnections-editor-role rules: - apiGroups: - csi.ceph.io @@ -14583,11 +14671,152 @@ rules: --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole +metadata: + name: ceph-csi-cephfs-ctrlplugin-cr +rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - list + - apiGroups: + - "" + resources: + - persistentvolumes + verbs: + - get + - list + - watch + - create + - delete + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get + - list + - watch + - patch + - update + - apiGroups: + - storage.k8s.io + resources: + - storageclasses + verbs: + - get + - list + - watch + - apiGroups: + - "" + resources: + - events + verbs: + - list + - watch + - create + - update + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments + verbs: + - get + - list + - watch + - patch + - apiGroups: + - storage.k8s.io + resources: + - volumeattachments/status + verbs: + - patch + - apiGroups: + - "" + resources: + - persistentvolumeclaims/status + verbs: + - patch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshots + verbs: + - get + - list + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotclasses + verbs: + - get + - list + - watch + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents + verbs: + - get + - list + - watch + - patch + - update + - apiGroups: + - snapshot.storage.k8s.io + resources: + - volumesnapshotcontents/status + verbs: + - update + - patch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: ceph-csi-cephfs-nodeplugin-cr +rules: + - apiGroups: + - "" + resources: + - nodes + verbs: + - get + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - apiGroups: + - "" + resources: + - serviceaccounts + verbs: + - get + - apiGroups: + - "" + resources: + - serviceaccounts/token + verbs: + - create 
+--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-clientprofile-viewer-role + name: ceph-csi-clientprofile-viewer-role rules: - apiGroups: - csi.ceph.io @@ -14610,7 +14839,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-clientprofilemapping-editor-role + name: ceph-csi-clientprofilemapping-editor-role rules: - apiGroups: - csi.ceph.io @@ -14637,7 +14866,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-clientprofilemapping-viewer-role + name: ceph-csi-clientprofilemapping-viewer-role rules: - apiGroups: - csi.ceph.io @@ -14660,7 +14889,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-clientprofiles-editor-role + name: ceph-csi-clientprofiles-editor-role rules: - apiGroups: - csi.ceph.io @@ -14687,7 +14916,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-driver-editor-role + name: ceph-csi-driver-editor-role rules: - apiGroups: - csi.ceph.io @@ -14714,7 +14943,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-driver-viewer-role + name: ceph-csi-driver-viewer-role rules: - apiGroups: - csi.ceph.io @@ -14734,7 +14963,7 @@ rules: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - name: ceph-csi-operator-manager-role + name: ceph-csi-manager-role rules: - apiGroups: - "" @@ -14789,6 +15018,7 @@ rules: resources: - cephconnections verbs: + - delete - get - list - update @@ -14898,7 +15128,7 @@ metadata: labels: app.kubernetes.io/managed-by: kustomize app.kubernetes.io/name: ceph-csi-operator - name: 
ceph-csi-operator-metrics-reader + name: ceph-csi-metrics-reader rules: - nonResourceURLs: - /metrics @@ -14908,378 +15138,106 @@ rules: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-operatorconfig-editor-role -rules: - - apiGroups: - - csi.ceph.io - resources: - - operatorconfigs - verbs: - - create - - delete - - get - - list - - patch - - update - - watch - - apiGroups: - - csi.ceph.io - resources: - - operatorconfigs/status - verbs: - - get ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-operatorconfig-viewer-role + name: ceph-csi-nfs-ctrlplugin-cr rules: - apiGroups: - - csi.ceph.io + - "" resources: - - operatorconfigs + - persistentvolumes verbs: - get - list - watch - - apiGroups: - - csi.ceph.io - resources: - - operatorconfigs/status - verbs: - - get ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-proxy-role -rules: - - apiGroups: - - authentication.k8s.io - resources: - - tokenreviews - verbs: - create - - apiGroups: - - authorization.k8s.io - resources: - - subjectaccessreviews - verbs: - - create ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: RoleBinding -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-leader-election-rolebinding - namespace: rook-ceph -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: Role - name: ceph-csi-operator-leader-election-role -subjects: - - kind: ServiceAccount - name: ceph-csi-operator-controller-manager - namespace: rook-ceph ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: 
ClusterRoleBinding -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-manager-rolebinding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: ceph-csi-operator-manager-role -subjects: - - kind: ServiceAccount - name: ceph-csi-operator-controller-manager - namespace: rook-ceph ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRoleBinding -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - name: ceph-csi-operator-proxy-rolebinding -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: ceph-csi-operator-proxy-role -subjects: - - kind: ServiceAccount - name: ceph-csi-operator-controller-manager - namespace: rook-ceph ---- -apiVersion: v1 -kind: Service -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - control-plane: controller-manager - name: ceph-csi-operator-controller-manager-metrics-service - namespace: rook-ceph -spec: - ports: - - name: https - port: 8443 - protocol: TCP - targetPort: https - selector: - control-plane: controller-manager ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - labels: - app.kubernetes.io/managed-by: kustomize - app.kubernetes.io/name: ceph-csi-operator - control-plane: controller-manager - name: ceph-csi-operator-controller-manager - namespace: rook-ceph -spec: - replicas: 1 - selector: - matchLabels: - control-plane: ceph-csi-op-controller-manager - template: - metadata: - annotations: - kubectl.kubernetes.io/default-container: manager - labels: - control-plane: ceph-csi-op-controller-manager - spec: - containers: - - args: - - --secure-listen-address=0.0.0.0:8443 - - --upstream=http://127.0.0.1:8080/ - - --logtostderr=true - - --v=0 - image: gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0 - name: kube-rbac-proxy - ports: - - containerPort: 8443 - name: https - protocol: TCP - 
resources: - limits: - cpu: 500m - memory: 128Mi - requests: - cpu: 5m - memory: 64Mi - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: - - ALL - - args: - - --health-probe-bind-address=:8081 - - --metrics-bind-address=127.0.0.1:8080 - - --leader-elect - command: - - /manager - env: - - name: OPERATOR_NAMESPACE - valueFrom: - fieldRef: - fieldPath: metadata.namespace - image: subham03/csi:latest - livenessProbe: - httpGet: - path: /healthz - port: 8081 - initialDelaySeconds: 15 - periodSeconds: 20 - name: manager - readinessProbe: - httpGet: - path: /readyz - port: 8081 - initialDelaySeconds: 5 - periodSeconds: 10 - resources: - limits: - cpu: 500m - memory: 128Mi - requests: - cpu: 10m - memory: 64Mi - securityContext: - allowPrivilegeEscalation: false - capabilities: - drop: - - ALL - securityContext: - runAsNonRoot: true - serviceAccountName: ceph-csi-operator-controller-manager - terminationGracePeriodSeconds: 10 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: csi-cephfs-ctrlplugin-sa - namespace: rook-ceph ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: csi-cephfs-nodeplugin-sa - namespace: rook-ceph ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: csi-rbd-ctrlplugin-sa - namespace: rook-ceph ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: csi-rbd-nodeplugin-sa - namespace: rook-ceph ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: csi-cephfs-ctrlplugin-r - namespace: rook-ceph -rules: - - apiGroups: - - coordination.k8s.io - resources: - - leases - verbs: - - get - - watch - - list - - delete - update - - create ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: csi-rbd-ctrlplugin-r - namespace: rook-ceph -rules: + - delete + - patch - apiGroups: - - coordination.k8s.io + - "" resources: - - leases + - persistentvolumeclaims verbs: - get - - watch - list - - delete + - watch + - patch - update - - create - apiGroups: - - 
csiaddons.openshift.io - resources: - - csiaddonsnodes - verbs: - - create ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: Role -metadata: - name: csi-rbd-nodeplugin-r - namespace: rook-ceph -rules: - - apiGroups: - - csiaddons.openshift.io - resources: - - csiaddonsnodes - verbs: - - create ---- -apiVersion: rbac.authorization.k8s.io/v1 -kind: ClusterRole -metadata: - name: csi-cephfs-ctrlplugin-cr -rules: - - apiGroups: - - "" + - storage.k8s.io resources: - - secrets + - storageclasses verbs: - get - list + - watch - apiGroups: - "" resources: - - persistentvolumes + - events verbs: - get - list - watch - create - - delete + - update - patch - apiGroups: - - "" + - storage.k8s.io resources: - - persistentvolumeclaims + - csinodes verbs: - get - list - watch - - patch - - update - apiGroups: - - storage.k8s.io + - "" resources: - - storageclasses + - nodes verbs: - get - list - watch - apiGroups: - - "" + - coordination.k8s.io resources: - - events + - leases verbs: + - get - list - watch - create - update - patch - apiGroups: - - storage.k8s.io + - "" + resources: + - secrets + verbs: + - get + - apiGroups: + - snapshot.storage.k8s.io resources: - - volumeattachments + - volumesnapshotclasses verbs: - get - list - watch - - patch - apiGroups: - - storage.k8s.io + - snapshot.storage.k8s.io resources: - - volumeattachments/status + - volumesnapshotcontents verbs: + - get + - list + - watch + - update - patch - apiGroups: - - "" + - snapshot.storage.k8s.io resources: - - persistentvolumeclaims/status + - volumesnapshotcontents/status verbs: + - update - patch - apiGroups: - snapshot.storage.k8s.io @@ -15289,35 +15247,31 @@ rules: - get - list - apiGroups: - - snapshot.storage.k8s.io + - "" resources: - - volumesnapshotclasses + - persistentvolumeclaims/status verbs: - - get - - list - - watch + - patch - apiGroups: - - snapshot.storage.k8s.io + - storage.k8s.io resources: - - volumesnapshotcontents + - volumeattachments verbs: - get - list - watch - patch - - 
update - apiGroups: - - snapshot.storage.k8s.io + - storage.k8s.io resources: - - volumesnapshotcontents/status + - volumeattachments/status verbs: - - update - patch --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - name: csi-cephfs-nodeplugin-cr + name: ceph-csi-nfs-nodeplugin-cr rules: - apiGroups: - "" @@ -15325,35 +15279,82 @@ rules: - nodes verbs: - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + name: ceph-csi-operatorconfig-editor-role +rules: - apiGroups: - - "" + - csi.ceph.io resources: - - secrets + - operatorconfigs verbs: + - create + - delete - get + - list + - patch + - update + - watch - apiGroups: - - "" + - csi.ceph.io resources: - - configmaps + - operatorconfigs/status + verbs: + - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + name: ceph-csi-operatorconfig-viewer-role +rules: + - apiGroups: + - csi.ceph.io + resources: + - operatorconfigs verbs: - get + - list + - watch - apiGroups: - - "" + - csi.ceph.io resources: - - serviceaccounts + - operatorconfigs/status verbs: - get +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + name: ceph-csi-proxy-role +rules: - apiGroups: - - "" + - authentication.k8s.io resources: - - serviceaccounts/token + - tokenreviews + verbs: + - create + - apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews verbs: - create --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - name: csi-rbd-ctrlplugin-cr + name: ceph-csi-rbd-ctrlplugin-cr rules: - apiGroups: - "" @@ -15509,7 +15510,7 @@ rules: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: - name: 
csi-rbd-nodeplugin-cr + name: ceph-csi-rbd-nodeplugin-cr rules: - apiGroups: - "" @@ -15560,93 +15561,275 @@ rules: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: - name: csi-cephfs-ctrlplugin-rb + name: ceph-csi-cephfs-ctrlplugin-rb + namespace: rook-ceph +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: ceph-csi-cephfs-ctrlplugin-r +subjects: + - kind: ServiceAccount + name: ceph-csi-cephfs-ctrlplugin-sa + namespace: rook-ceph +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + name: ceph-csi-leader-election-rolebinding namespace: rook-ceph roleRef: apiGroup: rbac.authorization.k8s.io kind: Role - name: csi-cephfs-ctrlplugin-r + name: ceph-csi-leader-election-role subjects: - kind: ServiceAccount - name: csi-cephfs-ctrlplugin-sa + name: ceph-csi-controller-manager namespace: rook-ceph --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: - name: csi-rbd-ctrlplugin-rb + name: ceph-csi-rbd-ctrlplugin-rb namespace: rook-ceph roleRef: apiGroup: rbac.authorization.k8s.io kind: Role - name: csi-rbd-ctrlplugin-r + name: ceph-csi-rbd-ctrlplugin-r subjects: - kind: ServiceAccount - name: csi-rbd-ctrlplugin-sa + name: ceph-csi-rbd-ctrlplugin-sa namespace: rook-ceph --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: - name: csi-rbd-nodeplugin-rb + name: ceph-csi-rbd-nodeplugin-rb namespace: rook-ceph roleRef: apiGroup: rbac.authorization.k8s.io kind: Role - name: csi-rbd-nodeplugin-r + name: ceph-csi-rbd-nodeplugin-r +subjects: + - kind: ServiceAccount + name: ceph-csi-rbd-nodeplugin-sa + namespace: rook-ceph +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: ceph-csi-cephfs-ctrlplugin-crb +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: ceph-csi-cephfs-ctrlplugin-cr +subjects: + - kind: ServiceAccount + 
name: ceph-csi-cephfs-ctrlplugin-sa + namespace: rook-ceph +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: ceph-csi-cephfs-nodeplugin-crb +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: ceph-csi-cephfs-nodeplugin-cr +subjects: + - kind: ServiceAccount + name: ceph-csi-cephfs-nodeplugin-sa + namespace: rook-ceph +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + name: ceph-csi-manager-rolebinding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: ceph-csi-manager-role subjects: - kind: ServiceAccount - name: csi-rbd-nodeplugin-sa + name: ceph-csi-controller-manager namespace: rook-ceph --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: csi-cephfs-ctrlplugin-crb + name: ceph-csi-nfs-ctrlplugin-crb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole - name: csi-cephfs-ctrlplugin-cr + name: ceph-csi-nfs-ctrlplugin-cr subjects: - kind: ServiceAccount - name: csi-cephfs-ctrlplugin-sa + name: ceph-csi-nfs-ctrlplugin-sa namespace: rook-ceph --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: csi-cephfs-nodeplugin-crb + name: ceph-csi-nfs-nodeplugin-crb +roleRef: + kind: ClusterRole + name: ceph-csi-nfs-nodeplugin-cr +subjects: + - kind: ServiceAccount + name: ceph-csi-nfs-nodeplugin-sa + namespace: rook-ceph +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + name: ceph-csi-proxy-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole - name: csi-cephfs-nodeplugin-cr + name: ceph-csi-proxy-role subjects: - kind: ServiceAccount - name: csi-cephfs-nodeplugin-sa + name: ceph-csi-controller-manager namespace: rook-ceph --- 
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: csi-rbd-ctrlplugin-crb + name: ceph-csi-rbd-ctrlplugin-crb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole - name: csi-rbd-ctrlplugin-cr + name: ceph-csi-rbd-ctrlplugin-cr subjects: - kind: ServiceAccount - name: csi-rbd-ctrlplugin-sa + name: ceph-csi-rbd-ctrlplugin-sa namespace: rook-ceph --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: - name: csi-rbd-nodeplugin-crb + name: ceph-csi-rbd-nodeplugin-crb roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole - name: csi-rbd-nodeplugin-cr + name: ceph-csi-rbd-nodeplugin-cr subjects: - kind: ServiceAccount - name: csi-rbd-nodeplugin-sa + name: ceph-csi-rbd-nodeplugin-sa namespace: rook-ceph +--- +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + control-plane: controller-manager + name: ceph-csi-controller-manager-metrics-service + namespace: rook-ceph +spec: + ports: + - name: https + port: 8443 + protocol: TCP + targetPort: https + selector: + control-plane: controller-manager +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/managed-by: kustomize + app.kubernetes.io/name: ceph-csi-operator + control-plane: controller-manager + name: ceph-csi-controller-manager + namespace: rook-ceph +spec: + replicas: 1 + selector: + matchLabels: + control-plane: ceph-csi-op-controller-manager + template: + metadata: + annotations: + kubectl.kubernetes.io/default-container: manager + labels: + control-plane: ceph-csi-op-controller-manager + spec: + containers: + - args: + - --secure-listen-address=0.0.0.0:8443 + - --upstream=http://127.0.0.1:8080/ + - --v=0 + image: gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0 + name: kube-rbac-proxy + ports: + - containerPort: 8443 + name: https + protocol: TCP + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 5m + memory: 
64Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + readOnlyRootFilesystem: true + - args: + - --health-probe-bind-address=:8081 + - --metrics-bind-address=127.0.0.1:8080 + - --leader-elect + command: + - /manager + env: + - name: OPERATOR_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: CSI_SERVICE_ACCOUNT_PREFIX + value: ceph-csi- + image: quay.io/cephcsi/ceph-csi-operator:v0.1.0 + livenessProbe: + httpGet: + path: /healthz + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + readinessProbe: + httpGet: + path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 10m + memory: 64Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + readOnlyRootFilesystem: true + securityContext: + runAsNonRoot: true + serviceAccountName: ceph-csi-controller-manager + terminationGracePeriodSeconds: 10 diff --git a/deploy/examples/direct-mount.yaml b/deploy/examples/direct-mount.yaml index 2788c7fc6d81..ff9ebbea261f 100644 --- a/deploy/examples/direct-mount.yaml +++ b/deploy/examples/direct-mount.yaml @@ -19,7 +19,7 @@ spec: serviceAccountName: rook-ceph-default containers: - name: rook-direct-mount - image: rook/ceph:master + image: docker.io/rook/ceph:master command: ["/bin/bash"] args: ["-m", "-c", "/usr/local/bin/toolbox.sh"] imagePullPolicy: IfNotPresent diff --git a/deploy/examples/images.txt b/deploy/examples/images.txt index 4b6c4d7a1945..88a25cdd0df7 100644 --- a/deploy/examples/images.txt +++ b/deploy/examples/images.txt @@ -1,11 +1,11 @@ + docker.io/rook/ceph:master gcr.io/k8s-staging-sig-storage/objectstorage-sidecar:v20240513-v0.1.0-35-gefb3255 quay.io/ceph/ceph:v18.2.4 quay.io/ceph/cosi:v0.1.2 quay.io/cephcsi/cephcsi:v3.12.0 quay.io/csiaddons/k8s-sidecar:v0.9.0 - registry.k8s.io/sig-storage/csi-attacher:v4.5.1 - 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1 - registry.k8s.io/sig-storage/csi-provisioner:v4.0.1 - registry.k8s.io/sig-storage/csi-resizer:v1.10.1 - registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2 - rook/ceph:master + registry.k8s.io/sig-storage/csi-attacher:v4.6.1 + registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1 + registry.k8s.io/sig-storage/csi-provisioner:v5.0.1 + registry.k8s.io/sig-storage/csi-resizer:v1.11.1 + registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1 diff --git a/deploy/examples/operator-openshift.yaml b/deploy/examples/operator-openshift.yaml index 85767a3eec84..bd26983924f2 100644 --- a/deploy/examples/operator-openshift.yaml +++ b/deploy/examples/operator-openshift.yaml @@ -198,11 +198,11 @@ data: # of the CSI driver to something other than what is officially supported, change # these images to the desired release of the CSI driver. # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.12.0" - # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1" - # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.10.1" - # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.1" - # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2" - # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.5.1" + # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1" + # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1" + # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1" + # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1" + # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1" # (Optional) set user created priorityclassName for csi plugin pods. 
CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical" @@ -673,7 +673,7 @@ spec: serviceAccountName: rook-ceph-system containers: - name: rook-ceph-operator - image: rook/ceph:master + image: docker.io/rook/ceph:master args: ["ceph", "operator"] securityContext: runAsNonRoot: true diff --git a/deploy/examples/operator.yaml b/deploy/examples/operator.yaml index 86458948d386..454380848316 100644 --- a/deploy/examples/operator.yaml +++ b/deploy/examples/operator.yaml @@ -25,6 +25,9 @@ data: # The logging level for the operator: ERROR | WARNING | INFO | DEBUG ROOK_LOG_LEVEL: "INFO" + # The address for the operator's controller-runtime metrics. 0 is disabled. :8080 serves metrics on port 8080. + ROOK_OPERATOR_METRICS_BIND_ADDRESS: "0" + # Allow using loop devices for osds in test clusters. ROOK_CEPH_ALLOW_LOOP_DEVICES: "false" @@ -125,11 +128,11 @@ data: # of the CSI driver to something other than what is officially supported, change # these images to the desired release of the CSI driver. # ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.12.0" - # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1" - # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.10.1" - # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v4.0.1" - # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2" - # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.5.1" + # ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1" + # ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1" + # ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1" + # ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1" + # ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1" # To indicate the image pull policy to be applied to all the containers in the csi driver pods. 
# ROOK_CSI_IMAGE_PULL_POLICY: "IfNotPresent" @@ -599,7 +602,7 @@ spec: serviceAccountName: rook-ceph-system containers: - name: rook-ceph-operator - image: rook/ceph:master + image: docker.io/rook/ceph:master args: ["ceph", "operator"] securityContext: runAsNonRoot: true diff --git a/deploy/examples/osd-purge.yaml b/deploy/examples/osd-purge.yaml index f7915180dca7..5d4b594093f6 100644 --- a/deploy/examples/osd-purge.yaml +++ b/deploy/examples/osd-purge.yaml @@ -28,7 +28,7 @@ spec: serviceAccountName: rook-ceph-purge-osd containers: - name: osd-removal - image: rook/ceph:master + image: docker.io/rook/ceph:master # TODO: Insert the OSD ID in the last parameter that is to be removed # The OSD IDs are a comma-separated list. For example: "0" or "0,2". # If you want to preserve the OSD PVCs, set `--preserve-pvc true`. diff --git a/deploy/examples/sqlitevfs-client.yaml b/deploy/examples/sqlitevfs-client.yaml index a821bd2923f1..77f4613beba5 100644 --- a/deploy/examples/sqlitevfs-client.yaml +++ b/deploy/examples/sqlitevfs-client.yaml @@ -111,7 +111,7 @@ spec: initContainers: ## Setup Ceph SQLite VFS - name: setup - image: bitnami/kubectl:1.21.11 + image: docker.io/bitnami/kubectl:1.21.11 command: - /bin/bash - -c diff --git a/deploy/examples/toolbox-job.yaml b/deploy/examples/toolbox-job.yaml index 940cb98660f9..a6864f59641f 100644 --- a/deploy/examples/toolbox-job.yaml +++ b/deploy/examples/toolbox-job.yaml @@ -10,7 +10,7 @@ spec: spec: initContainers: - name: config-init - image: rook/ceph:master + image: docker.io/rook/ceph:master command: ["/usr/local/bin/toolbox.sh"] args: ["--skip-watch"] imagePullPolicy: IfNotPresent @@ -29,7 +29,7 @@ spec: mountPath: /var/lib/rook-ceph-mon containers: - name: script - image: rook/ceph:master + image: docker.io/rook/ceph:master volumeMounts: - mountPath: /etc/ceph name: ceph-config diff --git a/deploy/examples/toolbox-operator-image.yaml b/deploy/examples/toolbox-operator-image.yaml index 4e733c17664f..9b7ddf13fec5 100644 --- 
a/deploy/examples/toolbox-operator-image.yaml +++ b/deploy/examples/toolbox-operator-image.yaml @@ -25,7 +25,7 @@ spec: serviceAccountName: rook-ceph-default containers: - name: rook-ceph-tools-operator-image - image: rook/ceph:master + image: docker.io/rook/ceph:master command: - /bin/bash - -c diff --git a/go.mod b/go.mod index e8caa832514a..bf1f75feef61 100644 --- a/go.mod +++ b/go.mod @@ -18,7 +18,7 @@ require ( github.com/IBM/keyprotect-go-client v0.15.1 github.com/aws/aws-sdk-go v1.55.5 github.com/banzaicloud/k8s-objectmatcher v1.8.0 - github.com/ceph/ceph-csi-operator/api v0.0.0-20240807130124-deb792aa7ad8 + github.com/ceph/ceph-csi-operator/api v0.0.0-20240819112305-88e6db254d6c github.com/ceph/go-ceph v0.28.0 github.com/coreos/pkg v0.0.0-20230601102743-20bbbf26f4d8 github.com/csi-addons/kubernetes-csi-addons v0.9.0 @@ -45,14 +45,14 @@ require ( golang.org/x/sync v0.8.0 gopkg.in/ini.v1 v1.67.0 gopkg.in/yaml.v2 v2.4.0 - k8s.io/api v0.30.3 - k8s.io/apiextensions-apiserver v0.30.3 - k8s.io/apimachinery v0.30.3 + k8s.io/api v0.31.0 + k8s.io/apiextensions-apiserver v0.31.0 + k8s.io/apimachinery v0.31.0 k8s.io/cli-runtime v0.30.3 - k8s.io/client-go v0.30.3 + k8s.io/client-go v0.31.0 k8s.io/cloud-provider v0.30.3 - k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0 - sigs.k8s.io/controller-runtime v0.18.4 + k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 + sigs.k8s.io/controller-runtime v0.19.0 sigs.k8s.io/mcs-api v0.1.0 sigs.k8s.io/yaml v1.4.0 ) @@ -65,11 +65,14 @@ require ( github.com/Azure/azure-sdk-for-go/sdk/keyvault/internal v0.7.1 // indirect github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect github.com/Masterminds/semver/v3 v3.2.1 // indirect + github.com/fxamacker/cbor/v2 v2.7.0 // indirect github.com/go-jose/go-jose/v4 v4.0.1 // indirect github.com/golang-jwt/jwt/v5 v5.2.1 // indirect github.com/kylelemons/godebug v1.1.0 // indirect github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect github.com/portworx/sched-ops 
v1.20.4-rc1 // indirect + github.com/x448/float16 v0.8.4 // indirect + gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect ) require ( @@ -124,11 +127,10 @@ require ( github.com/mailru/easyjson v0.7.7 // indirect github.com/mattn/go-colorable v0.1.13 // indirect github.com/mattn/go-isatty v0.0.20 // indirect - github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 // indirect github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect github.com/mitchellh/go-homedir v1.1.0 // indirect github.com/mitchellh/mapstructure v1.5.0 // indirect - github.com/moby/spdystream v0.2.0 // indirect + github.com/moby/spdystream v0.4.0 // indirect github.com/moby/term v0.0.0-20221205130635-1aeaba878587 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect @@ -138,10 +140,10 @@ require ( github.com/openshift/api v0.0.0-20240301093301-ce10821dc999 // indirect github.com/peterbourgon/diskv v2.0.1+incompatible // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - github.com/prometheus/client_golang v1.18.0 // indirect - github.com/prometheus/client_model v0.5.0 // indirect - github.com/prometheus/common v0.45.0 // indirect - github.com/prometheus/procfs v0.12.0 // indirect + github.com/prometheus/client_golang v1.19.1 // indirect + github.com/prometheus/client_model v0.6.1 // indirect + github.com/prometheus/common v0.55.0 // indirect + github.com/prometheus/procfs v0.15.1 // indirect github.com/ryanuber/go-glob v1.0.0 // indirect github.com/sirupsen/logrus v1.9.3 // indirect github.com/stretchr/objx v0.5.2 // indirect diff --git a/go.sum b/go.sum index 501df452a7d3..ed44f4e476d4 100644 --- a/go.sum +++ b/go.sum @@ -162,8 +162,8 @@ github.com/cenkalti/backoff/v3 v3.0.0/go.mod h1:cIeZDE3IrqwwJl6VUwCN6trj1oXrTS4r github.com/cenkalti/backoff/v3 v3.2.2 h1:cfUAAO3yvKMYKPrvhDuHSwQnhZNk/RMHKdZqKTxfm6M= github.com/cenkalti/backoff/v3 v3.2.2/go.mod 
h1:cIeZDE3IrqwwJl6VUwCN6trj1oXrTS4rc0ij+ULvLYs= github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU= -github.com/ceph/ceph-csi-operator/api v0.0.0-20240807130124-deb792aa7ad8 h1:hzHNnPUN5F8GbaypJlY0tWTRT6sPFR0UDSba3YowOHQ= -github.com/ceph/ceph-csi-operator/api v0.0.0-20240807130124-deb792aa7ad8/go.mod h1:odEUoarG26wXBCC2l4O4nMWhAz6VTKr2FRkv9yELgi8= +github.com/ceph/ceph-csi-operator/api v0.0.0-20240819112305-88e6db254d6c h1:JOhwt7+iM18pm9s9zAhAKGRJm615AdIaKklbUd7Z8So= +github.com/ceph/ceph-csi-operator/api v0.0.0-20240819112305-88e6db254d6c/go.mod h1:odEUoarG26wXBCC2l4O4nMWhAz6VTKr2FRkv9yELgi8= github.com/ceph/ceph-csi/api v0.0.0-20231227104434-06f9a98b7a83 h1:xWhLO5MR+diAsZoOcPe0zVe+JcJrqMaVbScShye6pXw= github.com/ceph/ceph-csi/api v0.0.0-20231227104434-06f9a98b7a83/go.mod h1:ZSvtS90FCB/becFi/rjy85sSw1igchaWZfUigxN9FxY= github.com/ceph/go-ceph v0.28.0 h1:ZjlDV9XiVmBQIe9bKbT5j2Ft/bse3Jm+Ui65yE/oFFU= @@ -266,6 +266,8 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= +github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= +github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= github.com/gemalto/flume v0.13.1 h1:wB9T4HP3D+3FRTymi8BzdDHkdTY8UbzH2eVSfYHmLxQ= github.com/gemalto/flume v0.13.1/go.mod h1:CCm9802zdB4Sy7Jx8dpHaFJjd4fF/nVfCIWBS4f8k9g= github.com/gemalto/kmip-go v0.0.10 h1:jAAZejUdRrspKigLoA62MTmIj0T7DDDOzdxHi1cDjoU= @@ -656,8 +658,6 @@ github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= 
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= -github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0 h1:jWpvCLoY8Z/e3VKvlsiIGKtc+UG6U5vzxaoagmhXfyg= -github.com/matttproud/golang_protobuf_extensions/v2 v2.0.0/go.mod h1:QUyp042oQthUoa9bqDv0ER0wrtXnBruoNd7aNjkbP+k= github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI= github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE= github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg= @@ -674,8 +674,9 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= -github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8= github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c= +github.com/moby/spdystream v0.4.0 h1:Vy79D6mHeJJjiPdFEL2yku1kl0chZpJfZcPpb16BRl8= +github.com/moby/spdystream v0.4.0/go.mod h1:xBAYlnt/ay+11ShkdFKNAG7LsyK/tmNBVvVOwrfMgdI= github.com/moby/term v0.0.0-20200312100748-672ec06f55cd/go.mod h1:DdlQx2hp0Ss5/fLikoLlEeIYiATotOjgB//nb973jeo= github.com/moby/term v0.0.0-20221205130635-1aeaba878587 h1:HfkjXDfhgVaN5rmueG8cL8KKeFNecRCXFhaJ2qZ5SKA= github.com/moby/term v0.0.0-20221205130635-1aeaba878587/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y= @@ -785,21 +786,21 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP github.com/prometheus/client_golang v0.9.3/go.mod 
h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= -github.com/prometheus/client_golang v1.18.0 h1:HzFfmkOzH5Q8L8G+kSJKUx5dtG87sewO+FoDDqP5Tbk= -github.com/prometheus/client_golang v1.18.0/go.mod h1:T+GXkCk5wSJyOqMIzVgvvjFDlkOQntgjkJWKrN5txjA= +github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE= +github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= -github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw= -github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI= +github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= +github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= 
-github.com/prometheus/common v0.45.0 h1:2BGz0eBc2hdMDLnO/8n0jeB3oPrt2D08CekT0lneoxM= -github.com/prometheus/common v0.45.0/go.mod h1:YJmSTw9BoKxJplESWWxlbyttQR4uaEcGyv9MZjVOJsY= +github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc= +github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8= github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= @@ -807,8 +808,8 @@ github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsT github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= -github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo= -github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo= +github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= +github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ= @@ -895,6 +896,8 @@ github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijb github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw= github.com/vishvananda/netns v0.0.4 
h1:Oeaw1EM2JMxD51g9uhtC0D7erkIjgmj8+JZc26m1YX8= github.com/vishvananda/netns v0.0.4/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xlab/treeprint v1.2.0 h1:HzHnuAF1plUN2zGlAFHbSQP2qJ0ZAD3XF5XD7OesXRQ= github.com/xlab/treeprint v1.2.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0= @@ -1544,6 +1547,8 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw= gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= +gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/evanphx/json-patch.v5 v5.7.0 h1:dGKGylPlZ/jus2g1YqhhyzfH0gPy2R8/MYUpW/OslTY= gopkg.in/evanphx/json-patch.v5 v5.7.0/go.mod h1:/kvTRh1TVm5wuM6OkHxqXtE/1nUZZpihg29RtuIyfvk= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= @@ -1599,15 +1604,15 @@ k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo= k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ= k8s.io/api v0.23.5/go.mod h1:Na4XuKng8PXJ2JsploYYrivXrINeTaycCGcYgF91Xm8= k8s.io/api v0.26.0/go.mod h1:k6HDTaIFC8yn1i6pSClSqIwLABIcLV9l5Q4EcngKnQg= -k8s.io/api v0.30.3 h1:ImHwK9DCsPA9uoU3rVh4QHAHHK5dTSv1nxJUapx8hoQ= -k8s.io/api v0.30.3/go.mod h1:GPc8jlzoe5JG3pb0KJCSLX5oAFIW3/qNJITlDj8BH04= +k8s.io/api v0.31.0 h1:b9LiSjR2ym/SzTOlfMHm1tr7/21aD7fSkqgD/CVJBCo= +k8s.io/api v0.31.0/go.mod 
h1:0YiFF+JfFxMM6+1hQei8FY8M7s1Mth+z/q7eF1aJkTE= k8s.io/apiextensions-apiserver v0.0.0-20190409022649-727a075fdec8/go.mod h1:IxkesAMoaCRoLrPJdZNZUQp9NfZnzqaVzLhb2VEQzXE= k8s.io/apiextensions-apiserver v0.18.2/go.mod h1:q3faSnRGmYimiocj6cHQ1I3WpLqmDgJFlKL37fC4ZvY= k8s.io/apiextensions-apiserver v0.18.3/go.mod h1:TMsNGs7DYpMXd+8MOCX8KzPOCx8fnZMoIGB24m03+JE= k8s.io/apiextensions-apiserver v0.18.4/go.mod h1:NYeyeYq4SIpFlPxSAB6jHPIdvu3hL0pc36wuRChybio= k8s.io/apiextensions-apiserver v0.20.1/go.mod h1:ntnrZV+6a3dB504qwC5PN/Yg9PBiDNt1EVqbW2kORVk= -k8s.io/apiextensions-apiserver v0.30.3 h1:oChu5li2vsZHx2IvnGP3ah8Nj3KyqG3kRSaKmijhB9U= -k8s.io/apiextensions-apiserver v0.30.3/go.mod h1:uhXxYDkMAvl6CJw4lrDN4CPbONkF3+XL9cacCT44kV4= +k8s.io/apiextensions-apiserver v0.31.0 h1:fZgCVhGwsclj3qCw1buVXCV6khjRzKC5eCFt24kyLSk= +k8s.io/apiextensions-apiserver v0.31.0/go.mod h1:b9aMDEYaEe5sdK+1T0KU78ApR/5ZVp4i56VacZYEHxk= k8s.io/apimachinery v0.0.0-20190404173353-6a84e37a896d/go.mod h1:ccL7Eh7zubPUSh9A3USN90/OzHNSVN6zxzde07TDCL0= k8s.io/apimachinery v0.18.2/go.mod h1:9SnR/e11v5IbyPCGbvJViimtJ0SwHG4nfZFjU77ftcA= k8s.io/apimachinery v0.18.3/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko= @@ -1619,8 +1624,8 @@ k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRp k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= k8s.io/apimachinery v0.23.5/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM= k8s.io/apimachinery v0.26.0/go.mod h1:tnPmbONNJ7ByJNz9+n9kMjNP8ON+1qoAIIC70lztu74= -k8s.io/apimachinery v0.30.3 h1:q1laaWCmrszyQuSQCfNB8cFgCuDAoPszKY4ucAjDwHc= -k8s.io/apimachinery v0.30.3/go.mod h1:iexa2somDaxdnj7bha06bhb43Zpa6eWH8N8dbqVjTUc= +k8s.io/apimachinery v0.31.0 h1:m9jOiSr3FoSSL5WO9bjm1n6B9KROYYgNZOb4tyZ1lBc= +k8s.io/apimachinery v0.31.0/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= k8s.io/apiserver v0.18.2/go.mod h1:Xbh066NqrZO8cbsoenCwyDJ1OSi8Ag8I2lezeHxzwzw= k8s.io/apiserver v0.18.3/go.mod 
h1:tHQRmthRPLUtwqsOnJJMoI8SW3lnoReZeE861lH8vUw= k8s.io/apiserver v0.18.4/go.mod h1:q+zoFct5ABNnYkGIaGQ3bcbUNdmPyOCoEBcg51LChY8= @@ -1635,8 +1640,8 @@ k8s.io/client-go v0.19.2/go.mod h1:S5wPhCqyDNAlzM9CnEdgTGV4OqhsW3jGO1UM1epwfJA= k8s.io/client-go v0.20.0/go.mod h1:4KWh/g+Ocd8KkCwKF8vUNnmqgv+EVnQDK4MBF4oB5tY= k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y= k8s.io/client-go v0.23.5/go.mod h1:flkeinTO1CirYgzMPRWxUCnV0G4Fbu2vLhYCObnt/r4= -k8s.io/client-go v0.30.3 h1:bHrJu3xQZNXIi8/MoxYtZBBWQQXwy16zqJwloXXfD3k= -k8s.io/client-go v0.30.3/go.mod h1:8d4pf8vYu665/kUbsxWAQ/JDBNWqfFeZnvFiVdmx89U= +k8s.io/client-go v0.31.0 h1:QqEJzNjbN2Yv1H79SsS+SWnXkBgVu4Pj3CJQgbx0gI8= +k8s.io/client-go v0.31.0/go.mod h1:Y9wvC76g4fLjmU0BA+rV+h2cncoadjvjjkkIGoTLcGU= k8s.io/cloud-provider v0.30.3 h1:SNWZmllTymOTzIPJuhtZH6il/qVi75dQARRQAm9k6VY= k8s.io/cloud-provider v0.30.3/go.mod h1:Ax0AVdHnM7tMYnJH1Ycy4SMBD98+4zA+tboUR9eYsY8= k8s.io/code-generator v0.18.2/go.mod h1:+UHX5rSbxmR8kzS+FAv7um6dtYrZokQvjHpDSYRVkTc= @@ -1684,8 +1689,8 @@ k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/ k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0 h1:jgGTlFYnhF1PM1Ax/lAlxUPE+KfCIXHaathvJg1C3ak= -k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A= +k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/quote/v3 v3.1.0/go.mod 
h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= @@ -1693,8 +1698,8 @@ sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.7/go.mod h1:PHgbrJT sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.14/go.mod h1:LEScyzhFmoF5pso/YSeBstl57mOzx9xlU9n85RGrDQg= sigs.k8s.io/controller-runtime v0.2.2/go.mod h1:9dyohw3ZtoXQuV1e766PHUn+cmrRCIcBh6XIMFNMZ+I= sigs.k8s.io/controller-runtime v0.6.1/go.mod h1:XRYBPdbf5XJu9kpS84VJiZ7h/u1hF3gEORz0efEja7A= -sigs.k8s.io/controller-runtime v0.18.4 h1:87+guW1zhvuPLh1PHybKdYFLU0YJp4FhJRmiHvm5BZw= -sigs.k8s.io/controller-runtime v0.18.4/go.mod h1:TVoGrfdpbA9VRFaRnKgk9P5/atA0pMwq+f+msb9M8Sg= +sigs.k8s.io/controller-runtime v0.19.0 h1:nWVM7aq+Il2ABxwiCizrVDSlmDcshi9llbaFbC0ji/Q= +sigs.k8s.io/controller-runtime v0.19.0/go.mod h1:iRmWllt8IlaLjvTTDLhRBXIEtkCK6hwVBJJsYS9Ajf4= sigs.k8s.io/controller-tools v0.3.0/go.mod h1:enhtKGfxZD1GFEoMgP8Fdbu+uKQ/cq1/WGJhdVChfvI= sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs= sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= diff --git a/pkg/apis/ceph.rook.io/v1/types.go b/pkg/apis/ceph.rook.io/v1/types.go index e305698ab914..12a756231c17 100755 --- a/pkg/apis/ceph.rook.io/v1/types.go +++ b/pkg/apis/ceph.rook.io/v1/types.go @@ -2986,6 +2986,11 @@ type StorageScopeSpec struct { // Whether to allow updating the device class after the OSD is initially provisioned // +optional AllowDeviceClassUpdate bool `json:"allowDeviceClassUpdate,omitempty"` + // Whether Rook will resize the OSD CRUSH weight when the OSD PVC size is increased. + // This allows cluster data to be rebalanced to make most effective use of new OSD space. + // The default is false since data rebalancing can cause temporary cluster slowdown. 
+ // +optional + AllowOsdCrushWeightUpdate bool `json:"allowOsdCrushWeightUpdate,omitempty"` } // OSDStore is the backend storage type used for creating the OSDs diff --git a/pkg/apis/go.mod b/pkg/apis/go.mod index e593a575ba19..093c0585519d 100644 --- a/pkg/apis/go.mod +++ b/pkg/apis/go.mod @@ -21,20 +21,22 @@ require ( github.com/libopenstorage/secrets v0.0.0-20240416031220-a17cf7f72c6c github.com/pkg/errors v0.9.1 github.com/stretchr/testify v1.9.0 - k8s.io/api v0.30.3 - k8s.io/apimachinery v0.30.3 + k8s.io/api v0.31.0 + k8s.io/apimachinery v0.31.0 ) require ( github.com/Masterminds/semver/v3 v3.2.1 // indirect + github.com/fxamacker/cbor/v2 v2.7.0 // indirect github.com/go-jose/go-jose/v4 v4.0.1 // indirect + github.com/google/go-cmp v0.6.0 // indirect github.com/google/uuid v1.6.0 // indirect github.com/onsi/ginkgo/v2 v2.20.0 // indirect github.com/onsi/gomega v1.34.1 // indirect - github.com/rogpeppe/go-internal v1.12.0 // indirect + github.com/x448/float16 v0.8.4 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect - k8s.io/client-go v0.30.3 // indirect - k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0 // indirect + k8s.io/client-go v0.31.0 // indirect + k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 // indirect sigs.k8s.io/yaml v1.4.0 // indirect ) @@ -43,7 +45,6 @@ require ( github.com/containernetworking/cni v1.2.0-rc1 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/emicklei/go-restful/v3 v3.12.1 // indirect - github.com/evanphx/json-patch v5.9.0+incompatible // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/go-logr/logr v1.4.2 // indirect github.com/go-openapi/jsonpointer v0.21.0 // indirect diff --git a/pkg/apis/go.sum b/pkg/apis/go.sum index 9f12c14acb70..f7222e6e5dbb 100644 --- a/pkg/apis/go.sum +++ b/pkg/apis/go.sum @@ -200,8 +200,6 @@ github.com/evanphx/json-patch v4.2.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi github.com/evanphx/json-patch v4.5.0+incompatible/go.mod 
h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= -github.com/evanphx/json-patch v5.9.0+incompatible h1:fBXyNpNMuTTDdquAq/uisOr2lShz4oaXpDTX2bLe7ls= -github.com/evanphx/json-patch v5.9.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM= github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE= @@ -211,6 +209,8 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ= github.com/fsnotify/fsnotify v1.7.0 h1:8JEhPFa5W2WU7YfeZzPNqzMP6Lwt7L2715Ggo0nosvA= github.com/fsnotify/fsnotify v1.7.0/go.mod h1:40Bi/Hjc2AVfZrqy+aj+yEI+/bRxZnMJyTJwOpGvigM= +github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= +github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg= github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= @@ -749,6 +749,8 @@ github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijb github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw= github.com/vishvananda/netns v0.0.4 h1:Oeaw1EM2JMxD51g9uhtC0D7erkIjgmj8+JZc26m1YX8= github.com/vishvananda/netns v0.0.4/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= 
+github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= @@ -1360,6 +1362,8 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw= gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= +gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/h2non/gock.v1 v1.0.15/go.mod h1:sX4zAkdYX1TRGJ2JY156cFspQn4yRWn6p9EMdODlynE= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= @@ -1405,8 +1409,8 @@ k8s.io/api v0.20.1/go.mod h1:KqwcCVogGxQY3nBlRpwt+wpAMF/KjaCc7RpywacvqUo= k8s.io/api v0.20.4/go.mod h1:++lNL1AJMkDymriNniQsWRkMDzRaX2Y/POTUi8yvqYQ= k8s.io/api v0.23.5/go.mod h1:Na4XuKng8PXJ2JsploYYrivXrINeTaycCGcYgF91Xm8= k8s.io/api v0.26.0/go.mod h1:k6HDTaIFC8yn1i6pSClSqIwLABIcLV9l5Q4EcngKnQg= -k8s.io/api v0.30.3 h1:ImHwK9DCsPA9uoU3rVh4QHAHHK5dTSv1nxJUapx8hoQ= -k8s.io/api v0.30.3/go.mod h1:GPc8jlzoe5JG3pb0KJCSLX5oAFIW3/qNJITlDj8BH04= +k8s.io/api v0.31.0 h1:b9LiSjR2ym/SzTOlfMHm1tr7/21aD7fSkqgD/CVJBCo= +k8s.io/api v0.31.0/go.mod h1:0YiFF+JfFxMM6+1hQei8FY8M7s1Mth+z/q7eF1aJkTE= k8s.io/apiextensions-apiserver v0.0.0-20190409022649-727a075fdec8/go.mod h1:IxkesAMoaCRoLrPJdZNZUQp9NfZnzqaVzLhb2VEQzXE= k8s.io/apiextensions-apiserver v0.18.3/go.mod 
h1:TMsNGs7DYpMXd+8MOCX8KzPOCx8fnZMoIGB24m03+JE= k8s.io/apiextensions-apiserver v0.20.1/go.mod h1:ntnrZV+6a3dB504qwC5PN/Yg9PBiDNt1EVqbW2kORVk= @@ -1419,8 +1423,8 @@ k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRp k8s.io/apimachinery v0.20.4/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU= k8s.io/apimachinery v0.23.5/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM= k8s.io/apimachinery v0.26.0/go.mod h1:tnPmbONNJ7ByJNz9+n9kMjNP8ON+1qoAIIC70lztu74= -k8s.io/apimachinery v0.30.3 h1:q1laaWCmrszyQuSQCfNB8cFgCuDAoPszKY4ucAjDwHc= -k8s.io/apimachinery v0.30.3/go.mod h1:iexa2somDaxdnj7bha06bhb43Zpa6eWH8N8dbqVjTUc= +k8s.io/apimachinery v0.31.0 h1:m9jOiSr3FoSSL5WO9bjm1n6B9KROYYgNZOb4tyZ1lBc= +k8s.io/apimachinery v0.31.0/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= k8s.io/apiserver v0.18.3/go.mod h1:tHQRmthRPLUtwqsOnJJMoI8SW3lnoReZeE861lH8vUw= k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU= k8s.io/client-go v0.18.3/go.mod h1:4a/dpQEvzAhT1BbuWW09qvIaGw6Gbu1gZYiQZIi1DMw= @@ -1429,8 +1433,8 @@ k8s.io/client-go v0.19.2/go.mod h1:S5wPhCqyDNAlzM9CnEdgTGV4OqhsW3jGO1UM1epwfJA= k8s.io/client-go v0.20.0/go.mod h1:4KWh/g+Ocd8KkCwKF8vUNnmqgv+EVnQDK4MBF4oB5tY= k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y= k8s.io/client-go v0.23.5/go.mod h1:flkeinTO1CirYgzMPRWxUCnV0G4Fbu2vLhYCObnt/r4= -k8s.io/client-go v0.30.3 h1:bHrJu3xQZNXIi8/MoxYtZBBWQQXwy16zqJwloXXfD3k= -k8s.io/client-go v0.30.3/go.mod h1:8d4pf8vYu665/kUbsxWAQ/JDBNWqfFeZnvFiVdmx89U= +k8s.io/client-go v0.31.0 h1:QqEJzNjbN2Yv1H79SsS+SWnXkBgVu4Pj3CJQgbx0gI8= +k8s.io/client-go v0.31.0/go.mod h1:Y9wvC76g4fLjmU0BA+rV+h2cncoadjvjjkkIGoTLcGU= k8s.io/code-generator v0.18.3/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c= k8s.io/code-generator v0.19.0/go.mod h1:moqLn7w0t9cMs4+5CQyxnfA/HV8MF6aAVENF+WZZhgk= k8s.io/code-generator v0.20.0/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg= @@ -1470,8 +1474,8 @@ k8s.io/utils 
v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/ k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA= k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= -k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0 h1:jgGTlFYnhF1PM1Ax/lAlxUPE+KfCIXHaathvJg1C3ak= -k8s.io/utils v0.0.0-20240502163921-fe8a2dddb1d0/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A= +k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0= rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA= diff --git a/pkg/daemon/ceph/client/osd.go b/pkg/daemon/ceph/client/osd.go index 63b9341c2e29..8351438825bf 100644 --- a/pkg/daemon/ceph/client/osd.go +++ b/pkg/daemon/ceph/client/osd.go @@ -18,6 +18,7 @@ package client import ( "encoding/json" "fmt" + "math" "strconv" "strings" @@ -42,7 +43,7 @@ type OSDNodeUsage struct { CrushWeight json.Number `json:"crush_weight"` Depth json.Number `json:"depth"` Reweight json.Number `json:"reweight"` - KB json.Number `json:"kb"` + KB json.Number `json:"kb"` // KB is in KiB units UsedKB json.Number `json:"kb_used"` AvailKB json.Number `json:"kb_avail"` Utilization json.Number `json:"utilization"` @@ -220,6 +221,48 @@ func GetOSDUsage(context *clusterd.Context, clusterInfo *ClusterInfo) (*OSDUsage return &osdUsage, nil } +func convertKibibytesToTebibytes(kib string) (float64, error) { + kibFloat, err := strconv.ParseFloat(kib, 64) + if err != nil { + return float64(0), errors.Wrap(err, "failed to convert string to float") + } + 
return kibFloat / float64(1024*1024*1024), nil +} + +func ResizeOsdCrushWeight(actualOSD OSDNodeUsage, ctx *clusterd.Context, clusterInfo *ClusterInfo) (bool, error) { + currentCrushWeight, err := strconv.ParseFloat(actualOSD.CrushWeight.String(), 64) + if err != nil { + return false, errors.Wrapf(err, "failed converting string to float for osd.%d crush weight %q", actualOSD.ID, actualOSD.CrushWeight.String()) + } + // actualOSD.KB is in KiB units + calculatedCrushWeight, err := convertKibibytesToTebibytes(actualOSD.KB.String()) + if err != nil { + return false, errors.Wrapf(err, "failed to convert KiB to TiB for osd.%d crush weight %q", actualOSD.ID, actualOSD.KB.String()) + } + + // do not reweight if the calculated crush weight is 0, is less than or equal to the current crush weight, or the increase is less than 1 percent + if calculatedCrushWeight == float64(0) { + logger.Debugf("osd size is 0 for osd.%d, not resizing the crush weights", actualOSD.ID) + return false, nil + } else if calculatedCrushWeight <= currentCrushWeight { + logger.Debugf("calculatedCrushWeight %f is not greater than currentCrushWeight %f for osd.%d, not resizing the crush weights", calculatedCrushWeight, currentCrushWeight, actualOSD.ID) + return false, nil + } else if math.Abs(((calculatedCrushWeight - currentCrushWeight) / currentCrushWeight)) <= 0.01 { + logger.Debugf("calculatedCrushWeight %f is less than a 1 percent increase over currentCrushWeight %f for osd.%d, not resizing the crush weights", calculatedCrushWeight, currentCrushWeight, actualOSD.ID) + return false, nil + } + + calculatedCrushWeightString := fmt.Sprintf("%f", calculatedCrushWeight) + logger.Infof("updating osd.%d crush weight to %q for cluster in namespace %q", actualOSD.ID, calculatedCrushWeightString, clusterInfo.Namespace) + args := []string{"osd", "crush", "reweight", fmt.Sprintf("osd.%d", actualOSD.ID), calculatedCrushWeightString} + buf, err := NewCephCommand(ctx, clusterInfo, args).Run() + if err != nil { + 
return false, errors.Wrapf(err, "failed to reweight osd.%d for cluster in namespace %q from actual crush weight %f to calculated crush weight %f: %s", actualOSD.ID, clusterInfo.Namespace, currentCrushWeight, calculatedCrushWeight, string(buf)) + } + + return true, nil +} + func SetDeviceClass(context *clusterd.Context, clusterInfo *ClusterInfo, osdID int, deviceClass string) error { // First remove the existing device class args := []string{"osd", "crush", "rm-device-class", fmt.Sprintf("osd.%d", osdID)} diff --git a/pkg/daemon/ceph/client/osd_test.go b/pkg/daemon/ceph/client/osd_test.go index b6a2c77f0a13..3fa09d8a0adc 100644 --- a/pkg/daemon/ceph/client/osd_test.go +++ b/pkg/daemon/ceph/client/osd_test.go @@ -141,6 +141,18 @@ func TestOSDDeviceClasses(t *testing.T) { }) } +func TestConvertKibibytesToTebibytes(t *testing.T) { + kib := "1024" + terabyte, err := convertKibibytesToTebibytes(kib) + assert.NoError(t, err) + assert.Equal(t, float64(9.5367431640625e-07), terabyte) + + kib = "1073741824" + terabyte, err = convertKibibytesToTebibytes(kib) + assert.NoError(t, err) + assert.Equal(t, float64(1), terabyte) +} + func TestOSDOkToStop(t *testing.T) { returnString := "" returnOkResult := true diff --git a/pkg/operator/ceph/cluster/mgr/dashboard_test.go b/pkg/operator/ceph/cluster/mgr/dashboard_test.go index 8a568758f59d..f82ada4d32d3 100644 --- a/pkg/operator/ceph/cluster/mgr/dashboard_test.go +++ b/pkg/operator/ceph/cluster/mgr/dashboard_test.go @@ -29,6 +29,7 @@ import ( exectest "github.com/rook/rook/pkg/util/exec/test" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" + v1 "k8s.io/api/core/v1" kerrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) @@ -158,7 +159,7 @@ func TestStartSecureDashboard(t *testing.T) { svc, err = c.context.Clientset.CoreV1().Services(clusterInfo.Namespace).Get(ctx, "rook-ceph-mgr-dashboard", metav1.GetOptions{}) assert.NotNil(t, err) assert.True(t, 
kerrors.IsNotFound(err)) - assert.Nil(t, svc) + assert.Equal(t, svc, &v1.Service{}) // Set the port to something over 1024 and confirm the port and targetPort are the same c.spec.Dashboard.Enabled = true diff --git a/pkg/operator/ceph/cluster/osd/create_test.go b/pkg/operator/ceph/cluster/osd/create_test.go index 6a5aea9de09d..7312d8b95ae7 100644 --- a/pkg/operator/ceph/cluster/osd/create_test.go +++ b/pkg/operator/ceph/cluster/osd/create_test.go @@ -353,7 +353,7 @@ func Test_startProvisioningOverPVCs(t *testing.T) { Name: "set1", Count: 0, VolumeClaimTemplates: []cephv1.VolumeClaimTemplate{ - newDummyPVC("data", namespace, "10Gi", "gp2"), + newDummyPVC("data", namespace, "10Gi", "gp2-csi"), }, }, }, @@ -378,7 +378,7 @@ func Test_startProvisioningOverPVCs(t *testing.T) { Name: "set1", Count: 2, VolumeClaimTemplates: []cephv1.VolumeClaimTemplate{ - newDummyPVC("data", namespace, "10Gi", "gp2"), + newDummyPVC("data", namespace, "10Gi", "gp2-csi"), }, }, }, diff --git a/pkg/operator/ceph/cluster/osd/osd.go b/pkg/operator/ceph/cluster/osd/osd.go index 24dbc52a1cd3..e9b4c0c6453d 100644 --- a/pkg/operator/ceph/cluster/osd/osd.go +++ b/pkg/operator/ceph/cluster/osd/osd.go @@ -326,13 +326,24 @@ func (c *Cluster) postReconcileUpdateOSDProperties(desiredOSDs map[int]*OSDInfo) } logger.Debugf("post processing osd properties with %d actual osds from ceph osd df and %d existing osds found during reconcile", len(osdUsage.OSDNodes), len(desiredOSDs)) for _, actualOSD := range osdUsage.OSDNodes { - if desiredOSD, ok := desiredOSDs[actualOSD.ID]; ok { - if err := c.updateDeviceClassIfChanged(actualOSD.ID, desiredOSD.DeviceClass, actualOSD.DeviceClass); err != nil { + if c.spec.Storage.AllowOsdCrushWeightUpdate { + _, err := cephclient.ResizeOsdCrushWeight(actualOSD, c.context, c.clusterInfo) + if err != nil { // Log the error and allow other updates to continue - logger.Error(err) + logger.Errorf("failed to resize osd crush weight on cluster in namespace %s: %v", 
c.clusterInfo.Namespace, err) } } + + desiredOSD, ok := desiredOSDs[actualOSD.ID] + if !ok { + continue + } + if err := c.updateDeviceClassIfChanged(actualOSD.ID, desiredOSD.DeviceClass, actualOSD.DeviceClass); err != nil { + // Log the error and allow other updates to continue + logger.Errorf("failed to update device class on cluster in namespace %s: %v", c.clusterInfo.Namespace, err) + } } + return nil } diff --git a/pkg/operator/ceph/cluster/osd/osd_test.go b/pkg/operator/ceph/cluster/osd/osd_test.go index 8d38bcae7c0e..de2e62c270e7 100644 --- a/pkg/operator/ceph/cluster/osd/osd_test.go +++ b/pkg/operator/ceph/cluster/osd/osd_test.go @@ -51,11 +51,20 @@ import ( const ( healthyCephStatus = `{"fsid":"877a47e0-7f6c-435e-891a-76983ab8c509","health":{"checks":{},"status":"HEALTH_OK"},"election_epoch":12,"quorum":[0,1,2],"quorum_names":["a","b","c"],"monmap":{"epoch":3,"fsid":"877a47e0-7f6c-435e-891a-76983ab8c509","modified":"2020-11-02 09:58:23.015313","created":"2020-11-02 09:57:37.719235","min_mon_release":14,"min_mon_release_name":"nautilus","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"172.30.74.42:3300","nonce":0},{"type":"v1","addr":"172.30.74.42:6789","nonce":0}]},"addr":"172.30.74.42:6789/0","public_addr":"172.30.74.42:6789/0"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"172.30.101.61:3300","nonce":0},{"type":"v1","addr":"172.30.101.61:6789","nonce":0}]},"addr":"172.30.101.61:6789/0","public_addr":"172.30.101.61:6789/0"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"172.30.250.55:3300","nonce":0},{"type":"v1","addr":"172.30.250.55:6789","nonce":0}]},"addr":"172.30.250.55:6789/0","public_addr":"172.30.250.55:6789/0"}]},"osdmap":{"osdmap":{"epoch":19,"num_osds":3,"num_up_osds":3,"num_in_osds":3,"num_remapped_pgs":0}},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":96}],"num_p
gs":96,"num_pools":3,"num_objects":79,"data_bytes":81553681,"bytes_used":3255447552,"bytes_avail":1646011994112,"bytes_total":1649267441664,"read_bytes_sec":853,"write_bytes_sec":5118,"read_op_per_sec":1,"write_op_per_sec":0},"fsmap":{"epoch":9,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"ocs-storagecluster-cephfilesystem-b","status":"up:active","gid":14161},{"filesystem_id":1,"rank":0,"name":"ocs-storagecluster-cephfilesystem-a","status":"up:standby-replay","gid":24146}],"up:standby":0},"mgrmap":{"epoch":10,"active_gid":14122,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"10.131.0.28:6800","nonce":1},{"type":"v1","addr":"10.131.0.28:6801","nonce":1}]}}}` unHealthyCephStatus = `{"fsid":"613975f3-3025-4802-9de1-a2280b950e75","health":{"checks":{"OSD_DOWN":{"severity":"HEALTH_WARN","summary":{"message":"1 osds down"}},"OSD_HOST_DOWN":{"severity":"HEALTH_WARN","summary":{"message":"1 host (1 osds) down"}},"PG_AVAILABILITY":{"severity":"HEALTH_WARN","summary":{"message":"Reduced data availability: 101 pgs stale"}},"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"application not enabled on 1 pool(s)"}}},"status":"HEALTH_WARN","overall_status":"HEALTH_WARN"},"election_epoch":12,"quorum":[0,1,2],"quorum_names":["rook-ceph-mon0","rook-ceph-mon2","rook-ceph-mon1"],"monmap":{"epoch":3,"fsid":"613975f3-3025-4802-9de1-a2280b950e75","modified":"2017-08-11 20:13:02.075679","created":"2017-08-11 
20:12:35.314510","features":{"persistent":["kraken","luminous"],"optional":[]},"mons":[{"rank":0,"name":"rook-ceph-mon0","addr":"10.3.0.45:6789/0","public_addr":"10.3.0.45:6789/0"},{"rank":1,"name":"rook-ceph-mon2","addr":"10.3.0.249:6789/0","public_addr":"10.3.0.249:6789/0"},{"rank":2,"name":"rook-ceph-mon1","addr":"10.3.0.252:6789/0","public_addr":"10.3.0.252:6789/0"}]},"osdmap":{"osdmap":{"epoch":17,"num_osds":2,"num_up_osds":1,"num_in_osds":2,"full":false,"nearfull":true,"num_remapped_pgs":0}},"pgmap":{"pgs_by_state":[{"state_name":"stale+active+clean","count":101},{"state_name":"active+clean","count":99}],"num_pgs":200,"num_pools":2,"num_objects":243,"data_bytes":976793635,"bytes_used":13611479040,"bytes_avail":19825307648,"bytes_total":33436786688},"fsmap":{"epoch":1,"by_rank":[]},"mgrmap":{"epoch":3,"active_gid":14111,"active_name":"rook-ceph-mgr0","active_addr":"10.2.73.6:6800/9","available":true,"standbys":[],"modules":["restful","status"],"available_modules":["dashboard","prometheus","restful","status","zabbix"]},"servicemap":{"epoch":1,"modified":"0.000000","services":{}}}` - osdDFResults = ` + // osdDFResults is a JSON representation of the output of `ceph osd df` command + // which has 5 osds with different storage usage + // Testing the resize of crush weight for OSDs based on the utilization + // 1) `ceph osd df`, kb size(in Tib) < crush_weight size -> no reweight + // 2) `ceph osd df`, kb size(in Tib) = 0 -> no reweight + // 3) `ceph osd df`, kb size(in Tib) and crush_weight size has 0.085% difference -> no reweight + // 4) & 5) `ceph osd df`, kb size(in Tib) and crush_weight size has more than 1% difference -> reweight + osdDFResults = ` {"nodes":[ 
{"id":0,"device_class":"hdd","name":"osd.0","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":41943040,"kb_used":27640,"kb_used_data":432,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915400,"utilization":0.065898895263671875,"var":0.99448308946989694,"pgs":9,"status":"up"}, - {"id":1,"device_class":"hdd","name":"osd.1","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":41943040,"kb_used":27960,"kb_used_data":752,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915080,"utilization":0.066661834716796875,"var":1.005996641880547,"pgs":15,"status":"up"}, - {"id":2,"device_class":"hdd","name":"osd.2","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":41943040,"kb_used":27780,"kb_used_data":564,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915260,"utilization":0.066232681274414062,"var":0.99952026864955634,"pgs":8,"status":"up"}], + {"id":1,"device_class":"hdd","name":"osd.1","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":0,"kb_used":27960,"kb_used_data":752,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915080,"utilization":0.066661834716796875,"var":1.005996641880547,"pgs":15,"status":"up"}, + {"id":2,"device_class":"hdd","name":"osd.1","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":42333872,"kb_used":27960,"kb_used_data":752,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915080,"utilization":0.066661834716796875,"var":1.005996641880547,"pgs":15,"status":"up"}, + 
{"id":3,"device_class":"hdd","name":"osd.1","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":9841943040,"kb_used":27960,"kb_used_data":752,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915080,"utilization":0.066661834716796875,"var":1.005996641880547,"pgs":15,"status":"up"}, + {"id":4,"device_class":"hdd","name":"osd.2","type":"osd","type_id":0,"crush_weight":0.039093017578125,"depth":2,"pool_weights":{},"reweight":1,"kb":9991943040,"kb_used":27780,"kb_used_data":564,"kb_used_omap":1,"kb_used_meta":27198,"kb_avail":41915260,"utilization":0.066232681274414062,"var":0.99952026864955634,"pgs":8,"status":"up"}], "stray":[],"summary":{"total_kb":125829120,"total_kb_used":83380,"total_kb_used_data":1748,"total_kb_used_omap":3,"total_kb_used_meta":81596,"total_kb_avail":125745740,"average_utilization":0.066264470418294266,"min_var":0.99448308946989694,"max_var":1.005996641880547,"dev":0.00031227879054369131}}` ) @@ -370,12 +379,14 @@ func TestAddRemoveNode(t *testing.T) { assert.True(t, k8serrors.IsNotFound(err)) } -func TestUpdateDeviceClass(t *testing.T) { +func TestPostReconcileUpdateOSDProperties(t *testing.T) { namespace := "ns" clientset := fake.NewSimpleClientset() removedDeviceClassOSD := "" setDeviceClassOSD := "" setDeviceClass := "" + var crushWeight []string + var osdID []string executor := &exectest.MockExecutor{ MockExecuteCommandWithOutput: func(command string, args ...string) (string, error) { logger.Infof("ExecuteCommandWithOutput: %s %v", command, args) @@ -389,6 +400,9 @@ func TestUpdateDeviceClass(t *testing.T) { } else if args[2] == "set-device-class" { setDeviceClass = args[3] setDeviceClassOSD = args[4] + } else if args[2] == "reweight" { + osdID = append(osdID, args[3]) + crushWeight = append(crushWeight, args[4]) } } } @@ -401,7 +415,6 @@ func TestUpdateDeviceClass(t *testing.T) { Name: "testing", Namespace: namespace, }, - Spec: cephv1.ClusterSpec{Storage: 
cephv1.StorageScopeSpec{AllowDeviceClassUpdate: true}}, } // Objects to track in the fake client. object := []runtime.Object{ @@ -425,11 +438,22 @@ func TestUpdateDeviceClass(t *testing.T) { 1: {ID: 1, DeviceClass: "hdd"}, 2: {ID: 2, DeviceClass: "newclass"}, } - err := c.postReconcileUpdateOSDProperties(desiredOSDs) - assert.Nil(t, err) - assert.Equal(t, "newclass", setDeviceClass) - assert.Equal(t, "osd.2", setDeviceClassOSD) - assert.Equal(t, "osd.2", removedDeviceClassOSD) + t.Run("test device class change", func(t *testing.T) { + c.spec.Storage = cephv1.StorageScopeSpec{AllowDeviceClassUpdate: true} + err := c.postReconcileUpdateOSDProperties(desiredOSDs) + assert.Nil(t, err) + assert.Equal(t, "newclass", setDeviceClass) + assert.Equal(t, "osd.2", setDeviceClassOSD) + assert.Equal(t, "osd.2", removedDeviceClassOSD) + }) + t.Run("test resize Osd Crush Weight", func(t *testing.T) { + c.spec.Storage = cephv1.StorageScopeSpec{AllowOsdCrushWeightUpdate: true} + err := c.postReconcileUpdateOSDProperties(desiredOSDs) + assert.Nil(t, err) + // only osds with more than 1% change in utilization should be reweighted + assert.Equal(t, []string([]string{"osd.3", "osd.4"}), osdID) + assert.Equal(t, []string([]string{"9.166024", "9.305722"}), crushWeight) + }) } func TestAddNodeFailure(t *testing.T) { diff --git a/pkg/operator/ceph/cr_manager.go b/pkg/operator/ceph/cr_manager.go index 0652b16630ff..b3398d681a18 100644 --- a/pkg/operator/ceph/cr_manager.go +++ b/pkg/operator/ceph/cr_manager.go @@ -44,12 +44,14 @@ import ( "github.com/rook/rook/pkg/operator/ceph/object/zonegroup" "github.com/rook/rook/pkg/operator/ceph/pool" "github.com/rook/rook/pkg/operator/ceph/pool/radosnamespace" + "github.com/rook/rook/pkg/operator/k8sutil" "k8s.io/apimachinery/pkg/runtime" cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1" clientgoscheme "k8s.io/client-go/kubernetes/scheme" ctrl "sigs.k8s.io/controller-runtime" "sigs.k8s.io/controller-runtime/pkg/cache" + 
"sigs.k8s.io/controller-runtime/pkg/config" "sigs.k8s.io/controller-runtime/pkg/manager" metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server" ) @@ -141,15 +143,23 @@ func (o *Operator) startCRDManager(context context.Context, mgrErrorCh chan erro } } + metricsBindAddress, err := k8sutil.GetOperatorSetting(context, o.context.Clientset, opcontroller.OperatorSettingConfigMapName, "ROOK_OPERATOR_METRICS_BIND_ADDRESS", "0") + if err != nil { + mgrErrorCh <- errors.Wrap(err, "failed to get configmap value `ROOK_OPERATOR_METRICS_BIND_ADDRESS`.") + return + } + skipNameValidation := true // Set up a manager mgrOpts := manager.Options{ LeaderElection: false, Metrics: metricsserver.Options{ - // BindAddress is the bind address for controller runtime metrics server default is 8080. Since we don't use the - // controller runtime metrics server, we need to set the bind address 0 so that port 8080 is available. - BindAddress: "0", + // BindAddress is the bind address for controller runtime metrics server. Defaulted to "0" which is off. 
+ BindAddress: metricsBindAddress, }, Scheme: scheme, + Controller: config.Controller{ + SkipNameValidation: &skipNameValidation, + }, } if o.config.NamespaceToWatch != "" { diff --git a/pkg/operator/ceph/csi/operator_config.go b/pkg/operator/ceph/csi/operator_config.go index 9acef73dff56..82f27c19a9ba 100644 --- a/pkg/operator/ceph/csi/operator_config.go +++ b/pkg/operator/ceph/csi/operator_config.go @@ -81,9 +81,6 @@ func (r *ReconcileCSI) generateCSIOpConfigSpec(cluster cephv1.CephCluster, opCon } opConfig.Spec = csiopv1a1.OperatorConfigSpec{ - Log: &csiopv1a1.OperatorLogSpec{ - Verbosity: int(CSIParam.LogLevel), - }, DriverSpecDefaults: &csiopv1a1.DriverSpec{ Log: &csiopv1a1.LogSpec{ Verbosity: int(CSIParam.LogLevel), diff --git a/pkg/operator/ceph/csi/spec.go b/pkg/operator/ceph/csi/spec.go index ff8ea2b3ae6e..631b0446f01c 100644 --- a/pkg/operator/ceph/csi/spec.go +++ b/pkg/operator/ceph/csi/spec.go @@ -152,11 +152,11 @@ var ( var ( // image names DefaultCSIPluginImage = "quay.io/cephcsi/cephcsi:v3.12.0" - DefaultRegistrarImage = "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.1" - DefaultProvisionerImage = "registry.k8s.io/sig-storage/csi-provisioner:v4.0.1" - DefaultAttacherImage = "registry.k8s.io/sig-storage/csi-attacher:v4.5.1" - DefaultSnapshotterImage = "registry.k8s.io/sig-storage/csi-snapshotter:v7.0.2" - DefaultResizerImage = "registry.k8s.io/sig-storage/csi-resizer:v1.10.1" + DefaultRegistrarImage = "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1" + DefaultProvisionerImage = "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1" + DefaultAttacherImage = "registry.k8s.io/sig-storage/csi-attacher:v4.6.1" + DefaultSnapshotterImage = "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1" + DefaultResizerImage = "registry.k8s.io/sig-storage/csi-resizer:v1.11.1" DefaultCSIAddonsImage = "quay.io/csiaddons/k8s-sidecar:v0.9.0" // image pull policy diff --git a/pkg/operator/k8sutil/customresource.go 
b/pkg/operator/k8sutil/customresource.go index 1298a7fc0fed..487d088b685c 100644 --- a/pkg/operator/k8sutil/customresource.go +++ b/pkg/operator/k8sutil/customresource.go @@ -54,19 +54,7 @@ func WatchCR(resource CustomResource, namespace string, handlers cache.ResourceE resource.Plural, namespace, fields.Everything()) - _, controller := cache.NewInformer( - source, - - // The object type. - objType, - - // resyncPeriod - // Every resyncPeriod, all resources in the cache will retrigger events. - // Set to 0 to disable the resync. - 0, - - // Your custom resource event handlers. - handlers) + _, controller := cache.NewInformerWithOptions(cache.InformerOptions{ListerWatcher: source, ObjectType: objType, ResyncPeriod: 0, Handler: handlers}) go controller.Run(done) <-done diff --git a/tests/framework/utils/snapshot.go b/tests/framework/utils/snapshot.go index 68dd2023d53d..d0bb73ad78d0 100644 --- a/tests/framework/utils/snapshot.go +++ b/tests/framework/utils/snapshot.go @@ -27,7 +27,7 @@ import ( const ( // snapshotterVersion from which the snapshotcontroller and CRD will be // installed - snapshotterVersion = "v7.0.2" + snapshotterVersion = "v8.0.1" repoURL = "https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter" rbacPath = "deploy/kubernetes/snapshot-controller/rbac-snapshot-controller.yaml" controllerPath = "deploy/kubernetes/snapshot-controller/setup-snapshot-controller.yaml" diff --git a/tests/integration/ceph_upgrade_test.go b/tests/integration/ceph_upgrade_test.go index 2f77b6edc99f..e7dcb1c0f8cc 100644 --- a/tests/integration/ceph_upgrade_test.go +++ b/tests/integration/ceph_upgrade_test.go @@ -380,7 +380,7 @@ func (s *UpgradeSuite) verifyOperatorImage(expectedImage string) { // verify that the operator spec is updated version, err := k8sutil.GetDeploymentImage(context.TODO(), s.k8sh.Clientset, systemNamespace, operatorContainer, operatorContainer) assert.NoError(s.T(), err) - assert.Equal(s.T(), "rook/ceph:"+expectedImage, version) + 
assert.Contains(s.T(), version, "rook/ceph:"+expectedImage) } func (s *UpgradeSuite) verifyRookUpgrade(numOSDs int) {
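
The reweight decision added in `pkg/daemon/ceph/client/osd.go` can be sketched as a standalone program (an illustrative sketch of the same unit conversion and 1% threshold, not the exact upstream code — `shouldReweight` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
)

// convertKibibytesToTebibytes mirrors the helper in the diff above:
// `ceph osd df` reports sizes in KiB, while CRUSH weights are in TiB,
// so divide by 2^30 (1024*1024*1024).
func convertKibibytesToTebibytes(kib string) (float64, error) {
	kibFloat, err := strconv.ParseFloat(kib, 64)
	if err != nil {
		return 0, err
	}
	return kibFloat / float64(1024*1024*1024), nil
}

// shouldReweight reproduces the skip conditions from ResizeOsdCrushWeight:
// no reweight when the computed weight is zero, has not grown, or grew by 1% or less.
func shouldReweight(currentCrushWeight, calculatedCrushWeight float64) bool {
	if calculatedCrushWeight == 0 || calculatedCrushWeight <= currentCrushWeight {
		return false
	}
	return math.Abs((calculatedCrushWeight-currentCrushWeight)/currentCrushWeight) > 0.01
}

func main() {
	// osd.3 from the test fixture: kb=9841943040 KiB against a 0.039093 TiB CRUSH weight
	w, _ := convertKibibytesToTebibytes("9841943040")
	fmt.Printf("calculated weight: %f, reweight: %v\n", w, shouldReweight(0.039093017578125, w))
	// prints: calculated weight: 9.166024, reweight: true
}
```

The 9.166024 value matches the `crushWeight` assertion in `TestPostReconcileUpdateOSDProperties` above, since `fmt.Sprintf("%f", ...)` formats with six decimal places.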