Resync from upstream release-1.12 to downstream release-4.14 #509

Merged · 17 commits · Aug 16, 2023
11 changes: 0 additions & 11 deletions — `.github/workflows/build.yml`

```diff
@@ -25,11 +25,6 @@ jobs:
       with:
         fetch-depth: 0

-      - name: consider debugging
-        uses: ./.github/workflows/tmate_debug
-        with:
-          use-tmate: ${{ secrets.USE_TMATE }}
-
       - uses: actions/setup-go@v4
         with:
           go-version: "1.20"
@@ -76,7 +71,6 @@ jobs:
       - name: validate gen-rbac
         run: tests/scripts/validate_modified_files.sh gen-rbac

-
   linux-build-all:
     runs-on: ubuntu-20.04
     if: "!contains(github.event.pull_request.labels.*.name, 'skip-ci')"
@@ -90,11 +84,6 @@ jobs:
       with:
         fetch-depth: 0

-      - name: consider debugging
-        uses: ./.github/workflows/tmate_debug
-        with:
-          use-tmate: ${{ secrets.USE_TMATE }}
-
       - name: setup golang ${{ matrix.go-version }}
         uses: actions/setup-go@v4
         with:
```
2 changes: 1 addition & 1 deletion — `.github/workflows/canary-integration-test.yml`

```diff
@@ -292,6 +292,7 @@ jobs:
         run: |
           export ALLOW_LOOP_DEVICES=true
           tests/scripts/github-action-helper.sh deploy_cluster loop
+          tests/scripts/github-action-helper.sh create_operator_toolbox

       - name: wait for prepare pod
         run: tests/scripts/github-action-helper.sh wait_for_prepare_pod
@@ -301,7 +302,6 @@

       - name: test toolbox-operator-image pod
         run: |
-          tests/scripts/github-action-helper.sh create_operator_toolbox
           # waiting for toolbox operator image pod to get ready
           kubectl -n rook-ceph wait --for=condition=ready pod -l app=rook-ceph-tools-operator-image --timeout=180s
```
5 changes: 0 additions & 5 deletions — `.github/workflows/push-build.yaml`

```diff
@@ -25,11 +25,6 @@ jobs:
       with:
         fetch-depth: 0

-      - name: consider debugging
-        uses: ./.github/workflows/tmate_debug
-        with:
-          use-tmate: ${{ secrets.USE_TMATE }}
-
       - uses: actions/setup-go@v4
         with:
           go-version: "1.20"
```
4 changes: 4 additions & 0 deletions — `Documentation/Getting-Started/ceph-openshift.md`

````diff
@@ -21,6 +21,10 @@ oc create -f operator-openshift.yaml
 oc create -f cluster.yaml
 ```

+## Helm Installation
+
+Configuration required for OpenShift, such as the SecurityContextConstraints, is created automatically by the Helm charts. See the [Rook Helm Charts](../Helm-Charts/helm-charts.md).
+
 ## Rook Privileges

 To orchestrate the storage platform, Rook requires the following access in the cluster:
````
2 changes: 1 addition & 1 deletion — `Documentation/Getting-Started/quickstart.md`

````diff
@@ -34,7 +34,7 @@ To configure the Ceph storage cluster, at least one of these local storage optio
 A simple Rook cluster is created for Kubernetes with the following `kubectl` commands and [example manifests](https://github.com/rook/rook/blob/master/deploy/examples).

 ```console
-$ git clone --single-branch --branch v1.12.1 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.12.2 https://github.com/rook/rook.git
 cd rook/deploy/examples
 kubectl create -f crds.yaml -f common.yaml -f operator.yaml
 kubectl create -f cluster.yaml
````
````diff
@@ -44,7 +44,7 @@ There are two sources for metrics collection:
 From the root of your locally cloned Rook repo, go the monitoring directory:

 ```console
-$ git clone --single-branch --branch v1.12.1 https://github.com/rook/rook.git
+$ git clone --single-branch --branch v1.12.2 https://github.com/rook/rook.git
 cd rook/deploy/examples/monitoring
 ```
````
30 changes: 15 additions & 15 deletions — `Documentation/Upgrade/rook-upgrade.md`

````diff
@@ -68,11 +68,11 @@ With this upgrade guide, there are a few notes to consider:

 Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to
 another are as simple as updating the common resources and the image of the Rook operator. For
-example, when Rook v1.12.1 is released, the process of updating from v1.12.0 is as simple as running
+example, when Rook v1.12.2 is released, the process of updating from v1.12.0 is as simple as running
 the following:

 ```console
-git clone --single-branch --depth=1 --branch v1.12.1 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.12.2 https://github.com/rook/rook.git
 cd rook/deploy/examples
 ```

@@ -84,7 +84,7 @@ Then, apply the latest changes from v1.12, and update the Rook Operator image.

 ```console
 kubectl apply -f common.yaml -f crds.yaml
-kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.12.1
+kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.12.2
 ```

 As exemplified above, it is a good practice to update Rook common resources from the example
@@ -117,7 +117,7 @@ In order to successfully upgrade a Rook cluster, the following prerequisites mus
 ## Rook Operator Upgrade

 The examples given in this guide upgrade a live Rook cluster running `v1.11.10` to
-the version `v1.12.1`. This upgrade should work from any official patch release of Rook v1.11 to any
+the version `v1.12.2`. This upgrade should work from any official patch release of Rook v1.11 to any
 official patch release of v1.12.

 Let's get started!
@@ -144,7 +144,7 @@ by the Operator. Also update the Custom Resource Definitions (CRDs).
 Get the latest common resources manifests that contain the latest changes.

 ```console
-git clone --single-branch --depth=1 --branch v1.12.1 https://github.com/rook/rook.git
+git clone --single-branch --depth=1 --branch v1.12.2 https://github.com/rook/rook.git
 cd rook/deploy/examples
 ```

@@ -183,7 +183,7 @@ The largest portion of the upgrade is triggered when the operator's image is upd
 When the operator is updated, it will proceed to update all of the Ceph daemons.

 ```console
-kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.12.1
+kubectl -n $ROOK_OPERATOR_NAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.12.2
 ```

 ### **3. Update Ceph CSI**
@@ -213,16 +213,16 @@ watch --exec kubectl -n $ROOK_CLUSTER_NAMESPACE get deployments -l rook_cluster=
 ```

 As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1`
-availability and `rook-version=v1.12.1`, the Ceph cluster's core components are fully updated.
+availability and `rook-version=v1.12.2`, the Ceph cluster's core components are fully updated.

 ```console
 Every 2.0s: kubectl -n rook-ceph get deployment -o j...

-rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.12.1
-rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.12.1
-rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.12.1
-rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.12.1
-rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.12.1
+rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.12.2
+rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.12.2
+rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.12.2
+rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.12.2
+rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.12.2
 rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.11.10
 rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.11.10
 ```

@@ -234,13 +234,13 @@ An easy check to see if the upgrade is totally finished is to check that there i
 # kubectl -n $ROOK_CLUSTER_NAMESPACE get deployment -l rook_cluster=$ROOK_CLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{"rook-version="}{.metadata.labels.rook-version}{"\n"}{end}' | sort | uniq
 This cluster is not yet finished:
   rook-version=v1.11.10
-  rook-version=v1.12.1
+  rook-version=v1.12.2
 This cluster is finished:
-  rook-version=v1.12.1
+  rook-version=v1.12.2
 ```

 ### **5. Verify the updated cluster**

-At this point, the Rook operator should be running version `rook/ceph:v1.12.1`.
+At this point, the Rook operator should be running version `rook/ceph:v1.12.2`.

 Verify the CephCluster health using the [health verification doc](health-verification.md).
````
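The `sort | uniq` check in the upgrade guide reduces to one condition: the upgrade is finished when only a single distinct `rook-version` label remains across the deployments. A minimal sketch of that logic (hypothetical helper for illustration, not part of Rook):

```python
def upgrade_finished(deployment_labels):
    """Return True when every deployment reports the same rook-version label."""
    versions = {labels.get("rook-version") for labels in deployment_labels}
    return len(versions) == 1


# Midway through an upgrade: two versions coexist, so not finished.
mid_upgrade = [
    {"rook-version": "v1.11.10"},
    {"rook-version": "v1.12.2"},
]
# Finished: one version everywhere.
done = [
    {"rook-version": "v1.12.2"},
    {"rook-version": "v1.12.2"},
]
print(upgrade_finished(mid_upgrade))  # False
print(upgrade_finished(done))         # True
```

The same idea drives the `kubectl ... | sort | uniq` one-liner: piping the label values through `uniq` leaves one line exactly when the set of versions has size one.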
1 change: 1 addition & 0 deletions — `deploy/charts/rook-ceph-cluster/prometheus/localrules.yaml`

```diff
@@ -89,6 +89,7 @@ groups:
           description: "{{ $value | humanize }}% or {{ with query \"count(ceph_osd_up == 0)\" }}{{ . | first | value }}{{ end }} of {{ with query \"count(ceph_osd_up)\" }}{{ . | first | value }}{{ end }} OSDs are down (>= 10%). The following OSDs are down: {{- range query \"(ceph_osd_up * on(ceph_daemon) group_left(hostname) ceph_osd_metadata) == 0\" }} - {{ .Labels.ceph_daemon }} on {{ .Labels.hostname }} {{- end }}"
           summary: "More than 10% of OSDs are down"
         expr: "count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10"
+        for: "5m"
         labels:
           oid: "1.3.6.1.4.1.50495.1.2.1.4.1"
           severity: "critical"
```
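The added `for: "5m"` makes the alert fire only after the condition has held continuously for five minutes, rather than on a single scrape. The condition itself is the PromQL `count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= 10`; the threshold arithmetic can be sketched outside Prometheus like this (illustrative only, not Rook or Prometheus code):

```python
def osd_down_alert(osd_up_states, threshold_pct=10.0):
    """Mirror count(ceph_osd_up == 0) / count(ceph_osd_up) * 100 >= threshold."""
    down = sum(1 for up in osd_up_states if up == 0)
    return down / len(osd_up_states) * 100 >= threshold_pct


# 1 of 10 OSDs down -> exactly 10%, which meets the >= 10 threshold.
print(osd_down_alert([1, 1, 1, 1, 1, 1, 1, 1, 1, 0]))  # True
# 1 of 20 OSDs down -> 5%, below the threshold.
print(osd_down_alert([1] * 19 + [0]))                  # False
```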
New file: @@ -0,0 +1,44 @@

```yaml
# scc for the Rook and Ceph daemons
# for creating cluster in openshift
{{- if .Capabilities.APIVersions.Has "security.openshift.io/v1" }}
kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: rook-cluster-{{ .Release.Namespace }}
allowPrivilegedContainer: true
allowHostDirVolumePlugin: true
allowHostPID: false
# set to true if running rook with host networking enabled
allowHostNetwork: false
# set to true if running rook with the provider as host
allowHostPorts: false
priority:
allowedCapabilities: ["MKNOD"]
allowHostIPC: true
readOnlyRootFilesystem: false
# drop all default privileges
requiredDropCapabilities: ["All"]
defaultAddCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - hostPath
  - persistentVolumeClaim
  - projected
  - secret
users:
  # A user needs to be added for each rook service account.
  - system:serviceaccount:{{ .Release.Namespace }}:default
  - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-mgr
  - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-osd
  - system:serviceaccount:{{ .Release.Namespace }}:rook-ceph-rgw
{{- end }}
```
5 changes: 5 additions & 0 deletions — `deploy/charts/rook-ceph/templates/deployment.yaml`

```diff
@@ -79,8 +79,13 @@ spec:
             value: {{ .Values.discover.resources }}
           {{- end }}
         {{- end }}
+        {{- if .Capabilities.APIVersions.Has "security.openshift.io/v1" }}
+        - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
+          value: "true"
+        {{- else }}
         - name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
           value: "{{ .Values.hostpathRequiresPrivileged }}"
+        {{- end }}
         - name: ROOK_DISABLE_DEVICE_HOTPLUG
           value: "{{ .Values.disableDeviceHotplug }}"
         - name: DISCOVER_DAEMON_UDEV_BLACKLIST
```
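This template change pins `ROOK_HOSTPATH_REQUIRES_PRIVILEGED` to `"true"` whenever the chart is rendered against a cluster exposing the `security.openshift.io/v1` API group, and otherwise falls back to the `hostpathRequiresPrivileged` chart value. Consuming such a string-valued boolean env var on the operator side might look like the following sketch (hypothetical helper for illustration; Rook's actual parsing may differ):

```python
import os


def hostpath_requires_privileged() -> bool:
    """Parse the boolean-ish env var set by the Helm template; default is False."""
    raw = os.environ.get("ROOK_HOSTPATH_REQUIRES_PRIVILEGED", "false")
    return raw.strip().lower() == "true"


# Simulate the OpenShift branch of the template, which sets the value to "true".
os.environ["ROOK_HOSTPATH_REQUIRES_PRIVILEGED"] = "true"
print(hostpath_requires_privileged())  # True
```

Quoting the value in the template matters: Kubernetes env vars are always strings, so the consumer must normalize `"true"`/`"false"` itself rather than rely on YAML booleans.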