Merge pull request #666 from red-hat-storage/sync_ds--master
Syncing latest changes from master for rook
travisn authored Jun 11, 2024
2 parents f13acdc + 95de065 commit 0ad5eca
Showing 34 changed files with 2,114 additions and 1,802 deletions.
3 changes: 3 additions & 0 deletions Documentation/CRDs/Cluster/ceph-cluster-crd.md
@@ -86,6 +86,9 @@ For more details on the mons and when to choose a number other than `3`, see the
* For non-PVCs: `placement.all` and `placement.osd`
* For PVCs: `placement.all` and inside the storageClassDeviceSets from the `placement` or `preparePlacement`
* `flappingRestartIntervalHours`: Defines the time for which an OSD pod will sleep before restarting, if it stopped due to flapping. Flapping occurs when OSDs are marked `down` by Ceph more than 5 times in 600 seconds. The OSDs will stay down when flapping since they likely have a bad disk or other issue that needs investigation. If the issue with the OSD is fixed manually, the OSD pod can be manually restarted. The sleep is disabled if this interval is set to 0.
* `fullRatio`: The ratio at which Ceph should block IO if the OSDs are too full. The default is 0.95.
* `backfillFullRatio`: The ratio at which Ceph should stop backfilling data if the OSDs are too full. The default is 0.90.
* `nearFullRatio`: The ratio at which Ceph should raise a health warning if the cluster is almost full. The default is 0.85.
* `disruptionManagement`: The section for configuring management of daemon disruptions
* `managePodBudgets`: if `true`, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will block eviction of OSDs by default and unblock them safely when drains are detected.
* `osdMaintenanceTimeout`: is a duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the default DOWN/OUT interval) when it is draining. The default value is `30` minutes.
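
The storage safety settings above can be combined in the `storage` section of a CephCluster spec. A minimal sketch (the values shown are illustrative, not recommendations):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    # Restart an OSD pod stopped for flapping after a 24-hour sleep (0 disables the sleep).
    flappingRestartIntervalHours: 24
    # Block IO once OSDs are 95% full (the default).
    fullRatio: 0.95
    # Stop backfilling data once OSDs are 90% full (the default).
    backfillFullRatio: 0.90
    # Raise a health warning once the cluster is 85% full (the default).
    nearFullRatio: 0.85
```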
8 changes: 8 additions & 0 deletions Documentation/CRDs/Cluster/network-providers.md
@@ -73,6 +73,14 @@ Ceph daemons will use any network available on the host for communication. To re
only specific host interfaces or networks, use `addressRanges` to select the network
CIDRs Ceph will bind to on the host.
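
For example, a sketch of how `addressRanges` might look in the CephCluster network spec (the CIDRs are placeholders, and this assumes the cluster uses host networking):

```yaml
network:
  provider: host
  addressRanges:
    # Bind Ceph daemons only to host addresses in these CIDRs.
    public:
      - "192.168.100.0/24"
    cluster:
      - "192.168.200.0/24"
```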

If the Ceph mons are expected to bind to a public network that is different from the IP address
assigned to the K8s node where the mon is running, the IP address for the mon can be set by
adding an annotation to the node:

```yaml
network.rook.io/mon-ip: <IPAddress>
```
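
For instance, assuming a node named `worker-1` whose mon should listen on `192.168.100.5` (both placeholders), the annotation could be expressed in the Node manifest:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: worker-1
  annotations:
    network.rook.io/mon-ip: "192.168.100.5"
```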

If the host networking setting is changed in a cluster where mons are already running, the existing mons will
remain running with the same network settings with which they were created. To complete the conversion
to or from host networking after you update this setting, you will need to
36 changes: 36 additions & 0 deletions Documentation/CRDs/specification.md
@@ -12170,6 +12170,42 @@ User needs to manually restart the OSD pod if they manage to fix the underlying
The sleep will be disabled if this interval is set to 0.</p>
</td>
</tr>
<tr>
<td>
<code>fullRatio</code><br/>
<em>
float64
</em>
</td>
<td>
<em>(Optional)</em>
<p>FullRatio is the ratio at which the cluster is considered full and ceph will stop accepting writes. Default is 0.95.</p>
</td>
</tr>
<tr>
<td>
<code>nearFullRatio</code><br/>
<em>
float64
</em>
</td>
<td>
<em>(Optional)</em>
<p>NearFullRatio is the ratio at which the cluster is considered nearly full and will raise a ceph health warning. Default is 0.85.</p>
</td>
</tr>
<tr>
<td>
<code>backfillFullRatio</code><br/>
<em>
float64
</em>
</td>
<td>
<em>(Optional)</em>
<p>BackfillFullRatio is the ratio at which the cluster is too full for backfill. Backfill will be disabled if above this threshold. Default is 0.90.</p>
</td>
</tr>
</tbody>
</table>
<h3 id="ceph.rook.io/v1.StoreType">StoreType
26 changes: 10 additions & 16 deletions ROADMAP.md
@@ -8,26 +8,20 @@ We hope that the items listed below will inspire further engagement from the com
Any dates listed below and the specific issues that will ship in a given milestone are subject to change but should give a general idea of what we are planning.
See the [GitHub project boards](https://github.com/rook/rook/projects) for the most up-to-date issues and their status.

## Rook Ceph 1.14
## Rook Ceph 1.15

The following high level features are targeted for Rook v1.14 (April 2024). For more detailed project tracking see the [v1.14 board](https://github.com/rook/rook/projects/31).
The following high level features are targeted for Rook v1.15 (July 2024). For more detailed project tracking see the [v1.15 board](https://github.com/rook/rook/projects/32).

* Support for Ceph Squid (v19)
* Allow setting the application name on a CephBlockPool [#13744](https://github.com/rook/rook/pull/13744)
* Pool sharing for multiple object stores [#11411](https://github.com/rook/rook/issues/11411)
* DNS subdomain style access to RGW buckets [#4780](https://github.com/rook/rook/issues/4780)
* Replace a single OSD when a metadataDevice is configured with multiple OSDs [#13240](https://github.com/rook/rook/issues/13240)
* Create a default service account for all Ceph daemons [#13362](https://github.com/rook/rook/pull/13362)
* Enable the rook orchestrator mgr module by default for improved dashboard integration [#13760](https://github.com/rook/rook/issues/13760)
* Option to run all components on the host network [#13571](https://github.com/rook/rook/issues/13571)
* Multus-enabled clusters to begin "holder" pod deprecation [#13055](https://github.com/rook/rook/issues/13055)
* Separate CSI image repository and tag for all images in the helm chart [#13585](https://github.com/rook/rook/issues/13585)
* Ceph-CSI [v3.11](https://github.com/ceph/ceph-csi/issues?q=is%3Aopen+is%3Aissue+milestone%3Arelease-v3.11.0)
* Add build support for Go 1.22 [#13738](https://github.com/rook/rook/pull/13738)
* Add topology based provisioning for external clusters [#13821](https://github.com/rook/rook/pull/13821)
* Multus-enabled clusters will potentially remove "holder" pods [#14289](https://github.com/rook/rook/issues/14289)
* Key rotation for Ceph object store users [#11563](https://github.com/rook/rook/issues/11563)
* CSI Driver
* Integrate the new Ceph-CSI operator [#14260](https://github.com/rook/rook/issues/14260)
* Ceph-CSI [v3.12](https://github.com/ceph/ceph-csi/issues?q=is%3Aopen+is%3Aissue+milestone%3Arelease-v3.12.0)
* Support log rotation for the Ceph-CSI pods [#12809](https://github.com/rook/rook/issues/12809)

## Kubectl Plugin

Features are planned in the 1.14 time frame for the [Kubectl Plugin](https://github.com/rook/kubectl-rook-ceph).
Features are planned for the [Kubectl Plugin](https://github.com/rook/kubectl-rook-ceph), though without a committed timeline.
* Collect details to help troubleshoot the csi driver [#69](https://github.com/rook/kubectl-rook-ceph/issues/69)
* Command to flatten an RBD image [#222](https://github.com/rook/kubectl-rook-ceph/issues/222)
* Support `radosgw-admin` commands from the plugin [#253](https://github.com/rook/kubectl-rook-ceph/issues/253)
15 changes: 15 additions & 0 deletions build/csv/ceph/ceph.rook.io_cephclusters.yaml
@@ -1502,6 +1502,11 @@ spec:
storage:
nullable: true
properties:
backfillFullRatio:
maximum: 1
minimum: 0
nullable: true
type: number
config:
additionalProperties:
type: string
@@ -1531,6 +1536,16 @@ spec:
x-kubernetes-preserve-unknown-fields: true
flappingRestartIntervalHours:
type: integer
fullRatio:
maximum: 1
minimum: 0
nullable: true
type: number
nearFullRatio:
maximum: 1
minimum: 0
nullable: true
type: number
nodes:
items:
properties:
