release v1.5.0 docs (#128)
Signed-off-by: liheng.zms <[email protected]>
zmberg authored Sep 12, 2023
1 parent 1f03b77 commit c33a011
Showing 86 changed files with 14,603 additions and 846 deletions.
7 changes: 5 additions & 2 deletions docs/installation.md
@@ -16,7 +16,7 @@ $ helm repo add openkruise https://openkruise.github.io/charts/
$ helm repo update

# Install the latest version.
- $ helm install kruise openkruise/kruise --version 1.4.0
+ $ helm install kruise openkruise/kruise --version 1.5.0
```
**Note:** [Changelog](https://github.com/openkruise/kruise/blob/master/CHANGELOG.md).

@@ -30,7 +30,7 @@ $ helm repo add openkruise https://openkruise.github.io/charts/
$ helm repo update

# Upgrade to the latest version.
- $ helm upgrade kruise openkruise/kruise --version 1.4.0 [--force]
+ $ helm upgrade kruise openkruise/kruise --version 1.5.0 [--force]
```

Note that:
@@ -122,6 +122,9 @@ Feature-gate controls some influential features in Kruise:
| `SidecarSetPatchPodMetadataDefaultsAllowed` | Allow SidecarSet to patch any annotations to the Pod object | `false` | Annotations cannot be patched arbitrarily and must be configured via SidecarSet_PatchPodMetadata_WhiteList |
| `SidecarTerminator` | Enables SidecarTerminator to stop sidecar containers when all main containers have exited | `false` | SidecarTerminator disabled |
| `CloneSetEventHandlerOptimization` | Enables optimization for the cloneset-controller to reduce the queuing frequency caused by pod updates | `false` | Optimization for the cloneset-controller to reduce the queuing frequency caused by pod updates disabled |
| `ImagePullJobGate` | Enables ImagePullJob to pre-download images | `false` | ImagePullJob disabled |
| `ResourceDistributionGate` | Enables ResourceDistribution to distribute ConfigMap or Secret resources | `false` | ResourceDistribution disabled |
| `DeletionProtectionForCRDCascadingGate` | Enables DeletionProtection for CRD cascading deletion | `false` | DeletionProtection for CRD cascading deletion disabled |

If you want to configure a feature-gate, just set the parameter when installing or upgrading, such as:
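For instance, a sketch of enabling gates through Helm (the gate names below are only illustrations; any gate from the table above can be set the same way, and multiple gates are joined with a `\,`-escaped comma when passed via `--set`):

```bash
# Enable a single feature-gate while installing the chart
$ helm install kruise openkruise/kruise --version 1.5.0 --set featureGates="ImagePullJobGate=true"

# Enable several gates at once while upgrading
$ helm upgrade kruise openkruise/kruise --version 1.5.0 --set featureGates="ImagePullJobGate=true\,ResourceDistributionGate=true"
```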

40 changes: 40 additions & 0 deletions docs/user-manuals/imagepulljob.md
@@ -13,6 +13,14 @@ Users can create an ImagePullJob to declare an image should be downloaded on whi
Note that NodeImage is quite **a low-level API**. You should only use it when you need to pull an image on a specific Node.
Otherwise, you should **use ImagePullJob to pull an image on a batch of Nodes.**


## Feature-gate
**Since Kruise v1.5.0**, the ImagePullJob/ImageListPullJob feature is turned off by default to reduce the privileges required by a default installation. You can turn it on by setting the `ImagePullJobGate` feature-gate.

```bash
$ helm install/upgrade kruise https://... --set featureGates="ImagePullJobGate=true"
```

## ImagePullJob (high-level)

ImagePullJob is a **namespaced-scope** resource.
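For reference, a minimal ImagePullJob sketch (the image, selector labels, and policy values are illustrative; the field shapes mirror the ImageListPullJob example in the next section):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: ImagePullJob
metadata:
  name: job-with-always
spec:
  image: nginx:1.9.1   # [required] the single image this job pre-downloads
  parallelism: 10      # [optional] maximal number of Nodes that pull the image at the same time, defaults to 1
  selector:            # [optional] the names or label selector to assign Nodes (only one of them can be set)
    matchLabels:
      node-type: xxx
  completionPolicy:
    type: Always                  # [optional] defaults to Always
    activeDeadlineSeconds: 1200   # [optional] no default, only for the Always type
    ttlSecondsAfterFinished: 300  # [optional] no default, only for the Always type
  pullPolicy:                     # [optional] defaults to backoffLimit=3, timeoutSeconds=600
    backoffLimit: 3
    timeoutSeconds: 300
```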
@@ -97,6 +105,38 @@ spec:
io.kubernetes.image.app: "foo"
```

## ImageListPullJob

**FEATURE STATE:** Kruise v1.5.0

An ImagePullJob only supports pre-downloading a single image. You can create multiple ImagePullJobs to download multiple images, or use an ImageListPullJob to pre-download multiple images in a single job, as follows:

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: ImageListPullJob
metadata:
  name: job-with-always
spec:
  images:
  - nginx:1.9.1     # [required] image to pull
  - busybox:1.29.2
  - ...
  parallelism: 10   # [optional] the maximal number of Nodes that pull this image at the same time, defaults to 1
  selector:         # [optional] the names or label selector to assign Nodes (only one of them can be set)
    names:
    - node-1
    - node-2
    matchLabels:
      node-type: xxx
  completionPolicy:
    type: Always                  # [optional] defaults to Always
    activeDeadlineSeconds: 1200   # [optional] no default, only work for Always type
    ttlSecondsAfterFinished: 300  # [optional] no default, only work for Always type
  pullPolicy:                     # [optional] defaults to backoffLimit=3, timeoutSeconds=600
    backoffLimit: 3
    timeoutSeconds: 300
```
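Once the job is created, progress can be inspected through its status; an ImageListPullJob creates one ImagePullJob per listed image under the hood, so both resources can be queried (a sketch; the printed columns depend on the CRD printer columns):

```bash
# Overall progress of the list job
$ kubectl get imagelistpulljob job-with-always

# The per-image ImagePullJobs it creates
$ kubectl get imagepulljob
```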

## NodeImage (low-level)

NodeImage is a **cluster-scope** resource.
25 changes: 16 additions & 9 deletions docs/user-manuals/resourcedistribution.md
@@ -2,15 +2,22 @@
title: ResourceDistribution
---

In scenarios where namespace-scoped resources such as Secret and ConfigMap need to be distributed or synchronized to different namespaces, native Kubernetes currently only supports manual, one-by-one distribution and synchronization by users, which is very inconvenient.

Typical examples:
- When users want to use the imagePullSecrets capability of SidecarSet, they must repeatedly create corresponding Secrets in relevant namespaces, and ensure the correctness and consistency of these Secret configurations;
- When users want to configure some common environment variables, they probably need to distribute ConfigMaps to multiple namespaces, and the subsequent modifications of these ConfigMaps might require synchronization among these namespaces.

Therefore, for these scenarios that require resource distribution and **continuous synchronization across namespaces**, we provide a tool, namely **ResourceDistribution**, to do this automatically.

Currently, ResourceDistribution supports two kinds of resources --- **Secret & ConfigMap**.

## Feature-gate
**Since Kruise v1.5.0**, the ResourceDistribution feature is turned off by default to reduce the permissions required by a default installation. If you want to turn it on, set the *ResourceDistributionGate* feature-gate.

```bash
$ helm install/upgrade kruise https://... --set featureGates="ResourceDistributionGate=true"
```

## API Description

@@ -30,7 +37,7 @@ spec:
```
### Resource Field
The `resource` field must be a **complete** and **correct** resource description in YAML style.

An example of a correct configuration of `resource` is as follows:
```yaml
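# A sketch reconstructing such a `resource` description; a ConfigMap with illustrative
# data is assumed here, but any complete Secret/ConfigMap manifest works the same way.
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
```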
@@ -111,7 +118,7 @@ In the above example, the target namespaces of the ResourceDistribution will con

## A Complete Use Case
### Distribute Resource
When the user correctly configures the `resource` and `targets` fields, the ResourceDistribution controller will execute the distribution, and the resource will be automatically created in each target namespace.

A complete configuration is as follows:
```yaml
@@ -149,7 +156,7 @@ spec:
```
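A minimal sketch of such a configuration (the `apps.kruise.io/v1alpha1` API is assumed and the namespace names are illustrative):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: ResourceDistribution
metadata:
  name: sample
spec:
  resource:
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: game-demo
    data:
      game.properties: |
        enemy.types=aliens,monsters
  targets:
    excludedNamespaces:
      list:
      - name: ns-3
    includedNamespaces:
      list:
      - name: ns-1
      - name: ns-4
    namespaceLabelSelector:
      matchLabels:
        group: test
```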

### Tracking Failures After The Distribution
Of course, resource distribution may not always be successful.

In the process of distribution, various errors may occur. To this end, we record some conditions of distribution failures in the `status` field so that users can track them.

@@ -208,7 +215,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
  annotations:
    kruise.io/resourcedistribution.resource.from: sample
    kruise.io/resourcedistribution.resource.distributed.timestamp: 2021-09-06 08:44:52.7861421 +0000 UTC m=+12896.810364601
    kruise.io/resourcedistribution.resource.hashcode: 0821a13321b2c76b5bd63341a0d97fb46bfdbb2f914e2ad6b613d10632fa4b63
@@ -226,4 +233,4 @@ In particular, we **DO NOT** recommend that users bypass the ResourceDistributio

## Kustomize ResourceDistribution Generator

ResourceDistribution Generator is a third-party kustomize plug-in, similar to kustomize's ConfigMap and Secret generators. With this plug-in, you can read files as data content to create a ResourceDistribution. Refer to [this page](/docs/next/cli-tool/kustomize-plugin) for details.
71 changes: 71 additions & 0 deletions docs/user-manuals/sidecarset.md
@@ -392,6 +392,45 @@ spec:
```
**Note: If you use Scatter, it is recommended to set only a single key-value pair for scatter, which is easier to understand.**

#### priority
**FEATURE STATE:** Kruise v1.5.0

This strategy defines rules for calculating the priority of updating Pods. All update candidates are evaluated against the priority terms.
`priority` can be calculated either by weight or by order.

- `weight`: Priority is determined by the sum of weights for the terms whose selector matches the Pod. For example:

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
spec:
  # ...
  updateStrategy:
    priorityStrategy:
      weightPriority:
      - weight: 50
        matchSelector:
          matchLabels:
            test-key: foo
      - weight: 30
        matchSelector:
          matchLabels:
            test-key: bar
```

- `order`: Priority is determined by the value of the orderKey. The update candidates are sorted by the integer part of the value string, for example 5 in the string "5" and 10 in the string "sts-10".

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: SidecarSet
spec:
  # ...
  updateStrategy:
    priorityStrategy:
      orderPriority:
      - orderedKey: some-label-key
```

### Hot Upgrade Sidecar
**FEATURE STATE:** Kruise v0.9.0

@@ -555,3 +594,35 @@ Status:
  Ready Pods:         8   # number of matched Pods whose pod.status.condition.Ready = true
  Updated Ready Pods: 3   # number of Pods that are both updated and ready
```

## How to troubleshoot SidecarSet in-place upgrade blocking

Upstream Kubernetes only allows patching the image fields of pod.spec, so SidecarSet can only upgrade sidecar containers in place for **image field** changes.
SidecarSet will not trigger an in-place upgrade if non-image fields are changed, e.g. Env, Resources, etc.

To make it easier to locate such issues, **since v1.5.0** Kruise reports this information in the **pod condition** and a **SidecarSet event**, as follows:

```
# kubectl describe sidecarsets test-sidecarset
Status:
  Collision Count:     0
  Latest Revision:     test-sidecarset-5f6d95f777
  Matched Pods:        1
  Observed Generation: 2
  Ready Pods:          1
  Updated Pods:        0
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  NotUpgradablePods  63s   sidecarset-controller  SidecarSet in-place update detected 1 not upgradable pod(s) in this round, will skip them

# kubectl get pods test-pod -oyaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-09-09T11:10:17Z"
    message: '{"test-sidecarset":false}'
    reason: UpdateImmutableField
    status: "False"
    type: SidecarSetUpgradable
```
79 changes: 79 additions & 0 deletions docs/user-manuals/uniteddeployment.md
@@ -83,6 +83,85 @@ spec:
...
```

## Customize pod configuration of subset
**FEATURE STATE:** Kruise v1.5.0

Since v1.5.0, you can customize subset pod spec fields other than nodeSelectorTerm and tolerations, e.g. env and resources.

**Note:** it is not recommended to customize the subset image, since this may interfere with the update functionality.

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: sample-ud
spec:
  replicas: 6
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: sample
  template:
    # statefulSetTemplate or advancedStatefulSetTemplate or cloneSetTemplate or deploymentTemplate
    statefulSetTemplate:
      ...
  topology:
    subsets:
    - name: subset-a
      ...
      # patch container resources, env:
      patch:
        spec:
          containers:
          - name: main
            resources:
              limits:
                cpu: "2"
                memory: 800Mi
            env:
            - name: subset
              value: subset-a
    - name: subset-b
      ...
      # patch container resources, env:
      patch:
        spec:
          containers:
          - name: main
            resources:
              limits:
                cpu: "2"
                memory: 800Mi
            env:
            - name: subset
              value: subset-b
```

## HPA UnitedDeployment
**FEATURE STATE:** Kruise v1.5.0

Horizontal Pod Autoscaler supports any Custom Resource workload that has a [scale subresource](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource).
Since v1.5.0, you can apply HPA to a UnitedDeployment directly, as follows:
```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
  namespace: default
spec:
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 2
    type: Resource
  scaleTargetRef:
    apiVersion: apps.kruise.io/v1alpha1
    kind: UnitedDeployment
    name: sample-ud
```

## Pod Distribution Management
This controller provides `spec.topology` to describe the pod distribution specification.
16 changes: 9 additions & 7 deletions docs/user-manuals/workloadspread.md
@@ -23,10 +23,12 @@ WorkloadSpread injects the domain configuration into the Pod by Webhook, and it

Kruise versions lower than `1.3.0` support `CloneSet`, `Deployment`, and `ReplicaSet`.

- Sine Kruise `1.3.0`, WorkloadSpread supports `StatefulSet`.
+ Since Kruise `1.3.0`, WorkloadSpread supports `StatefulSet`.

In particular, for `StatefulSet`, WorkloadSpread supports managing its subsets only during `scale up`. The order of `scale down` is still controlled by the StatefulSet controller. The subset management of StatefulSet is based on the ordinals of Pods, and more details can be found [here](https://github.com/openkruise/kruise/blob/f46097db1fa5a4ed9c002eba050b888344884e11/pkg/util/workloadspread/workloadspread.go#L305).

Since Kruise `1.5.0`, WorkloadSpread supports customized workloads that have a [scale sub-resource](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#scale-subresource).
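For illustration, a sketch of a WorkloadSpread that targets a customized workload via `targetRef` (the apiVersion, kind, and subset values are placeholders for whatever custom workload exposes a scale sub-resource in your cluster):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: WorkloadSpread
metadata:
  name: workloadspread-demo
spec:
  targetRef:
    apiVersion: example.io/v1alpha1   # hypothetical custom workload with a scale sub-resource
    kind: MyWorkload
    name: sample
  subsets:
  - name: subset-a
    maxReplicas: 3
  - name: subset-b
```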

## Demo

```yaml
@@ -103,7 +105,7 @@ tolerations:
effect: "NoSchedule"
```

- `patch`: customize the Pod configuration of `subset`, such as Annotations, Labels, Env.

Example:
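A sketch of a subset `patch` (the label key, container name, and env value are illustrative):

```yaml
apiVersion: apps.kruise.io/v1alpha1
kind: WorkloadSpread
spec:
  # ...
  subsets:
  - name: subset-a
    patch:
      metadata:
        labels:
          deploy/zone: zone-a
      spec:
        containers:
        - name: main
          env:
          - name: SUBSET
            value: subset-a
```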

@@ -149,10 +151,10 @@ WorkloadSpread provides two kinds of strategies; the default strategy is `Fixed`.
rescheduleCriticalSeconds: 30
```

- Fixed:

Workload is strictly spread according to the definition of the subset.

- Adaptive:

**Reschedule**: Kruise will check the unschedulable Pods of subset. If it exceeds the defined duration, the failed Pods will be rescheduled to the other `subset`.
@@ -183,8 +185,8 @@ The workload managed by WorkloadSpread will scale according to the defined order

### Scale out

- The Pods are scheduled in the subset order defined in `spec.subsets`. Once the replica number of a `subset` reaches its `maxReplicas`, Pods are scheduled into the next `subset`.

### Scale in

- When the replica number of a `subset` is greater than its `maxReplicas`, the extra Pods will be removed with higher priority.
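For instance, with the subsets sketched below (the `maxReplicas` values are illustrative), scaling the target workload out to 5 replicas schedules 3 Pods into `subset-a` and then 2 into `subset-b`; when scaling in, Pods exceeding a subset's `maxReplicas` are removed first.

```yaml
subsets:
- name: subset-a
  maxReplicas: 3
- name: subset-b    # no maxReplicas: holds the remaining replicas
```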
2 changes: 1 addition & 1 deletion i18n/zh/docusaurus-plugin-content-docs/current.json
@@ -1,6 +1,6 @@
{
  "version.label": {
-   "message": "v1.5",
+   "message": "v1.6",
    "description": "The label for next version"
  },
  "sidebar.docs.category.Getting Started": {
