
Releases: Altinity/clickhouse-operator

release-0.18.1

02 Feb 12:24

What's Changed

  • Fix non-pointer mistake in metrics-exporter by @adu-bricks in #870
  • Helper files for operatorhub.io integration

Full Changelog: 0.18.0...0.18.1

release-0.18.0

26 Jan 09:50

New features

  • arm64 packages are published (closes #852)
  • 'access_management' can be specified when defining users in CHI.
  • k8s secrets can be referenced when defining user passwords. See 05-settings-01-overview.yaml.
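
For example, both options can be combined in a CHI manifest. This is a minimal sketch; the user name, secret name, and key are illustrative, and the exact key syntax is shown in 05-settings-01-overview.yaml:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "demo"
spec:
  configuration:
    users:
      # grant this user the right to manage other users via SQL
      admin/access_management: 1
      # take the password from a k8s secret (format assumed: <secret>/<key>)
      admin/k8s_secret_password: clickhouse-credentials/admin-password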

Changed

  • When the CRD is deleted, the operator now keeps all dependent objects (statefulsets, volumes). That prevents accidental deletion of a cluster.
  • When the operator restarts, it no longer runs a reconcile cycle if the CHI has not changed. That prevents unneeded pod restarts. (Closes #855)
  • The operator configuration file format has been changed. See https://github.com/Altinity/clickhouse-operator/blob/0.18.0/config/config.yaml. The old format is still supported for backwards compatibility.
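
For orientation, the new format groups settings into nested sections. The excerpt below is an illustrative sketch only; the key names are assumptions based on the linked config.yaml, which remains the authoritative reference:

watch:
  # namespaces the operator watches for CHI resources (empty means all)
  namespaces: []
clickhouse:
  access:
    # credentials the operator uses to connect to ClickHouse hosts
    username: clickhouse_operator
    password: clickhouse_operator_password
    port: 8123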

Fixed

  • Fixed a bug 'unable to decode watch event: no kind "ClickHouseOperatorConfiguration" is registered' that could appear in some k8s configurations.
  • Removed INFORMATION_SCHEMA from schema propagation. (closes #854)
  • Added a containerPort to metrics-exporter (#834)

Full Changelog: 0.17.0...0.18.0

release-0.17.0

02 Dec 08:45

New features:

  • Labels and annotations from auto templates are now supported
  • Macros can be used in service annotations the same way as in generateName. Fixes #795
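
For example, a serviceTemplate can expand the same macros in annotations as in generateName. A minimal sketch; the template name and annotation key are illustrative:

spec:
  templates:
    serviceTemplates:
      - name: svc-template
        generateName: "service-{chi}"
        metadata:
          annotations:
            # {chi} expands to the installation name, just as in generateName
            example.com/cluster-name: "{chi}"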

Changed:

  • Status object has been cleaned up

Fixed:

  • Database engine is now respected during schema migration
  • Fixed schema migration for ClickHouse 21.11+
  • Removed extra waits for single-node CHI changes
  • Fixed possible race conditions with labeling on operator startup

release-0.16.1

01 Nov 13:49

This is a bugfixing release with a number of internal changes:

  • CRD definition of Status has been modified. Note: CRD needs to be updated with this release
  • Default terminationGracePeriod for ClickHouse pod templates was moved to the operator configuration. The default is 30 seconds, as in 0.15.0 and before.
  • Improved installation templates
  • Fixed a bug with replicas not being correctly added when the CHI was modified from a Python Kubernetes client.

Upgrade notes:

  • CRD needs to be updated with this release
  • 0.16.0 had a hardcoded 60 seconds for terminationGracePeriod, which resulted in ClickHouse restarts when upgrading from 0.15.0 to 0.16.0. Upgrading from 0.15.0 to 0.16.1 does not result in ClickHouse restarts. If you are upgrading from 0.16.0 to 0.16.1, set terminationGracePeriod to 60 in the operator config file (see the sketch below). Refer to Operator Configuration for more details.
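
A minimal sketch of that setting in the operator config file; the key name comes from the note above, while its top-level placement is an assumption:

# keep the 0.16.0 value to avoid a restart when upgrading 0.16.0 -> 0.16.1
terminationGracePeriod: 60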

release-0.16.0

28 Sep 10:19

New features:

  • PodDisruptionBudget is configured automatically in order to prevent multiple pods from being shut down by k8s at the same time
  • topologyKey is now configurable. It addresses #772
  • spec.restart: "RollingRestart" attribute has been added in order to initiate a graceful restart of a cluster. Use it with a patch command (see the sketch after this list).
  • Added support for Kubernetes secrets in the settings section, e.g. for ClickHouse user/password.
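
A sketch of a patch for spec.restart; the installation name 'demo' and the file name are illustrative. Apply it with, e.g., 'kubectl patch chi demo --type=merge --patch-file restart.yaml':

# restart.yaml: trigger a graceful rolling restart of the cluster
spec:
  restart: "RollingRestart"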

Changed:

  • Pod maintenance logic during all operations that require a pod restart has been improved:
    1. A wait condition is enabled for a pod to be removed from the ClickHouse cluster.
    2. A wait condition is added for running queries to complete (up to 5 minutes).
    3. terminationGracePeriod has been increased to 60 seconds.
  • Timeout for DDL operations has been increased to 60 seconds. That addresses issues with schema management on slow clusters.
  • Base image is switched to RedHat UBI
  • ZooKeeper image version has been rolled back to 3.6.1 since we have some problems with 3.6.3
  • CRD apiVersion has been upgraded from apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1. The v1beta1 manifests are still available under descriptive names.

Fixed:

  • Fixed a bug with non-working custom ports
  • Fixed a bug with replicas being incorrectly deleted from ZooKeeper for retained volumes

Note: Upgrade from the previous version will result in restart of ClickHouse clusters.

release-0.15.0

19 Jul 11:41

New features:

  • Added a 90s delay before restarting the stateful set when ClickHouse server settings are modified. That makes sure the ClickHouse server starts with an updated configuration; before, there was a race condition between ConfigMap updates and the StatefulSet restart. The default timeout can be modified by the 'spec.reconciling.configMapPropagationTimeout' property. See example
  • Added a 'troubleshooting' mode that allows the pod to start even if the ClickHouse server is failing. In troubleshooting mode the liveness check is removed, and an extra 'sleep' is added to the startup command. Controlled by the 'spec.troubleshoot' property. See example
  • Added cleanup reconciliation logic that removes k8s objects that are labeled by a particular CHI but do not exist in the CHI manifest. The default behaviour can be altered with 'spec.reconciling.cleanup.unknownObjects'. See example (a combined sketch of these properties follows this list)
  • ZooKeeper manifests have been modified to use 'standaloneEnabled=false' for single node setups as recommended in ZooKeeper documentation. ZooKeeper version has been bumped to 3.6.3
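
A combined sketch of the three properties above in one CHI spec; the property names come from the notes, while the values and the nesting under unknownObjects are assumptions:

spec:
  # let the pod start even if the ClickHouse server is failing
  troubleshoot: "yes"
  reconciling:
    # seconds to wait for ConfigMap changes to propagate before restarting
    configMapPropagationTimeout: 90
    cleanup:
      unknownObjects:
        # action for labeled objects that are absent from the CHI manifest
        statefulSet: Delete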

Bug fixes:

  • Fixed a bug where a replica was not deleted in ZooKeeper when scaling down (#735). The bug was introduced in 0.14.0
  • Fixed a bug when PVCs modified outside of operator could be re-created during the reconcile (#730).
  • Fixed a bug where schema was sometimes not created on a newly added replica

release-0.14.0

26 Apr 11:40

New features:

  • Operator reconciles CHI with the actual state of objects in k8s. Previous releases compared the new CHI with the old CHI, but not with the k8s state.
  • The current reconcile cycle can now be interrupted by a new CHI update. In previous releases the user had to wait until the reconcile completed for all nodes.
  • Added volumeClaimTemplate annotations. Closes #578
  • clickhouse_operator user/password can be stored in a secret instead of a configuration file (see the sketch after this list). Closes: #386
  • Added 'excludeFromPropagationLabels' option
  • Added readiness check
  • LabelScope metrics are removed by default since they caused pods to restart when changing a cluster with circular replication. Closes: #666. If you need those labels, they can be turned on with the 'appendScopeLabels' configuration option
  • Monitoring of detached parts
  • Operator ClusterRole is restricted. Closes #646
  • Logging improvements
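
For the secret-based clickhouse_operator credentials above, a sketch of the secret itself; the namespace and the key names (username/password) are assumptions, and the operator configuration has to be pointed at this secret:

apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-operator
  namespace: kube-system
type: Opaque
stringData:
  username: clickhouse_operator
  password: chpassword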

Bug fixes:

  • Fixed a bug where CHI pods could get stuck in ErrImagePull status when a wrong image was used
  • Fixed a bug where the operator tried to apply schema to non-existing pods and entered a lengthy retry cycle

Upgrade notes:

  • Existing clusters will be restarted with the operator upgrade due to signature change
  • Due to ClusterRole change the upgrade with re-applying installation manifest may fail. See #684

release-0.13.5

15 Feb 12:12

This is a follow-up release to 0.13.0:

Improvements:

  • The readiness check has been removed in favour of a custom readiness controller; the liveness check has been adjusted for slower startup times
  • The operator now uses a 'ready' label for graceful pod creation and modification. That reduces possible service downtime
  • More graceful re-scale and other operations
  • New StatefulSet fingerprint
  • Operator log has been improved and properly annotated
  • Test suite has been upgraded to the recent TestFlows version

Note 1: We recommend using 0.13.5 instead of 0.13.0. We have found that implementation of liveness/readiness probes in 0.13.0 release was not optimal for healthy production operation.
Note 2: ClickHouse clusters will be restarted after operator upgrade due to pod template and labels changes.

release-0.13.0

24 Dec 08:49

New features:

  • Added liveness (/ping) and readiness (/replicas_status) probes to ClickHouse pods. Those can be overwritten at the podTemplate level
  • Customizable graceful reconcile logic. Now it is possible to turn on 'wait' behaviour that applies configuration changes and upgrades using the following algorithm:
    • Host is (optionally) excluded from ClickHouse remote_servers
    • Operator (optionally) waits for the exclusion to take effect
    • Host is updated
    • If it is a new host, schema is created at this stage. That ensures the host is not in the cluster until the schema is created. Fixes #561
    • Host is added back to remote_servers
    • Operator (optionally) waits for the inclusion to take effect before moving to the next host.

'Optional' steps are turned-off by default and can be turned on in operator configuration:

reconcileWaitInclude: false
reconcileWaitExclude: false

or enabled for particular CHI update:

spec:
  reconciling:
    policy: "nowait"|"wait"
  • Cluster 'stop' operation now correctly removes CHI from monitoring and deletes LoadBalancer service
  • podTemplate metadata is now supported (annotations, labels). Fixes #554
  • 'auto' templates are now available. If templating.policy=auto is specified for a ClickHouseInstallationTemplate object, those templates are automatically applied to all ClickHouseInstallations (see the sketch after this list).
  • Minor changes to ClickHouse default profile settings.
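
A minimal sketch of an 'auto' template; the template name and the settings content are illustrative:

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallationTemplate"
metadata:
  name: "default-settings"
spec:
  templating:
    policy: "auto"
  configuration:
    settings:
      # applied automatically to every ClickHouseInstallation
      max_concurrent_queries: 100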

Bug fixes:

  • External labels, annotations and finalizers are correctly preserved for services.
  • Fixed a bug where the finalizer could be inserted into CHI multiple times

Note: existing ClickHouse clusters will be restarted after operator upgrade because of the added liveness/readiness probes

release-0.12.0

18 Sep 13:24

This release includes a number of improvements in order to eliminate unneeded restarts and reduce service downtime:

  • Pods are no longer restarted when new shards/replicas are added
  • Cluster configuration (remote_servers.xml) is now updated after shards and replicas are added. That reduces the chance of errors when querying distributed tables.
  • LoadBalancer node ports are no longer modified on service upgrade. That reduces possible downtime
  • Service is re-created if it cannot be updated for some reason (e.g. a change from ClusterIP to LoadBalancer or vice versa)
  • Fixed several race conditions when creating/updating a cluster