BUG FIXES
- fix possible wrong merging after ATTACH PART for Collapsing and Replacing engines without version, see ClickHouse/ClickHouse#71009 for details
- add `log_queries` properly during connection initialization
IMPROVEMENTS
- implement new boltdb key-value format for *.state2 files (please check memory RSS usage)
- clean resumable state if backup parameters changed, fix 840
- switch to golang 1.23
- add `clickhouse_backup_local_data_size` metric as an alias for `TotalBytesOfMergeTreeTables` from `system.asynchronous_metrics`, fix 573
- API refactoring: query options use snake case, dash case is also allowed
- add `--resume` parameter to `create` and `restore` commands to avoid unnecessary copying of object disk data, fix 828 (see the sketch below)
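A minimal usage sketch for the new `--resume` flag; the backup name `my_backup` is a placeholder, not a value from this changelog:

```bash
# re-run restore with --resume so already copied object disk data is skipped
# on the second attempt (backup name is hypothetical)
clickhouse-backup restore --resume my_backup
```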
BUG FIXES
- after drop table, before create table, check if the replica path already exists and try to drop it; helpful for restoring Replicated tables which don't contain macros in replication parameters, fix 849
- fix `TestLongListRemote` for proper time measurement
- fix log_pointer handling from system.replicas during restore, fix 967
- fix `use_embedded_backup_restore: true` behavior for azblob, fix 1031
- fix `Nullable(Enum())` types corner case for `check_parts_columns: true`, fix 1033
- fix deletions for versioned s3 buckets when multiple object versions were created during internal retries
BUG FIXES
- fix rare corner case for `system.disks` query behavior, fix 1007
- fix `--partitions`, `--restore-database-mapping` and `--restore-table-mapping` to work together, fix 1018
- fix wrong slice initialization for `shardFuncByName` (rarely used function for backing up different tables from different shards), fix 1019, thanks @cuishuang
BUG FIXES
- fix unnecessary warnings for `allow_object_disk_streaming: true` behavior during restore
- fix being stuck with `gcs.clientPool.BorrowObject error: Timeout waiting for idle object` because `OBJECT_DISK_SERVER_SIDE_COPY_CONCURRENCY` has a default value of 32, which is much more than the calculated default pool size
IMPROVEMENTS
- add `rbac-only` and `configs-only` parameters to `POST /backup/create` and `POST /backup/restore` API calls (see the sketch after this list)
- add `allow_object_disk_streaming` config option which will make an object disk backup when CopyObject fails or when the object storage types are incompatible, fix 979
- add `operation_id` to callback, fix 995, thanks @manasmulay
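A hedged example of the new `rbac-only` / `configs-only` API parameters; the host and port assume the default API listen address `localhost:7171`, and the backup name and surrounding query parameters are placeholders:

```bash
# create a backup containing only RBAC objects (name parameter is illustrative)
curl -X POST "http://localhost:7171/backup/create?rbac-only=true&name=rbac_backup"
# restore only server configs from an existing backup
curl -X POST "http://localhost:7171/backup/restore/rbac_backup?configs-only=true"
```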
BUG FIXES
- fix corner case for backup/restore of RBAC objects with a trailing slash: warn that /clickhouse/access//uuid has no children and skip the dump
BUG FIXES
- fix corner cases for wrong _last metrics calculation after restart, fix 980
IMPROVEMENTS
- update Dockerfile and Makefile to speedup cross-platform building
BUG FIXES
- update `clickhouse-go/v2`, attempt to fix 970
BUG FIXES
- fix corner cases when /var/lib/clickhouse/access is already broken, fix 977
- finish migration from `apex/log` to `rs/zerolog`, fix 624, thanks @rdmrcv
BUG FIXES
- fix corner cases of wrong RBAC name parsing during conflict resolution for complex multi-line RBAC objects, fix 976
BUG FIXES
- fix corner cases of object disk endpoint parsing for S3, to avoid a wrong `.amazonaws.amazonaws.com` suffix
BUG FIXES
- fix corner case for LOG_LEVEL + --env, fix 972
IMPROVEMENTS
- redirect logs into stderr instead of stdout, fix 969
- migrate from `apex/log` to `rs/zerolog`, fix race conditions, fix 624, see details in apex/log#103
IMPROVEMENTS
- switch from `docker-compose` (python) to `docker compose` (golang)
- add parallel integration test execution, fix 888
BUG FIXES
- properly handle log_pointer=1 corner case for `check_replica_before_attach: true`, fix 967
- properly handle empty output for the `list` command when `remote_storage: custom`, fix 963, thanks @straysh
- fix corner cases when connecting to S3 providers with self-signed TLS certificates, check `S3_DISABLE_CERT_VALIDATION=true` in tests, fix 960
IMPROVEMENTS
- add `--restore-table-mapping` CLI and API parameter to the `restore` and `restore_remote` commands, fix 937, thanks @nithin-vunet and @raspreet-vunet (see the sketch below)
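A minimal sketch of the new mapping parameter; the backup and table names are placeholders:

```bash
# restore a remote backup while renaming one table (all names are hypothetical)
clickhouse-backup restore_remote --restore-table-mapping=old_table:new_table my_backup
```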
BUG FIXES
- remove trailing `/` from `object_disk_path` to properly `create` and `restore`, fix 946
BUG FIXES
- fix `restore --rbac` behavior when RBAC objects contain `-`, `.` or any special characters, new fixes for 930
BUG FIXES
- add `clean` command to the `POST /backup/actions` API handler, fix 945
BUG FIXES
- Fix wrong restoration of materialized views whose view name starts with digits for `--restore-table-mapping`, fix 942, thanks @praveenthuwat
BUG FIXES
- allow backup/restore for tables and databases which contain an additional set of special characters, fix 938
- properly restore environment variables to avoid failures in config.ValidateConfig in REST API mode, fix 940
IMPROVEMENTS
- increase `s3_request_timeout_ms` (23.7+) and turn off `s3_use_adaptive_timeouts` (23.11+) when `use_embedded_backup_restore: true`
BUG FIXES
- fix hangs of `create` and `restore` when CLICKHOUSE_MAX_CONNECTIONS=0, fix 933
- remove obsolete `CLICKHOUSE_EMBEDDED_BACKUP_THREADS` and `CLICKHOUSE_EMBEDDED_RESTORE_THREADS`; after 23.3 these settings can be configured only at the server level, not as profile or query settings
IMPROVEMENTS
- add http_send_timeout=300, http_receive_timeout=300 to embedded backup/restore operations
- explicitly set `allow_s3_native_copy` and `allow_azure_native_copy` settings when `use_embedded_backup_restore: true`
BUG FIXES
- remove overly aggressive logging for object disk upload and download operations during `create` and `restore` command execution
IMPROVEMENTS
- return an error instead of a warning when replication is in progress during the restore operation
BUG FIXES
- fixed wrong persistent behavior override for the `--env` parameter when used with the API server, all 2.5.x versions were affected
- fixed errors during dropping of existing RBAC objects which contain special characters, fix 930
IMPROVEMENTS
- added "object_disk_size" to upload and download command logs
BUG FIXES
- fixed corner case where the API server hangs when a `watch` background command fails, fix 929, thanks @tadus21
- removed the `compression: none` requirement for `use_embedded_backup_restore: true`
- refactored to fix a corner case in backup size calculation for object disks and embedded backups, set a consistent algorithm for CLI and API `list` command behavior
BUG FIXES
- fixed another corner case for `restore --data=1 --env=CLICKHOUSE_SKIP_TABLE_ENGINES=liveview,WindowView`
BUG FIXES
- fixed corner case when `use_resumable_state: true` and trying to download an already present local backup doesn't return the `backup already exists` error, fix 926
- fixed another corner case for `restore --data=1 --env=CLICKHOUSE_SKIP_TABLE_ENGINES=dictionary,view`
IMPROVEMENTS
- added additional formats to the `--partitions` CLI and API parameter: `tablesPattern:partition1,partitionX` or `tablesPattern:(partition1),(partitionX)`, fix #916 (see the sketch after this list)
- added system.backup_version and version into logs, fix 917
- added progress=X/Y to logs, fix 918
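A hedged sketch of the table-scoped `--partitions` formats described above; database, table and partition names are placeholders:

```bash
# back up only two partitions of one table (all names are hypothetical)
clickhouse-backup create --partitions='db.table1:202401,202402' my_backup
# the same selection written with parenthesized partition tuples
clickhouse-backup create --partitions='db.table1:(202401),(202402)' my_backup
```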
BUG FIXES
- allow stopping the API server when the watch command is stopped, fix 922, thanks @tadus21
- fixed corner case for `--env=CLICKHOUSE_SKIP_TABLE_ENGINES=dictionary,view`
IMPROVEMENTS
- added OCI compliant labels to containers, thanks https://github.com/denisok
- increased the default clickhouse queries timeout from `5m` to `30m` to allow freezing very large tables with object disks
BUG FIXES
- fix corner cases for `ResumeOperationsAfterRestart` and `keep_backup_local: -1` behavior
- fix wrong file extension recognition during download for `access` and `configs`, fix #921
BUG FIXES
- wrong skipping of tables by engine when the variable value is empty, `CLICKHOUSE_SKIP_TABLE_ENGINES=engine,` instead of `CLICKHOUSE_SKIP_TABLE_ENGINES=engine`, fix 915
- restore stops working if RBAC objects are present in the backup but the user used to connect to clickhouse doesn't have RBAC GRANTS or `access_management`; 2.5.0+ affected, fix 914
BUG FIXES
- skip `ValidateObjectDiskConfig` for `--diff-from-remote` when the object disk doesn't contain data, fix 910
IMPROVEMENTS
- added `object_disk_server_side_copy_concurrency` with a default value of `32`, to avoid a slow `create` or `restore` backup process which was previously restricted by the `upload_concurrency` or `download_concurrency` options, fix 903 (see the sketch below)
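A hedged example of tuning the new option via its environment variable form; the value and backup name are illustrative only:

```bash
# raise server-side copy concurrency for object disks above the default of 32
OBJECT_DISK_SERVER_SIDE_COPY_CONCURRENCY=64 clickhouse-backup create_remote my_backup
```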
BUG FIXES
- fixed `create --rbac` behavior when /var/lib/clickhouse/access doesn't exist but only the `replicated` system.user_directories is present, fix 904
IMPROVEMENTS
- add `info` logging for `uploadObjectDiskParts` and `downloadObjectDiskParts` operations
BUG FIXES
- fixed `Unknown setting base_backup` for `use_embedded_backup_restore: true` and `create --diff-from-remote`, affected 2.5.0+ versions, fix 735
BUG FIXES
- fixed issue after 865: `create_remote --diff-from-remote` couldn't be used for `remote_storage: custom`, affected versions 2.5.0 and 2.5.1, fix 900
BUG FIXES
- fixed issue when both `AWS_ROLE_ARN` and `S3_ASSUME_ROLE_ARN` are set: `S3_ASSUME_ROLE_ARN` shall have higher priority than `AWS_ROLE_ARN`, fix 898
IMPROVEMENTS
- completely removed support for legacy backups created with versions prior to v1.0
- removed the `disable_progress_bar` config option and related progress bar code
- added `--delete-source` parameter for `upload` and `create_remote` commands to explicitly delete the local backup during upload, fix 777
- added support for the `--env ENV_NAME=value` cli parameter to allow dynamically overriding any config parameter, fix 821 (see the sketch after this list)
- added support for `use_embedded_backup_restore: true` with an empty `embedded_backup_disk` value, tested on S3/GCS over S3/AzureBlobStorage, fix 695
- `--rbac`, `--rbac-only`, `--configs`, `--configs-only` now work with `use_embedded_backup_restore: true`
- `--data` for `restore` with `use_embedded_backup_restore: true` will use `allow_non_empty_tables=true`, fix 756
- added `--diff-from-remote` parameter for the `create` command, will copy only new data parts of object disk data, and also allows properly downloading object disk data from the required backup during `restore`, fix 865
- added support of native ClickHouse incremental backup for `use_embedded_backup_restore: true`, fix 735
- added `GCS_CHUNK_SIZE` config parameter, try to speed up GCS upload, fix 874, thanks @dermasmid
- added `--remote-backup` cli parameter to the `tables` command and `GET /backup/table`, fix 778
- added `rbac_always_backup: true` option to the default config, will create backups for RBAC objects automatically; restore still requires `--rbac` to avoid destructive actions, fix 793
- added `rbac_conflict_resolution: recreate` option for RBAC object name conflicts during restore, fix 851
- added `upload_max_bytes_per_seconds` and `download_max_bytes_per_seconds` config options to allow throttling without CAP_SYS_NICE, fix 817
- added `clickhouse_backup_in_progress_commands` metric, fix 836
- switched to golang 1.22
- updated all third-party SDKs to latest versions
- added `clickhouse/clickhouse-server:24.3` to CI/CD
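A hedged sketch of the `--env` override described in the list above; the overridden parameters, their values and the backup name are illustrative only, assuming config options map to their usual uppercased environment variable names:

```bash
# override one config parameter for a single invocation
clickhouse-backup create_remote --env S3_COMPRESSION_FORMAT=zstd my_backup
# throttle upload bandwidth without CAP_SYS_NICE (value is an example)
clickhouse-backup upload --env UPLOAD_MAX_BYTES_PER_SECONDS=104857600 my_backup
```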
BUG FIXES
- changed `S3_MAX_PARTS_COUNT` default value from `2000` to `4000` to continue decreasing memory usage for S3
- changed minimal part size for multipart upload in CopyObject from `5Mb` to `10Mb`
- restore SQL UDF functions after restoring tables
- execute `ALTER TABLE ... DROP PARTITION` instead of `DROP TABLE` for `restore` and `restore_remote` with parameters `--data --partitions=...`, fix 756
- fix wrong behavior for `freeze_by_part` + `freeze_by_part_where`, fix 855
- apply `CLICKHOUSE_SKIP_TABLES_ENGINES` during the `create` command
- fixed behavior for upload / download when the .inner. table is missing for a `MATERIALIZED VIEW` selected by table pattern, fix 765
- fixed `ObjectDisks` + `CLICKHOUSE_USE_EMBEDDED_BACKUP_RESTORE: true` - shall skip uploading object disk content, fix 799
- fixed connection behavior to clickhouse-server when clickhouse-server startup and `docker-entrypoint.d` processing take long; will infinitely reconnect every 5 seconds until success, fix 857
- fixed `USE_EMBEDDED_BACKUP_RESTORE=true` behavior to allow using a backup disk with type `local`, fix 882
- fixed wrong list command behavior: it shall scan all `system.disks` paths, not only the default disk, to find partially created backups, fix 873
- fixed create `--rbac` behavior, don't create the access folder if no RBAC objects are present
- fixed behavior when `system.disks` contains a disk which is not present in any `storage_policies`, fix 845
IMPROVEMENTS
- set part size for `s3:CopyObject` to a minimum of 128Mb, see details at https://repost.aws/questions/QUtW2_XaALTK63wv9XLSywiQ/s3-sync-command-is-slow-to-start-on-some-data
BUG FIXES
- fixed wrong behavior of `CLICKHOUSE_SKIP_TABLES_ENGINES` for engine=EngineName without parameters
BUG FIXES
- fixed wrong anonymous authorization for serviceAccount in GCS, added `GCS_SKIP_CREDENTIALS`, fix 848, fix 847, thanks @sanadhis
IMPROVEMENTS
- added ability to set a custom endpoint for `GCS`, fix 837, thanks @sanadhis
BUG FIXES
- fixed wrong config validation for `object_disk_path` even when no object disk is present in the backup during `restore`, fix 842
IMPROVEMENTS
- added `check_sum_algorithm` parameter for the `s3` config section with "" as the default value, to avoid useless CPU usage during upload to `S3` storage, additional fix 829
- `upload` will delete the local backup if the upload is successful, fix 834
BUG FIXES
- fixed missing checksum for CopyObject in `s3`, fix 835, affected 2.4.30
- fixed wrong behavior of `restore --rbac-only` and `restore --rbac` for backups which don't contain any schema, fix 832
BUG FIXES
- fixed `download` command corner cases for incremental backups of tables with projections, fix 830
- added a more informative error when trying to `restore` a local backup which doesn't exist
- fixed `upload` command for S3 when an object lock policy is turned on, fix 829
IMPROVEMENTS
- added `AZBLOB_DEBUG` environment variable and `debug` config parameter in the `azblob` section
BUG FIXES
- force set `RefCount` to 0 during `restore` for parts on S3/GCS over S3/Azure disks, so that DROP TABLE / DROP DATABASE work properly
- use `os.Link` instead of `os.Rename` for ClickHouse 21.4+, to properly create backup object disks
- ignore `frozen_metadata` during create, upload, download and restore commands, fix 826
- `allow_parallel: true` doesn't work after executing the list command, fix 827
- fixed corner cases when a disk has encrypted type and the underlying disk is object storage
IMPROVEMENTS
- refactoring of the `watch` command, after #804
BUG FIXES
- fixed deletion for `object_disk_path` and `embedded` backups after `upload`, to properly respect `backups_to_keep_remote`
BUG FIXES
- fixed deletion for `object_disk_path` (all backups with S3, GCS over S3, AZBLOB disks from 2.4.0 to 2.4.25 didn't properly delete their data from the backup bucket)
IMPROVEMENTS
- improved disk re-balancing during download if the disk does not exist in `system.disks`: use the least used disk for `local` disks and a `random` one for object disks, fix 561
BUG FIXES
- fixed regression of `check_parts_columns` for Enum types (2.4.24+), fix 823
- properly apply macros to `object_disk_path` during `delete`
BUG FIXES
- fixed `--restore-table-mapping` corner cases when the destination database contains special characters, fix 820
BUG FIXES
- fixed `check_parts_columns` corner cases for `AggregateFunction` and `SimpleAggregateFunction` versioning, fix 819
IMPROVEMENTS
- refactored the `restore` command to allow parallel execution of `ALTER TABLE ... ATTACH PART` and improve parallelization of CopyObject during restore
BUG FIXES
- changed `S3_MAX_PARTS_COUNT` default value from `256` to `2000` to fix memory usage for s3, which increased for 2.4.16+
BUG FIXES
- refactored execution of UpdateBackupMetrics to avoid the `context canceled` error, fix 814
IMPROVEMENTS
- refactored the `create` command to allow parallel execution of `FREEZE` and `UNFREEZE` and table-level parallelization of `object_disk.CopyObject`
- added `CLICKHOUSE_MAX_CONNECTIONS` config parameter to allow parallel execution of `FREEZE` / `UNFREEZE`
- changed `go.mod` to allow `GO111MODULE=on go install github.com/Altinity/clickhouse-backup/v2/cmd/clickhouse-backup@latest`
BUG FIXES
- use a single `s3:CopyObject` call instead of `s3:CreateMultipartUpload+s3:UploadCopyPart+s3:CompleteMultipartUpload` for files with a size of less than 5Gb
BUG FIXES
- removed the `HeadObject` request to calculate the source key size in `CopyObject`, to allow cross-region S3 disk backups, fix 813
- make the `command` query parameter optional for `/backup/kill`, as well as the arguments for the `kill` command handled via `/backup/actions`; if omitted, the first command in "In progress" status will be killed, fix 808
BUG FIXES
- skip `CopyObject` execution for keys which have zero size, to allow properly backing up `S3`, `GCS over S3` and `Azure` disks
BUG FIXES
- increased `AZBLOB_TIMEOUT` to 4h instead of 15m to allow downloading large data parts
- changed `S3_MAX_PARTS_COUNT` default from `5000` to `256` and minimal `S3_PART_SIZE` from 5Mb to 25Mb by default to speed up S3 uploading / downloading
BUG FIXES
- fixed `create` and `restore` commands for ReplicatedMergeTree tables with `frozen_metadata.txt` parsing
IMPROVEMENTS
- refactored `semaphore.NewWeighted()` to `errgroup.SetLimit()`
- added parallelization to the `create` and `restore` commands during the `CopyObject` call
BUG FIXES
- fixed `object_disk.CopyObject` during restore to allow using the proper S3 endpoint
- fixed AWS IRSA environment handling, fix 798
BUG FIXES
- fixed `object_disk.CopyObject` to use a simple `CopyObject` call instead of multipart for zero object size, for backing up S3 disks
BUG FIXES
- fixed `CopyObject` multipart upload completion: Parts must be ordered by part number, for backing up S3 disks
IMPROVEMENTS
- updated go modules to latest versions
- added `S3_REQUEST_PAYER` config parameter, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html for details, fix 795
BUG FIXES
- fixed list remote command date parsing for all `remote_storage: custom` integration examples
- `clickhouse-backup` should not fail when `--rbac` is used but no RBAC objects are present in the backup; it should log warnings/errors instead, partial fix 793
BUG FIXES
- fixed Object Disks path parsing from config, removed unnecessary "/"
- if `S3_ACL` is empty then ACL is not used in PutObject, fix 785
BUG FIXES
- `--partitions=(v1,'v2')` could calculate a wrong partition expression if `system.columns` returned fields in a different order than described in the PARTITION BY clause, fix 791
IMPROVEMENTS
- make 'kopia' custom scripts really incremental, fix 781
- added `force_http` and improved retries in GCS upload, 784, thanks @minguyen9988
BUG FIXES
- added `Array(Tuple())` to the exclude list for `check_parts_columns:true`, fix 789
- fixed `delete remote` command for s3 buckets with versioning enabled, fix 782
- fixed panic during creation of integration tables when `API_LISTEN` doesn't contain the ":" character, fix 790
BUG FIXES
- added aws.LogResponse for `S3_DEBUG` (affected 2.4.4+ versions)
BUG FIXES
- removed `aws.LogResponseWithBody` for `S3_DEBUG` to avoid too many logs (affected 2.4.0+ versions)
IMPROVEMENTS
- added `list` command to the API /backup/actions, fix 772
BUG FIXES
- fixed behavior of `restore_as_attach: true` for non-replicated MergeTree, fix 773
- tables with `ENGINE=Dictionary` shall be created after all `dictionaries` to avoid retries, fix 771
IMPROVEMENTS
- added `cpu_nice_priority` and `io_nice_priority` to config, which allow throttling CPU and IO usage for the whole `clickhouse-backup` process, fix 757
BUG FIXES
- fixed restore for object disk `frozen_metadata.txt`, fix 752
- fixed more corner cases for `check_parts_columns: true`, fix 747
- fixed applying macros to the s3 endpoint in object disks during restore of embedded backups, fix 750
- rewrote the `GCS` client pool, set the default `GCS_CLIENT_POOL_SIZE` to `max(upload_concurrency, download_concurrency) * 3` to avoid getting stuck, fix 753, thanks @minguyen-jumptrading
IMPROVEMENTS
- switched to go-1.21
- added clickhouse-server:23.8 for integration and testflows tests
- added `FTP_SKIP_TLS_VERIFY` config option, fix 742
BUG FIXES
- fixed calculation of part size for `S3` and buffer size for `Azure` to avoid errors when uploading big files, fix 739, thanks @rodrigargar
- fixed GetFileReader for SSE encryption in `S3`, again fix 709
IMPROVEMENTS
- first implementation of properly backing up S3/GCS/Azure disks; supports server-side copy to the backup bucket during `clickhouse-backup create` and during `clickhouse-backup restore`, requires adding `object_disk_path` to the `s3`, `gcs`, `azblob` sections, fix 447
- implementation of a blocklist for table engines during backup / download / upload / restore, 537
- restore RBAC / configs, refactoring of clickhouse-server restart via `sql:SYSTEM SHUTDOWN` or `exec:systemctl restart clickhouse-server`, add `--rbac-only` and `--configs-only` options to the `create`, `upload`, `download`, `restore` commands, fix #706
- Backup/Restore RBAC related objects from Zookeeper via direct connection to zookeeper/keeper, fix 604
- added `SHARDED_OPERATION_MODE` option, to easily create backups of a sharded cluster; available values are `none` (no sharding), `table` (table granularity), `database` (database granularity), `first-replica` (on the lexicographically sorted first active replica), thanks @mskwon, fix 639, fix 648 (see the sketch after this list)
- added support for `compression_format: none` for upload and download of backups created with the `--rbac` / `--rbac-only` or `--configs` / `--configs-only` options, fix 713
- added support for the s3 `GLACIER` storage class: when GET returns an error, restore requires 5 minutes per key and could be slow; use `GLACIER_IR`, it looks more robust, fix 614
- restore functions via `CREATE OR REPLACE` for more atomic behavior
- prepared `./tests/integration/` tests for parallel execution, fix 721
- touch `/var/lib/clickhouse/flags/force_drop_table` before every DROP TABLE execution, fix 683
- added connection pool support for Google Cloud Storage, `GCS_CLIENT_POOL_SIZE`, fix 724
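A hedged sketch of the sharded operation mode mentioned in the list above, set via its environment variable; the backup name is a placeholder:

```bash
# each shard backs up its own subset of tables (backup name is hypothetical)
SHARDED_OPERATION_MODE=table clickhouse-backup create_remote shard_backup
```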
BUG FIXES
- fixed possible create backup failures during UNFREEZE of tables which don't exist, affected 2.2.7+ versions, fix 704
- fixed too strict `system.parts_columns` check when a backup is created; exclude `Enum` and `Tuple (JSON)` and `Nullable(Type)` vs `Type` corner cases, fix 685, fix 699
- fixed `--rbac` behavior when /var/lib/clickhouse/access does not exist
- fixed `skip_database_engines` behavior
- fixed `skip_databases` behavior during restore for the corner case `db.prefix*`, and corner cases when it conflicts with `--tables="*pattern.*"`, fix 663
- fixed S3 head object Server Side Encryption parameters, fix 709
BUG FIXES
- fix error when `backups_to_keep_local: -1`, fix 698
- minimal value for `download_concurrency` and `upload_concurrency` is 1, fix 688
- do not create UDF when using the --data flag, fix 697
IMPROVEMENTS
- add support for the `use_environment_credentials` option inside the `clickhouse-server` backup object disk definition, fix 691
- add but skip tests for the `azure_blob_storage` backup disk with `use_embedded_backup_restore: true`; it works, but slowly, see ClickHouse/ClickHouse#52088 for details
BUG FIXES
- fix static build for FIPS compatible mode, fix 693
- complete success/failure server callback notification even when the main context is canceled, fix 680
- `clean` command will not return an error when the shadow directory does not exist, fix 686
IMPROVEMENTS
- add FIPS compatible builds and examples, fix 656, fix 674
- improve support for `use_embedded_backup_restore: true`; applied an ugly workaround in tests to avoid ClickHouse/ClickHouse#43971, and applied a restore workaround to resolve ClickHouse/ClickHouse#42709
- migrate to `clickhouse-go/v2`, fix 540, close 562
- add documentation for `AWS_ARN_ROLE` and `AWS_WEB_IDENTITY_TOKEN_FILE`, fix 563
BUG FIXES
- hotfix wrong empty files when disk_mapping contains a disk that doesn't exist during creation, affected 2.2.7 version, see details in 676
- add `FTP_ADDRESS` and `SFTP_PORT` to the Default config Readme.md section, fix 668
- when using the `--tables=db.materialized_view` pattern, create/restore the backup also for `.inner.materialized_view` or `.inner_id.uuid`, fix 613
BUG FIXES
- hotfix wrong empty files when disk_mapping contains a disk that doesn't exist during creation, affected 2.2.7 version, see details in 676
IMPROVEMENTS
- Auto-tuning of concurrency and buffer size related parameters depending on the remote storage type, fix 658
- add `CLICKHOUSE_BACKUP_MUTATIONS` and `CLICKHOUSE_RESTORE_AS_ATTACH` config options to allow backing up and properly restoring tables with `system.mutations` in is_done=0 status, fix 529
- add `CLICKHOUSE_CHECK_PARTS_COLUMNS` config option and `--skip-check-parts-columns` CLI parameter to the `watch`, `create` and `create_remote` commands to disallow backups with inconsistent column data types, fix 529
- add test coverage reports for unit, testflows and integration tests, fix 644
- use UNFREEZE TABLE in ClickHouse after the backup is finished to allow s3 and other object storage disks to unlock and delete remote keys during merge, fix 423
BUG FIXES
- apply `SETTINGS check_table_dependencies=0` to the `DROP DATABASE` statement when `--ignore-dependencies` is passed together with `--rm` in the `restore` command, fix 651
- add support for masked secrets for ClickHouse 23.3+, fix 640
BUG FIXES
- fix panic for resumed upload after API server restart for boolean parameters, fix 653
- apply `SETTINGS check_table_dependencies=0` to the `DROP DATABASE` statement when `--ignore-dependencies` is passed together with `--rm` in the `restore` command, fix 651
BUG FIXES
- fix error after API server restart for boolean parameters, fix 646
- fix corner cases when `restore_schema_on_cluster: cluster`, fix 642; the error happens from 2.2.0 to 2.2.4
- fix `Makefile` targets `build-docker` and `build-race-docker` for old clickhouse-server versions
- fix typo in the `retries_pause` config definition in the `general` config section
BUG FIXES
- fix wrong deletion on S3 for versioned buckets, use s3.HeadObject instead of s3.GetObjectAttributes, fix 643
BUG FIXES
- fix wrong parameters parsing from the *.state file for resumable upload / download after restart, fix 641
IMPROVEMENTS
- add `callback` parameter to the upload, download, create, restore API endpoints, fix 636
- add to ReadMe.md that `system.macros` can be applied to the `path` config section, fix 638
- fix connection leaks for S3 versioned buckets during execution of the upload and delete commands, fix 637
IMPROVEMENTS
- add additional server-side encryption parameters to the s3 config section, fix 619
- `restore_remote` will not return an error when the backup already exists in local storage during the download check, fix 625
BUG FIXES
- fix error after API server restart when a .state file is present in the backup folder, fix 623
- fix uploading / downloading files from projections multiple times, because backup wrongly created *.proj as a separate data part, fix 622
IMPROVEMENTS
- switch to go 1.20
- after API server startup, if `/var/lib/clickhouse/backup/*/(upload|download).state` is present, then the operation will continue in the background, fix 608
- make `use_resumable_state: true` behavior apply to `upload` and `download`, fix 608
- improved behavior of the `--partitions` parameter for cases when the PARTITION BY clause returns a hashed value instead of a numeric prefix for `partition_id` in `system.parts`, fix 602
- apply `system.macros` values when using `restore_schema_on_cluster` and replace the cluster name in engine=Distributed tables, fix 574
- switch S3 storage backend to https://github.com/aws/aws-sdk-go-v2/, fix 534
- added `S3_OBJECT_LABELS` and `GCS_OBJECT_LABELS` to allow setting each backup object's metadata during upload, fix 588
- added `clickhouse-keeper` as a zookeeper replacement for integration tests while reproducing 416
- decrease memory buffers for S3 and GCS, change default value for `upload_concurrency` and `download_concurrency` to `round(sqrt(MAX_CPU / 2))`, fix 539
- added ability to set up a custom storage class for GCS and S3 depending on the backupName pattern, fix 584
BUG FIXES
- fix ssh connection leak for SFTP remote storage, fix 578
- fix wrong Content-Type header, fix 605
- fix wrong behavior of `download` with `--partitions`, fix 606
- fix wrong size of backup in the list command if upload or download was interrupted and resumed, fix 526
- fix `_successful_` and `_failed_` issue related to the metrics counter, happens after 2.1.0, fix 589
- fix wrong calculation of the date of the last remote backup during startup
- fix wrong duration and status for metrics after the 2.1.0 refactoring, fix 599
- fix panic on `LIVE VIEW` tables with the option `--restore-database-mapping db:db_new` enabled, thanks @php53unit
IMPROVEMENTS
- during upload, sort tables in descending order by `total_bytes` if this field is present
- improve ReadMe.md, add descriptions for all CLI commands and parameters
- add `use_resumable_state` to config to allow default resumable behavior in the `create_remote`, `upload`, `restore_remote` and `download` commands, fix 576
BUG FIXES
- fix `--watch-backup-name-template` command line parsing, which was overridden after config reload, fix 548
- fix wrong regexp when `restore_schema_on_cluster: cluster_name`, fix 552
- fix wrong `clean` command and API behavior, fix 533
- fix getMacro usage in Examples for backup / restore of a sharded cluster
- fix deletion of files from an S3 versioned bucket, fix 555
- fix `--restore-database-mapping` behavior for `ReplicatedMergeTree` (replace database name in the replication path) and `Distributed` (replace database name in the underlying table) tables, fix 547
- `MaterializedPostgreSQL` doesn't support FREEZE, fix 550, see also ClickHouse/ClickHouse#32902, ClickHouse/ClickHouse#44252
- `create` and `restore` commands will respect `skip_tables` config options and the `--table` cli parameter, to avoid creating unnecessary empty databases, fix 583
- fix `watch` unexpected connection closed behavior, fix 568
- fix `watch` validation parameters corner cases, close 569
- fix `--restore-database-mapping` behavior for `ATTACH MATERIALIZED VIEW`, `CREATE VIEW` and `restore --data` corner cases, fix 559
IMPROVEMENTS
- add `watch` description to Examples.md
BUG FIXES
- fix panic when using `--restore-database-mapping=db1:db2`, fix 545
- fix panic when using `--partitions=XXX`, fix 544
BUG FIXES
- return `bash` and the `clickhouse` user group to the short `Dockerfile` image, fix 542
IMPROVEMENTS
- complex refactoring to use contexts, `AWS` and `SFTP` storage are not fully supported
- complex refactoring of logging to avoid race conditions when changing the log level during config reload
- improve kubernetes example for adjusting incremental backups, fix 523
- add storage independent retries policy, fix 397
- add `clickhouse-backup-full` docker image with integrated `kopia`, `rsync`, `restic` and `clickhouse-local`, fix 507
- implement `GET /backup/kill?command=XXX` API to allow killing commands, fix 516 (see the sketch after this list)
- implement `kill "full command"` in the `POST /backup/actions` handler, fix 516
- implement `watch` in the `POST /backup/actions` handler API and as a CLI command, fix 430
- implement `clickhouse-backup server --watch` to allow the server to start watching after startup, fix 430
- update `last_{create|create_remote|upload}_finish` metrics values during API server startup, fix 515
- implement `clean_remote_broken` command and `POST /backup/clean/remote_broken` API request, fix 520
- add metric `number_backups_remote_broken` to count broken remote backups, fix 530
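A hedged sketch of the kill API from the list above, assuming the REST API listens on the default `localhost:7171`; the command string is illustrative only:

```bash
# kill a long-running command by its full command string
curl "http://localhost:7171/backup/kill?command=create%20my_backup"
```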
BUG FIXES
- fix `keep_backups_remote` behavior for recursive incremental sequences, fix 525
- for the `restore` command, call `DROP DATABASE IF EXISTS db SYNC` when `--schema` and `--drop` are passed together, fix 514
- close persistent connections to remote backup storage after command execution, fix 535
- lots of typo fixes
- fix all commands always returning a `200` status (except errors) and ignoring the status passed from application code in the API server
IMPROVEMENTS
- implement `remote_storage: custom`, which allows us to adopt any external backup system like `restic`, `kopia`, `rsync`, rclone etc., fix 383
- add an example workflow of how to make backup / restore on a sharded cluster, fix 469
- add `use_embedded_backup_restore` to allow `BACKUP` and `RESTORE` SQL commands usage, fix 323, needs 22.7+ and resolution of ClickHouse/ClickHouse#39416
- add `timeout` to the `azure` config, `AZBLOB_TIMEOUT`, to allow download with bad network quality, fix 467
- switch to go 1.19
- refactoring to remove the legacy `storage` package
- add `table` parameter to the `tables` cli command and the `/backup/tables` API handler, fix 367
- add `--resumable` parameter to the `create_remote`, `upload`, `restore_remote`, `download` commands to allow resuming upload or download after a break; ignored for `remote_storage: custom`, fix 207
- add `--ignore-dependencies` parameter to `restore` and `restore_remote`, to allow dropping objects during restore schema on a server where schema objects already exist and contain dependencies which are not present in the backup, fix 455
- add `restore --restore-database-mapping=<originDB>:<targetDB>[,<...>]`, fix 269, thanks @mojerro (see the sketch after this list)
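A hedged sketch of the database mapping syntax introduced above; database and backup names are placeholders:

```bash
# restore while renaming one database (all names are hypothetical)
clickhouse-backup restore --restore-database-mapping=sales:sales_staging my_backup
```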
BUG FIXES
- fix wrong upload / download behavior for `compression_format: none` and `remote_storage: ftp`
IMPROVEMENTS
- add Azure to every CI/CD run, testing with `Azurite`
BUG FIXES
- fix azblob.Walk with recursive=True, to properly delete remote backups
BUG FIXES
- fix the system.macros detection query
IMPROVEMENTS
- add `storage_class` (GCS_STORAGE_CLASS) support for `remote_storage: gcs`, fix 502
- upgrade the aws golang sdk and gcp golang sdk to the latest versions
IMPROVEMENTS
- switch to go 1.19
- refactoring to remove the legacy `storage` package
BUG FIXES
- properly execute `CREATE DATABASE IF NOT EXISTS ... ON CLUSTER` when `restore_schema_on_cluster` is set, fix 486
IMPROVEMENTS
- try to improve the implementation of the `check_replicas_before_attach` configuration to avoid concurrent ATTACH PART execution during the `restore` command on a multi-shard cluster, fix 474
- add `timeout` to the `azure` config, `AZBLOB_TIMEOUT`, to allow download with bad network quality, fix 467
BUG FIXES
- fix `download` behavior for parts which contain special characters in their names, fix 462
IMPROVEMENTS
- add `check_replicas_before_attach` configuration to avoid concurrent ATTACH PART execution during the `restore` command on a multi-shard cluster, fix 474
- allow backup list when the clickhouse server is offline, fix 476
- add `use_custom_storage_class` (`S3_USE_CUSTOM_STORAGE_CLASS`) option to the `s3` section, thanks @realwhite
BUG FIXES
- resolve `{uuid}` macros during restore for `ReplicatedMergeTree` tables and ClickHouse server 22.5+, fix 466
IMPROVEMENTS
- PROPERLY restore to default disk if disks are not found on destination clickhouse server, fix 457
BUG FIXES
- fix infinite loop `error can't acquire semaphore during Download: context canceled` and `error can't acquire semaphore during Upload: context canceled`; all 1.4.x users are recommended to upgrade to 1.4.6
IMPROVEMENTS
- add `CLICKHOUSE_FREEZE_BY_PART_WHERE` option which allows freezing by part with a WHERE condition, thanks @vahid-sohrabloo
IMPROVEMENTS
- download and restore to default disk if disks are not found on destination clickhouse server, fix 457
IMPROVEMENTS
- add `API_INTEGRATION_TABLES_HOST` option to allow using a DNS name in the integration tables system.backup_list, system.backup_actions
BUG FIXES
- fix `upload_by_part: false` max file size calculation, fix 454
BUG FIXES
- fix `--partitions` parameter parsing, fix 425
BUG FIXES
- fix upload data goroutines waiting; expect the same upload speed as 1.3.2
IMPROVEMENTS
- add `S3_ALLOW_MULTIPART_DOWNLOAD` to config, to improve download speed, fix 431
- add support for backup/restore of user defined functions, fix 420
- add `clickhouse_backup_number_backups_remote`, `clickhouse_backup_number_backups_local`, `clickhouse_backup_number_backups_remote_expected`, `clickhouse_backup_number_backups_local_expected` prometheus metrics, fix 437
- add the ability to apply `system.macros` values to the `path` field in all types of `remote_storage`, fix 438 (see the sketch after this list)
- use all disks for upload and download for multi-disk volumes in parallel when `upload_by_part: true`, fix #400
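A hedged sketch of applying `system.macros` values to the remote `path`, shown via the S3 environment variable form; the macro and path layout are illustrative only:

```bash
# {shard} is substituted from system.macros at runtime (path layout is an example)
S3_PATH="backup/shard-{shard}" clickhouse-backup create_remote my_backup
```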
BUG FIXES
- fix wrong warning for .gz, .bz2, .br archive extensions during download, fix 441
IMPROVEMENTS
- add TLS certificates and TLS CA support for clickhouse connections, fix 410
- switch to go 1.18
- add clickhouse version 22.3 to integration tests
- add `S3_MAX_PARTS_COUNT` and `AZBLOB_MAX_PARTS_COUNT` to properly calculate buffer sizes during upload and download for custom S3 implementations like Swift
- add multithreaded GZIP implementation
BUG FIXES
- fix 406, properly handle `path` for S3, GCS for cases when it begins with "/"
- fix 409, avoid deleting partially uploaded backups via the `backups_keep_remote` option
- fix 422, avoid caching of broken (partially uploaded) remote backup metadata
- fix 404, properly calculate S3_PART_SIZE to avoid a freeze after 10000 multipart uploads; properly handle errors when upload and download go-routines fail to avoid a stuck pipe
IMPROVEMENTS
- fix 387, improve documentation related to memory and CPU usage
BUG FIXES
- fix 392, correct download for a recursive sequence of diff backups when `DOWNLOAD_BY_PART` is true
- fix 390, respect skip_tables patterns during restore and skip all INFORMATION_SCHEMA related tables even if skip_tables doesn't contain an INFORMATION_SCHEMA pattern
- fix 388, improve restore ON CLUSTER for VIEW with TO clause
- fix 385, properly handle multiple incremental backup sequences + `BACKUPS_TO_KEEP_REMOTE`
IMPROVEMENTS
- Add `API_ALLOW_PARALLEL` to support multiple parallel execution calls; WARNING: control the command names, don't try to execute multiple identical commands, and be careful, it could allocate much memory during upload / download, fix #332
- Add support for `--partitions` on the create, upload, download, restore CLI commands and API endpoints, fix #378, proper implementation of #356
- Add implementation of `--diff-from-remote` for the `upload` command and properly handle `required` on the download command, fix #289
- Add `print-config` cli command, fix #366 (see the sketch after this list)
- Add `UPLOAD_BY_PART` (default: true) option to improve upload/download concurrency, fix #324
- Add support of the ARM platform for Docker images and pre-compiled binary files, fix #312
- KeepRemoteBackups should respect differential backups, fix #111
- Add `SFTP_DEBUG` option, fix #335
- Add the ability to restore schema `ON CLUSTER`, fix #145
- Add support for encrypted disks (including s3 encrypted disks), fix #260
- API Server optimization for speed of `last_backup_size_remote` metric calculation to make it async during REST API startup and after download/upload, fix #309
- Improve `list remote` speed via a local metadata cache in `$TEMP/.clickhouse-backup.$REMOTE_STORAGE`, fix #318
- Add `CLICKHOUSE_IGNORE_NOT_EXISTS_ERROR_DURING_FREEZE` option, fix #319
- Add support for PROJECTION, fix #320
- Return the `clean` cli command and the API `POST /backup/clean` endpoint, fix #379
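A minimal usage sketch of the `print-config` command added in the list above; it prints the configuration that clickhouse-backup resolves from its config file and environment:

```bash
# inspect the resolved configuration
clickhouse-backup print-config
```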
BUG FIXES
- fix #300, allow GCP to properly work with an empty `GCP_PATH` value
- fix #340, properly handle errors on S3 during Walk() and deletion of old backups
- fix #331, properly restore tables which have a table name identical to the database name
- fix #311, properly run clickhouse-backup inside a docker container via entrypoint
- fix #317, properly upload large files to Azure Blob Storage
- fix #220, properly handle total_bytes for the `uint64` type
- fix #304, properly handle the archive extension during download instead of using config settings
- fix #375, properly handle the `REMOTE_STORAGE=none` error
- fix #379, will try to clean `shadow` if `create` fails during `moveShadow`
- more precise calculation of backup size during `upload`, for backups created with `--partitions`, fix bug after #356
- fix `restore --rm` behavior for 20.12+ for tables which have dependent objects (like dictionaries)
- fix concurrency of directory creation on `FTP` during upload, reduce connection pool usage
- properly handle the `--schema` parameter to show local backup size after `download`
- fix restore bug for WINDOW VIEW, thanks @zvonand
EXPERIMENTAL
- Try to add experimental support for backing up `MaterializedMySQL` and `MaterializedPostgreSQL` tables; restoring MySQL tables is not possible now without replacing `table_name.json` with `Engine=MergeTree`, PostgreSQL is not supported now, see ClickHouse/ClickHouse#32902
HOT FIXES
- fix 409, avoid deleting partially uploaded backups via the `backups_keep_remote` option
HOT FIXES
- fix 390, respect skip_tables patterns during restore and skip all INFORMATION_SCHEMA related tables even if skip_tables doesn't contain an INFORMATION_SCHEMA pattern
IMPROVEMENTS
- Add REST API `POST /backup/tables/all`, fix `POST /backup/tables` to respect `CLICKHOUSE_SKIP_TABLES`
BUG FIXES
- fix #297, properly restore tables which have fields with the same name as the table name
- fix #298, properly create `system.backup_actions` and `system.backup_list` integration tables for ClickHouse before 21.1
- fix #303, ignore leading and trailing spaces in `skip_tables` and `--tables` parameters
- fix #292, lock clickhouse connection pool to a single connection
IMPROVEMENTS
- Add REST API integration tests
BUG FIXES
- fix #290
- fix #291
- fix `CLICKHOUSE_DEBUG` settings behavior (now we can see debug logs from clickhouse-go)
INCOMPATIBLE CHANGES
- REST API `/backup/status` now returns only the latest executed command with a status and error message
IMPROVEMENTS
- Added REST API `/backup/list/local` and `/backup/list/remote` to allow listing backup types separately
- Decreased background backup creation time via REST API `/backup/create` by avoiding listing remote backups to update the metrics value
- Decreased backup creation time by avoiding scanning the whole `system.tables` when the `table` query string parameter or `--tables` cli parameter is set
- Added `last` and `filter` query string parameters to REST API `/backup/actions`, to avoid passing long JSON documents to the client
- Improved `FTP` remote storage parallel upload / download
- Added `FTP_CONCURRENCY` option, by default MAX_CPU / 2
- Added `FTP_DEBUG` setting to allow debugging FTP commands
- Added `FTP` to CI/CD on any commit
- Added race condition check to CI/CD
BUG FIXES
- environment variable `LOG_LEVEL` now applies to `clickhouse-backup server` properly
- fix #280, incorrect prometheus metrics measurement for `/backup/create`, `/backup/upload`, `/backup/download`
- fix #273, return `S3_PART_SIZE` back, but calculate it smartly
- fix #252, now you can pass the `last` and `filter` query string parameters
- fix #246, incorrect error messages when using `REMOTE_STORAGE=none`
- fix #283, properly handle error messages from the `FTP` server
- fix #268, properly restore legacy backups for schemas without a database name
BUG FIXES
- fix broken `system.backup_list` integration table after adding the `required field` in #263
- fix #274, invalid `SFTP_PASSWORD` environment usage
IMPROVEMENTS
- Added concurrency settings for upload and download, which allow loading table data in parallel for each table and each disk for multi-disk storages
- Up golang version to 1.17
- Updated go library dependencies to actual versions (excluding azure)
- Add Clickhouse 21.8 to the test matrix
- Now `S3_PART_SIZE` does not restrict the upload size; partSize is calculated depending on `MAX_FILE_SIZE`
- improve logging for the delete operation
- Added `S3_DEBUG` option to allow debugging the S3 connection
- Decrease the number of SQL queries to `system.*` during backup commands
- Added options for RBAC and CONFIG backup, look at `clickhouse-backup help create` and `clickhouse-backup help restore` for details
- Add `S3_CONCURRENCY` option to speed up backup upload to `S3`
- Add `SFTP_CONCURRENCY` option to speed up backup upload to `SFTP`
- Add `AZBLOB_USE_MANAGED_IDENTITY` support for ManagedIdentity for azure remote storage, thanks https://github.com/roman-vynar
- Add clickhouse-operator kubernetes manifest which runs `clickhouse-backup` in `server` mode on each clickhouse pod in a kubernetes cluster
- Add detailed description and restrictions for incremental backups
- Add `GCS_DEBUG` option
- Add `CLICKHOUSE_DEBUG` option to allow low-level debug for `clickhouse-go`
BUG FIXES
- fix #266, properly restore a legacy backup format
- fix #244, add `read_timeout` and `write_timeout` to the client-side timeout for `clickhouse-go`
- fix #255, restrict connection pooling to 1 in `clickhouse-go`
- fix #256, `remote_storage: none` was breaking compression
- fix #266, legacy backups from versions prior to 1.0 can't be restored without `allow_empty_backup: true`
- fix #223, backup only database metadata for proxy-integrated database engines like MySQL, PostgresSQL
- fix `GCS` global buffer wrong usage when UPLOAD_CONCURRENCY > 1
- Remove the unused `SKIP_SYNC_REPLICA_TIMEOUTS` option
BUG FIXES
- Fixed silent cancel of uploading when a table has more than 4k files, fix #203, #163. Thanks mastertheknife
- Fixed download error for `zstd` and `brotli` compression formats
- Fixed bug when old-format backups weren't cleared
IMPROVEMENTS
- Added diff backups
- Added retries to the restore operation to resolve complex table dependencies (Thanks @Slach)
- Added SFTP remote storage (Thanks @combin)
- Now databases will be restored with the same engines (Thanks @Slach)
- Added `create_remote` and `restore_remote` commands
- Changed the compression format list: added `zstd`, `brotli` and disabled `bzip2`, `sz`, `xz`
BUG FIXES
- Fixed empty backup list when S3_PATH and AZBLOB_PATH are root
- Fixed azblob container issue (Thanks @atykhyy)
IMPROVEMENTS
- Added 'allow_empty_backups' and 'api.create_integration_tables' options
- Wait for clickhouse in server mode (fix #169)
- Added Disk Mapping feature (fix #162)
BUG FIXES
- Fixed 'ftp' remote storage (#164)
It is the last release of v0.x.x
IMPROVEMENTS
- Added 'create_remote' and 'restore_remote' commands
- Changed update config behavior in API mode
BUG FIXES
IMPROVEMENTS
- Support for new versions of ClickHouse (#155)
- Support of Atomic Database Engine (#140, #141, #126)
- Support of multi disk ClickHouse configurations (#51)
- Ability to upload and download specific tables from backup
- Added partitions backup on remote storage (#83)
- Added support for backup/upload/download schema only (#138)
- Added a new backup format; select it with the `compression_format: none` option
BROKEN CHANGES
- Changed a backup format
- Incremental backup on remote storage feature is not supported now, but will be supported in future versions
IMPROVEMENTS
- Added `CLICKHOUSE_AUTO_CLEAN_SHADOW` option for cleaning the shadow folder before backup. Enabled by default.
- Added `CLICKHOUSE_SYNC_REPLICATED_TABLES` option for syncing replicated tables before backup. Enabled by default.
- Improved statuses of operations in server mode
BUG FIXES
- Fixed bug with semaphores in server mode