Backed-up table with PARTITION BY (a,b) cannot be deleted from remote #1015

Open
lesandie opened this issue Sep 27, 2024 · 4 comments
@lesandie (Contributor) commented Sep 27, 2024
We have a table partitioned by a tuple, like this:

PARTITION BY (toStartOfInterval(timeStamp, toIntervalHour(1)), timePeriod)

The partition ID looks like 1727434800-4, and the file name generated in the bucket is default_1727434800%2D4_0_83_14.tar (the hyphen is percent-encoded as %2D).

If we issue clickhouse-backup delete remote <backupname>, all prefixes in the S3 bucket are deleted except the ones with these file names.

clickhouse-backup 2.6.1
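
One possible explanation (an assumption, not confirmed against the clickhouse-backup source) is a key-encoding mismatch: the object key stored in the bucket contains the literal sequence %2D, so if any layer percent-decodes the key before issuing the delete, the request targets a key that does not exist. A minimal Go sketch of that mismatch:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Key exactly as it appears in the bucket: the hyphen in the partition id
	// 1727434800-4 is stored percent-encoded as %2D.
	stored := "default_1727434800%2D4_0_83_14.tar"

	// If a delete path percent-decodes the listed key before calling DeleteObject,
	// it ends up targeting a different key than the one actually stored.
	decoded, err := url.QueryUnescape(stored)
	if err != nil {
		panic(err)
	}
	fmt.Println(stored)            // default_1727434800%2D4_0_83_14.tar
	fmt.Println(decoded)           // default_1727434800-4_0_83_14.tar
	fmt.Println(stored == decoded) // false: a delete on the decoded key misses the object
}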

@Slach Slach added this to the 2.6.2 milestone Sep 27, 2024
@Slach Slach self-assigned this Sep 27, 2024
@Slach (Collaborator) commented Sep 27, 2024

Thanks for reporting. This looks weird: clickhouse-backup just lists keys by prefix based on the backup name and should delete each key as is. I will try to reproduce this in the integration tests.
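
For reference, a minimal sketch of that list-by-prefix-then-delete pattern using aws-sdk-go-v2 (illustration only, not the actual clickhouse-backup code; the bucket and prefix values are hypothetical):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// deleteBackupPrefix lists every key under prefix and deletes each key exactly
// as it was returned by the listing, without re-encoding or decoding it.
func deleteBackupPrefix(ctx context.Context, client *s3.Client, bucket, prefix string) error {
	paginator := s3.NewListObjectsV2Paginator(client, &s3.ListObjectsV2Input{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix),
	})
	for paginator.HasMorePages() {
		page, err := paginator.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, obj := range page.Contents {
			if _, err := client.DeleteObject(ctx, &s3.DeleteObjectInput{
				Bucket: aws.String(bucket),
				Key:    obj.Key, // delete the key as listed
			}); err != nil {
				return err
			}
			fmt.Println("deleted", *obj.Key)
		}
	}
	return nil
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)
	// Hypothetical bucket/prefix values for illustration.
	if err := deleteBackupPrefix(ctx, client, "dr-backup", "backup/test_partition_by/"); err != nil {
		log.Fatal(err)
	}
}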

@Slach Slach modified the milestones: 2.6.2, 2.6.3 Oct 8, 2024
@Slach (Collaborator) commented Oct 8, 2024

I can't reproduce this:

CREATE TABLE t1 (timeStamp DateTime, timePeriod UInt64) ENGINE=MergeTree ORDER BY timeStamp PARTITION BY (toStartOfInterval(timeStamp, toIntervalHour(1)), timePeriod);
INSERT INTO t1 SELECT now()+INTERVAL number MINUTE, number FROM numbers(10);

then, in clickhouse-backup:

LOG_LEVEL=debug clickhouse-backup -c /etc/clickhouse-backup/config-s3.yml create test_partition_by
LOG_LEVEL=debug clickhouse-backup -c /etc/clickhouse-backup/config-s3.yml upload test_partition_by
LOG_LEVEL=debug S3_DEBUG=1 clickhouse-backup -c /etc/clickhouse-backup/config-s3.yml delete remote test_partition_by

After that,

ls -la /bitnami/minio/data/clickhouse/

in MinIO shows that the directory is empty.
Which remote_storage type did you use?

@lesandie (Contributor, Author) commented Oct 10, 2024

which remote_storage type did you use?

I'll check

@lesandie (Contributor, Author):

general:
  allow_empty_backups: false
  backups_to_keep_local: {N_KEEP} 
  backups_to_keep_remote: {N_KEEP} 
  disable_progress_bar: false
  log_level: info
  max_file_size: 1099511627776
  remote_storage: s3
clickhouse:
  host: localhost
  port: 9000
  username: {USERNAME}
  password: {PASSWORD}
  disk_mapping: {}
  freeze_by_part: false
  log_sql_queries: false
  secure: false
  skip_sync_replica_timeouts: true
  skip_tables:
    - system.*
    - INFORMATION_SCHEMA.*
    - information_schema.*
  skip_verify: false
  sync_replicated_tables: true
  timeout: 5m
s3:
  bucket: dr-backup
  endpoint: "https://{URL}"
  access_key: {ACCESS_KEY}
  secret_key: {SECRET_KEY}
  region: {REGION}
  acl: private
  compression_format: tar
  compression_level: 1
  disable_cert_verification: true
  disable_ssl: false
  force_path_style: false
  part_size: 536870912
  path: {BP_APP}
  sse: ''
  storage_class: STANDARD

@Slach Slach modified the milestones: 2.6.3, 2.6.4 Nov 4, 2024