Need better detection when clickhouse-backup doesn't have the same disk as clickhouse-server #1037
Comments
Could you also check ...?
@Slach, for the second command, there are only two files at /var/lib/clickhouse/backup/shard{shard}-full-20240425171537 (download.state being one of them). But in the remote backup I can see both the shadow and metadata folders.
Hi @Slach, the output for the first command is:
ls -la /var/lib/clickhouse/backup/shard{shard}-full-20241030033305/metadata/vector_storage
total 108
We have detected that our most recent backup, despite the list indicating a download size of 737.47 GiB, contains only the metadata folder, with no shadow folder present. Is there any known reason for this issue? Thank you in advance for your assistance.
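A quick way to tell whether a backup actually carries data, rather than only table definitions, is to compare the local backup directory against the remote listing. A minimal sketch, assuming the default /var/lib/clickhouse/backup path and a hypothetical backup name:

```bash
# Hypothetical backup name; substitute the one reported by `list`.
BACKUP=shard0-full-20240425171537

# A backup that actually carries data should contain both metadata/ and shadow/.
ls -la "/var/lib/clickhouse/backup/${BACKUP}/"

# Rough on-disk size; a metadata-only backup is a few KiB, not hundreds of GiB.
du -sh "/var/lib/clickhouse/backup/${BACKUP}/"

# Compare with what the remote storage claims.
clickhouse-backup list remote
```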
Could you share ...?

Could you share ...?
It means the upload was started but did not complete successfully.
This contradicts the following information:
You can't download 737 GiB in 612 ms. Check your source cluster logs. How did you run it? Did you read ...?
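One way to tell whether a transfer actually moved data or exited early is to look at the backup's state file and the container logs. A minimal sketch, assuming a hypothetical backup name and a container named clickhouse-backup:

```bash
# Hypothetical backup and container names; substitute your own.
BACKUP=shard0-full-20240425171537

# download.state records the progress of a partial/resumable download;
# its presence without a shadow/ directory may mean the transfer never finished.
cat "/var/lib/clickhouse/backup/${BACKUP}/download.state"

# Scan the clickhouse-backup container logs for upload/download failures.
docker logs clickhouse-backup 2>&1 | grep -iE 'error|warn|upload|download'
```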
The list command displays the backup size, but in my AZBLOB container, there's no shadow folder, only the metadata. This led me to believe that the backups were intact, but to my surprise, the backup actually failed. I'm now inclined to disregard the results from the list command and instead always verify my backup by restoring it on a separate machine to ensure it's complete. It’s more work, but necessary.
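A restore-based verification like this can be scripted so it runs the same way every time. A minimal sketch, assuming a spare machine with clickhouse-server and clickhouse-backup installed, and a hypothetical backup and table name:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical backup name; take the latest entry from `clickhouse-backup list remote`.
BACKUP=shard0-full-20241030033305

# Pull the backup from remote storage onto the verification machine.
clickhouse-backup download "${BACKUP}"

# Restore both schema and data locally.
clickhouse-backup restore "${BACKUP}"

# Spot-check that rows, not just table definitions, came back (hypothetical table).
clickhouse-client --query "SELECT count() FROM vector_storage.events"
```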
Yes, I did. To run the backup, I launch a container from the altinity/clickhouse-backup:2.6.2 image, which accesses the ClickHouse server data through a shared volume mounted at /var/lib/clickhouse/. Are there any concerns with this setup?
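For reference, a minimal sketch of this kind of setup, with both containers mounting the same named volume at /var/lib/clickhouse (the volume, container, and backup names here are illustrative, not taken from this issue):

```bash
# Create the shared named volume once.
docker volume create clickhouse_data

# ClickHouse server writes its data into the shared volume.
docker run -d --name clickhouse-server \
  -v clickhouse_data:/var/lib/clickhouse \
  clickhouse/clickhouse-server:24.9

# clickhouse-backup sees the same files through the same volume; sharing the
# server's network namespace lets it reach ClickHouse on localhost:9000.
docker run --rm --name clickhouse-backup \
  --network container:clickhouse-server \
  -v clickhouse_data:/var/lib/clickhouse \
  altinity/clickhouse-backup:2.6.2 create my-backup
```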
Could you share ...?
Hi @Slach, I have identified the cause of the error. The ClickHouse container was creating a dynamic volume named clickhouse_clickhouse_data, while the clickhouse-backup container expected a volume named clickhouse_data. This discrepancy in volume names resulted in a configuration error within the clickhouse-backup stack, preventing access to the necessary data directory and leading to backups that contained only metadata.

To prevent similar issues in the future, I recommend modifying the clickhouse-backup tool to halt the backup operation if it cannot access the data directory. This adjustment would help eliminate false positives during execution, like the ones in my environment that ultimately contributed to this incident.

Thank you for your assistance and support throughout this process!
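Before trusting any backup from a setup like this, it is worth confirming that both containers really mount the same volume. A minimal sketch, with hypothetical container names:

```bash
# Print which named volume each container mounts, and where.
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}' clickhouse-server
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}' clickhouse-backup

# A stray compose-prefixed duplicate (e.g. clickhouse_clickhouse_data next to
# clickhouse_data) is exactly the kind of mismatch described above.
docker volume ls
```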
Maybe we need some detection here, distinguishing two different cases: either the table was empty after FREEZE, or we just don't have access to the disk (/var/lib/clickhouse in your case).
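Until such a check exists in the tool itself, the same idea can be approximated with a small wrapper that refuses to run when the data directory is missing or empty. A minimal sketch, assuming the default /var/lib/clickhouse path:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Path the backup container expects to share with clickhouse-server.
DATA_DIR=/var/lib/clickhouse/data

# Refuse to produce a metadata-only backup if the data directory is absent or empty.
if [ ! -d "${DATA_DIR}" ] || [ -z "$(ls -A "${DATA_DIR}")" ]; then
  echo "ERROR: ${DATA_DIR} is missing or empty; aborting backup" >&2
  exit 1
fi

# Create and upload in one step (hypothetical backup name pattern).
clickhouse-backup create_remote "full-$(date +%Y%m%d%H%M%S)"
```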
Hello everyone,
I'm facing issues with restoring backups using Altinity/clickhouse-backup. The first issue involves a full backup from May/24. This backup contains both the shadow and metadata directories with data, but when I run the restore command, the following warning appears in the logs:
I noticed that this backup lacks a metadata.json file in the root directory, which differentiates it from recent backups. Could this missing file be causing the restore issue?
For a more recent backup, listed as 700 GB in size, the restore operation only retrieves metadata, not the actual data. Below are the commands executed, the output of the download command, and the custom config.yml.
config.yml configuration:
Any ideas on why the restoration for both backups is incomplete—one failing to locate tables and the other only retrieving metadata? Could this be related to configuration, storage, or version differences in Altinity/clickhouse-backup?
I'm using clickhouse-backup version 2.6.2 with ClickHouse version 24.9.2.42.