snapshots cleaned up globally #67
Comments
@kpande do you mean to use bookmarks in the synchronization code? If yes, do you have working synchronization code that uses bookmarks, and if so, could you post it? That would be very useful! :)
@kpande using bookmarks doesn't solve the issue, at least I don't see how. This is a case where the script causes data loss: it removes snapshots it shouldn't touch at all, which imho is pretty serious. The case is pretty simple: the snapshots follow the same naming scheme by default everywhere, and the cleanup procedure only takes into account the name of the snapshots, not the dataset/filesystem. So if you have 2 machines syncing snapshots to each other, the cleanup on each machine also destroys the snapshots it received from the other. It simply should not touch snapshots in datasets that are explicitly not being handled by `zfs-auto-snapshot`.
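For illustration, here is a minimal sketch of why name-keyed cleanup is dangerous (the dataset names are hypothetical, not from the report): a snapshot received via `zfs recv` keeps the same `zfs-auto-snap_hourly-...` name it had on the sender, so any destroy logic that matches only on the snapshot name will select received snapshots alongside locally created ones.

```sh
# Hypothetical listing: a locally created snapshot and a received one
# share the default naming scheme, so a name-only match hits both.
zfs list -H -t snapshot -o name | grep 'zfs-auto-snap_hourly-'
# tank/data@zfs-auto-snap_hourly-2015-01-01-0917         <- created locally
# tank/backup/data@zfs-auto-snap_hourly-2015-01-01-0917  <- received via zfs recv
```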
When specific volumes are specified instead of using `//`, the 'old' snapshots are cleaned up from the entire system. This is problematic since I have 2 systems, both running `zfs-auto-snapshot`, which do a periodic `zfs send`/`zfs receive` to each other, and `zfs-auto-snapshot` destroys parts of the synced content. Example: the following command is executed in cron:
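(The exact command was lost from the original report and is left unrecovered; as an illustrative stand-in only, a typical cron invocation of `zfs-auto-snapshot` on a specific dataset, using its real `--label` and `--keep` options, could look like this. The dataset name `tank/data` is hypothetical.)

```sh
# Illustrative stand-in for the lost command: snapshot a specific dataset
# hourly and keep only the 24 most recent hourly snapshots.
zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 tank/data
```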
I do a `zfs recv` to `tank/backup/...`, which means `zfs-auto-snap_hourly-...` snapshots exist there too. These are however also cleaned up, screwing up the send/receive. Currently I work around this by changing the prefix, but I still think this is a serious issue which is destroying data unexpectedly.