
Zevo #11

Open
wants to merge 20 commits into base: zevo

Conversation

mk01

@mk01 mk01 commented Jan 21, 2013

Darik, this is your master patched to zevo. I don't know if I can send it to zevo in a different state. If it's usable, use it.

mk01 added 2 commits January 21, 2013 08:35
…install scripts and "Tower of Hanoi" backup rotation scheme
… changed hanoi class calculation. It is now simpler and faster.
@dajhorn
Member

dajhorn commented Jan 23, 2013

This pull request doesn't automatically merge. I will try to rediff or rebase it.

@mk01
Author

mk01 commented Jan 23, 2013

Darik, the commits were generated against a "clean" zevo - a fork of your master. So I suppose rebasing against master should work. Sorry for the hassle.

mk01 and others added 18 commits February 3, 2013 06:53
Add second path to getopt as fallback (Homebrew support)
… are using the same server for backups, could be handy.

During the early connect to the backup destination, a check of the load is performed. If it's over the limit, the process sleeps for 5 minutes. This will happen up to 3 times, then the script will continue with a local run only, so we are not going to miss the backup schedule.

Based on the -i/-I parameter, these snapshots will be transferred to the remote system on the next run.
It could happen that a local snapshot was deleted without a successful send during hanoi rotation and --send-xxxx.
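
For illustration, the load-check behaviour described in the commit message above amounts to a bounded retry loop. A minimal sketch in Python, assuming the caller supplies whatever obtains the destination's load average; the threshold, constants, and names are illustrative, not the script's actual code:

```python
import time

MAX_RETRIES = 3          # per the commit message: sleep at most 3 times
SLEEP_SECONDS = 5 * 60   # 5 minutes between checks
LOAD_LIMIT = 4.0         # assumed threshold; the real script would make this configurable

def wait_for_low_load(get_load, limit=LOAD_LIMIT):
    """Return True once the reported load drops under `limit`, checking up
    to MAX_RETRIES times and sleeping 5 minutes between checks.  Returns
    False if the load never drops, so the caller falls back to a local-only
    run and the backup schedule is not missed."""
    for _ in range(MAX_RETRIES):
        if get_load() <= limit:
            return True
        time.sleep(SLEEP_SECONDS)
    return False

# Usage idea: get_load could parse the destination's load average, e.g.
# from `ssh backuphost cat /proc/loadavg`, before starting the remote send.
```
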
@FransUrbo
Contributor

I think it would be better to split all these changes into separate PRs instead of bunching them up in one big one with unrelated commits.

@mk01
Author

mk01 commented Apr 7, 2015

@FransUrbo

Turbo,

Actually that "just" happened over the years by progressing my fork and not removing the very first pull request.
Anyhow, I just read some of the latest discussions - you reimplemented most of the wanted functionality, didn't you?
In that case you wouldn't profit from rearranging the changes into a new PR, or ... ?

But let me know if there is still something you could make use of. Just generally - as I have not checked the new code in detail:

  • I implemented the hanoi "rotation". Thanks to its nature it provides an elegant answer for those requesting an "auto-destroy" option as a tool to free some space. Because the coverage of historical changes depends neither on the snapshotting interval nor on the amount of extra space, one doesn't need to spend extra time on creating special rotation plans or on historical cleanups; the problem is nicely minimised into simply destroying the oldest existing snapshot (a minimal sketch of the class calculation follows after this list).
  • I also implemented a variant of destroy that handles the source filesystem in a mode with no (or only minimal) local history. Assuming we use incremental snapshots to send data offsite and locally accessible history is not needed, the --keep parameter then considers the remote snapshot list: local history is kept until --send is successful, and removed afterwards. So you can have a week of regular snapshots transferred offsite with the same regularity - replicated exactly as created - while only --remove-local # snapshots are kept locally (1 at minimum, to provide a sync point) once all older ones have been sent over. Of course the snapshots are preserved (and none deleted) until the next successful --send session, no matter whether --send is simply failing or the plan is daily snapshots with --send run ad hoc or at wider intervals. Besides the benefit of extra free space, I use it to provide TimeMachine-like backups to external media such as USB drives (something which on darwin/macosx is restricted by Apple to work on top of HFS+ only). The user attaches the backup disk once in a while, the first snapshot event (always started with the same parameters, which include --send) succeeds, and that literally moves the history out of local storage. No extra code is needed for this, only a parameter telling the tool to skip the pipe with the tunnel to another machine (as the destination is local, just a different pool/zfs). A creative user can symlink this cron job into a usbmount post-mount hook, which together creates a fully automated USB backup tool for simple user-level requirements.
  • The last thing is the topic of the recursive build-up of the snapshot target list. Honestly, I considered @dajhorn's implementation "unfit" for my requirements too - so although the code in my fork might look the same, it works the way one expects from a recursive walk through the tree: starting the evaluation with any TARGET(s) specified on the command line and recursing down the tree while propagating the parent's snapshot attribute down the subtree to its children (a rough sketch appears further below).
    I'm writing "unfit" deliberately instead of "wrong", because we discussed this with @dajhorn and he also pointed to the wanted 1:1 functionality with the original Solaris implementation.
    Later I also asked a colleague for the Solaris installer providing the tool and tested a few scenarios, but could not replicate the same behaviour (maybe I didn't trigger the bug/quirk which @dajhorn mentions). While this does not necessarily mean it is a bug, I consider it (again, purely my own view) a maybe-bug (to call it your way). Practically this itself is not THAT important if correctly documented and -R exists, but at that time (2 years back) -R was very inconsistent between platforms from the point of view of preserving filesystem properties. It was implemented differently on each platform I worked with (bsd/mac/linux) and, if I recall correctly, the linux implementation at that point wasn't even conforming to its own documentation.
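
As a point of reference for the "hanoi" rotation in the first bullet, the class of a snapshot run can be derived from the run counter alone (the ruler sequence: level k comes up every 2^k runs). A minimal sketch with an assumed level cap - an illustration only, not the code in this branch:

```python
def hanoi_class(n, levels):
    """Tower-of-Hanoi rotation class for run number n (n >= 1): the index
    of the lowest set bit of n, capped at the highest available level.
    Level 0 recurs every 2nd run, level 1 every 4th, and so on, which is
    what gives the exponentially spaced retention."""
    cls = (n & -n).bit_length() - 1   # count of trailing zero bits
    return min(cls, levels - 1)

# First 16 runs with 5 levels -> the classic ruler pattern:
# [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4]
print([hanoi_class(n, 5) for n in range(1, 17)])
```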

Maybe now is the right moment (for you) to reevaluate it from these various angles again.
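
And a rough illustration of the recursive target-list walk from the last bullet: start from the TARGET(s) given on the command line and propagate the effective snapshot setting down to children that don't override it. The property model and names here are simplified assumptions (the real tool would read a dataset property such as com.sun:auto-snapshot via zfs get):

```python
def collect_targets(roots, children_of, local_setting, default=True):
    """Depth-first walk of the dataset tree starting at `roots`.
    `children_of(ds)` lists direct children; `local_setting(ds)` returns
    True/False when a dataset sets the property itself, or None to inherit
    from its parent.  Datasets whose effective value is True end up in the
    snapshot target list."""
    targets = []

    def walk(ds, inherited):
        local = local_setting(ds)
        effective = inherited if local is None else local
        if effective:
            targets.append(ds)
        for child in children_of(ds):
            walk(child, effective)   # the parent's effective value flows down

    for root in roots:
        walk(root, default)
    return targets
```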

One last remark, considering the other discussion between you and a user about multi-target --send destinations and differing snapshot histories. I know the implications are clear to you, but it's worth mentioning that the cross-platform differences in --send and --receive behaviour are interesting. Of course all of them do what they were designed for (still as Solaris documented it), but none describes in detail what will happen if ... (now considering all the various combinations: histories being completely different, sharing some common history and then diverging, source and/or destination being younger or older, having additional newer snapshots or missing some).

For instance on mac (before openzfs - now derived from zfsonlinux), if the topmost snapshot wasn't the same on the source as on the destination, it failed; so did older zfsonlinux, but newer versions can send to a destination if the two sides have some "common" history: the destination can be rolled back to a common snapshot and a new increment created and sent. So literally independent snapshot schedules could be running at the destination and it can still be used as a destination for an incremental send (I mean by zfs's native logic, not by doing magic on consolidating the histories manually beforehand). That specific case was documented, but the point stays - let's be careful, as one never knows when a BUG/undesired behaviour accidentally fits into somebody else's functionality/feature and so goes undetected, making the system owner cry for weeks.
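
To make the rollback-to-a-common-snapshot case concrete, here is a minimal sketch of picking the incremental base; the helper name and example snapshot names are made up, and the zfs commands in the comment are only the conceptual follow-up steps:

```python
def incremental_base(src_snaps, dst_snaps):
    """Given snapshot names in chronological order on the source and the
    destination, return the newest snapshot present on both sides, or None
    when the histories share nothing (then only a full send is possible)."""
    on_dst = set(dst_snaps)
    for snap in reversed(src_snaps):
        if snap in on_dst:
            return snap
    return None

# Diverged histories with a shared ancestor:
# incremental_base(["a", "b", "c", "d"], ["a", "b", "x"]) -> "b"
# Conceptually the destination is then rolled back and caught up:
#   zfs rollback -r dstpool/fs@b
#   zfs send -i @b srcpool/fs@d | zfs receive dstpool/fs
```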

mlk

@dajhorn
Member

dajhorn commented Apr 7, 2015

I'm writing "unfit" deliberately instead of "wrong", because we discussed this with @dajhorn and he also pointed to the wanted 1:1 functionality with the original Solaris implementation.

The requirement for bug and quirk compatibility is relaxed because OpenZFS is diverging from Solaris. Keeping strict POSIX conformance is, however, still a desirable thing.
