apudiu/server-backup

This will backup specified paths and databases from specified configuration.

What

This is a utility for backing up websites from remote servers, but it can be used to back up any file or directory from any machine, assuming you have the right permissions to do so over SSH.

Why

This exists for an internal use case. At my work we have lots of websites across lots of servers, and I'm responsible for keeping backups of them as required. Unfortunately we have no DevOps person for that, so I wrote this utility to run those backups automatically. If you're in a similar situation, feel free to use this tool :)

How

  1. Download the appropriate binary from Releases, e.g. server-backup-linux-amd64 (referred to below as bin), or clone the repo & run build.sh to build from source. If you're not on Linux you may need to run go build ./cmd instead.
  2. Execute bin gen to generate a sample configuration.
  3. Customize the parameters in ./config/servers.yml & ./config/[server-ip]/[project-name].yml with your data.
  4. Run the backup by executing ./bin.

Step 4 can be added to cron for automated execution, so backups are triggered automatically at the desired intervals.
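Concretely, assuming the Linux amd64 release and that you rename the binary to bin, the steps might look like this (paths are illustrative; adjust for your setup):

```shell
# 1. make the downloaded release binary executable, rename it to "bin"
chmod +x server-backup-linux-amd64 && mv server-backup-linux-amd64 bin

# 2. generate sample configuration
./bin gen

# 3. edit ./config/servers.yml and ./config/<server-ip>/<project-name>.yml

# 4. run the backup
./bin

# optional: run nightly at 02:30 via cron (crontab -e); cd first so the
# relative ./config and ./backups paths resolve:
# 30 2 * * * cd /opt/server-backup && ./bin >> cron.log 2>&1
```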

Features

  1. Backs up project files as a zip
  2. Backs up all websites on all servers in parallel (with a single connection to each server)
  3. Downloads backups to local in parallel; uploads to S3 sequentially
  4. Supports an ignore list for paths excluded from the zip
  5. Exports the database (currently MySQL/MariaDB is supported) as a zip
  6. Transfers the file & DB backup zips to the local machine
  7. Uploads them to S3
  8. Keeps a specified number of backups for each project (website) from each server
  9. A-Z logs are available, in run.log & the per-project logs
  10. Project backup logs are included in the backup directory (and uploaded to S3 too)
  11. There's a [backup-dir]/run.log where a log summary is available (this is not uploaded to S3)

Config parameters ./config/servers.yml

The configuration works like this: first you define a server and list which directories should be backed up.
This file can contain config for multiple servers. The following is an example for one server.

| Key | Required? | Description |
|---|---|---|
| privateKeyPath | y | If you have a private key (PK) for the server, specify its location (on the local fs) |
| ip | y | Server IP address (where your websites are deployed) |
| port | y | SSH port |
| user | y | Username who can: (1) SSH into the server, (2) access the directories that need to be backed up, (3) write to the directories where backup files will be saved |
| password | y | Password of the provided user. If the PK is password protected, this is used to parse the PK; if no key is provided, this password is used for password authentication |
| projectRoot | y | Working directory on the server where projects are located. Projects must be under this directory |
| backupSources | y | List of directories (projects/websites) to back up. Each must be an immediate child of projectRoot; only listed projects are backed up. For each project listed here, a corresponding project config must reside at ./config/[ip]/[backupSource[n]].yml |
| backupDestPath | n | Local path where file backups are placed. Specify a custom path if you'd like; if not specified, ./backups is used as the local backup directory |
| s3User | n | Needed only if you want to transfer your backups to AWS S3. This is a locally configured AWS credential profile name (not an IAM user's name). The profile must have appropriate permission on the bucket s3Bucket |
| s3Bucket | n | Required if s3User is specified. AWS S3 bucket name where the provided profile has rw permission |

For example: if your server IP is 192.168.16.18 and your website folder is foo-website, then foo-website should be listed under backupSources and the project config should reside at ./config/192.168.16.18/foo-website.yml. You can create it by copying an existing one or customizing a generated one.

Following is an example of servers.yml

servers:
  # If you have a private key (PK) for the server, specify its location
  - privateKeyPath: /home/user/serverKey.pem
    # server ip address for ssh
    ip: 192.168.0.100
    # ssh port
    port: 22
    # ssh user
    user: privilegedUserWhoCanDoYourTasks
    # Provide password if you don't have PK or the PK is password protected
    # if the PK is password protected this password will be used to parse the PK
    # if no key provided this password will be used to log in as password auth
    password: "123456"
    # working directory in the server where projects are located
    # projects must be under this dir
    projectRoot: /var/www/php80
    # list project (dir) names under @projectRoot
    # only specified projects will be backed up
    # if you specify a project here, corresponding project config should reside in @ip/<project name>.yml
    backupSources:
      - order-online
      - buy-sell
    # If you like specify custom backup path
    backupDestPath: ""
    # AWS user who has appropriate permissions for uploading files
    # this user/ profile need to be configured in accessible way in the runner machine
    s3User: s3-user-who-can-upload-to-the-bucket
    # AWS S3 bucket name where the provided user can upload files
    s3Bucket: s3-bucket-name
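Since s3User refers to a locally configured AWS CLI profile, the machine running the backups needs a matching entry in its AWS credentials file. A placeholder example (profile name taken from the config above, keys are dummies):

```ini
# ~/.aws/credentials (placeholder values)
[s3-user-who-can-upload-to-the-bucket]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```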

You can find this in the ./config_sample directory, or you can generate a sample one as mentioned above.

Config parameters for project ./config/[server-ip]/[project-dir].yml

This file contains the config for one project/website.

| Key | Required? | Description |
|---|---|---|
| path | y | Directory name (inside server.projectRoot). This directory will be zipped & downloaded |
| excludePaths | n | List of paths to exclude while zipping. To exclude a whole directory, write it like dir/to/exclude/* |
| zipFileName | n | Customize the zip name if needed. By default it will be like yyyy-mm-dd_path.zip. It's recommended not to change this unless necessary |
| envFileInfo | n | Required when you want to back up your DB and don't have / don't want to provide DB credentials explicitly in the dbInfo section. The provided env file is parsed to back up your DB (currently MySQL/MariaDB is supported). If you don't need a DB backup, leave this empty or delete the section |
| envFileInfo.path | n | Path of a .env file inside the project path. Its contents are parsed using the provided key names & used to dump the DB |
| envFileInfo.dbHostKeyName | n | Inside the .env file, the key name that holds the DB host IP address |
| envFileInfo.dbPortKeyName | n | Inside the .env file, the key name that holds the DB port number |
| envFileInfo.dbUserKeyName | n | Inside the .env file, the key name that holds the DB username |
| envFileInfo.dbPassKeyName | n | Inside the .env file, the key name that holds the DB password for the user (envFileInfo.dbUserKeyName) |
| envFileInfo.dbNameKeyName | n | Inside the .env file, the key name that holds the DB name |
| dbInfo | n | Required when you want to back up your DB and don't have a .env file or don't want to provide it via the envFileInfo section. Used to log in to the database for dumping. You can provide both envFileInfo and this section; if env parsing fails, these values are used. When both are provided, (parsed) values from envFileInfo override this section's values |
| dbInfo.hostIp | n | DB host IP address |
| dbInfo.port | n | DB port number |
| dbInfo.user | n | DB username |
| dbInfo.pass | n | Password for dbInfo.user |
| dbInfo.name | n | DB name |
| backupCopies | n | Number of backup copies to keep. If not specified, or 0 (zero) is provided, the 3 latest copies are kept by default and the rest are deleted. The given number of copies is kept locally & in S3 (if configured) |

For example, if you specify 5 to keep the latest 5 copies of this project, it will back up first and then check whether there are more than 5 copies locally & in S3. Any extra copies found are deleted (from local & S3), oldest first, so that the latest n backups are kept.
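The retention step can be sketched as follows. This is a hypothetical illustration, not the tool's actual code; it relies on the default yyyy-mm-dd_ name prefix, so lexicographic order matches chronological order:

```go
package main

import (
	"fmt"
	"sort"
)

// keepLatest returns the backup names that should be deleted so that
// only the newest n remain. Names are assumed to start with a
// yyyy-mm-dd date prefix, so sorting strings sorts by age.
func keepLatest(backups []string, n int) []string {
	if n <= 0 {
		n = 3 // the documented default: keep 3 latest copies
	}
	if len(backups) <= n {
		return nil // nothing extra to delete
	}
	sorted := append([]string(nil), backups...)
	sort.Sort(sort.Reverse(sort.StringSlice(sorted))) // newest first
	return sorted[n:]                                 // everything past the newest n
}

func main() {
	backups := []string{
		"2024-01-01_order-online.zip",
		"2024-03-05_order-online.zip",
		"2024-02-10_order-online.zip",
		"2024-04-20_order-online.zip",
	}
	// with backupCopies: 3, only the oldest copy is slated for deletion
	fmt.Println(keepLatest(backups, 3))
}
```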

Following is an example of ./config/[server-ip]/[project-dir].yml

# project path inside server.projectRoot
path: order-online
# exclude these paths when zipping
excludePaths:
  # for ignoring whole directory it must end with "/*"
  - api/vendor/*
  - api/storage/framework/*
  - api/storage/logs/*
  - api/.rsyncIgnore
  - www/vendor/*
# customize zip name if needed
# by default this will be like yyyy-mm-dd_path.zip
zipFileName: ""
# For db backup
envFileInfo:
  # Provide the path of a .env file;
  # contents of this file will be parsed by the provided keys & used to dump the db
  path: api/.env
  dbHostKeyName: DB_HOST
  dbPortKeyName: DB_PORT
  dbUserKeyName: DB_USERNAME
  dbPassKeyName: DB_PASSWORD
  dbNameKeyName: DB_DATABASE
# if a .env file is not provided, provide the info directly
# when an env file path is provided, it will be used instead of the values here
dbInfo:
  hostIp: ""
  port: 0
  user: ""
  pass: ""
  name: ""
# number of backup copies to keep; if not specified or 0 is provided,
# then by default the 3 latest copies will be kept & the rest deleted
backupCopies: 5
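For reference, a .env file matching the key names in the example above would contain entries like the following (placeholder values; the DB_* names are typical Laravel-style keys):

```shell
# api/.env (placeholder values)
DB_HOST=127.0.0.1
DB_PORT=3306
DB_USERNAME=appuser
DB_PASSWORD=secret
DB_DATABASE=order_online
```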

You can find this file at ./config/[server-ip]/[project-dir].yml, or you can generate a sample one as mentioned above.

Contribution

I've built what we need so far, but I think this has the potential to become a great tool: one that works with many different DBs, supports hostnames instead of server IPs, etc. If you need these kinds of features, go ahead and extend/modify it. If you think your work could be useful to others or improve the tool, submit a PR; I'd appreciate it.

Getting Help

Something not working for you? Open an issue, I'll try to help :)
