Storages
Currently supported storages:
- Amazon Simple Storage Service (S3)
- Rackspace Cloud Files (Mosso)
- Ninefold Cloud Storage
- Dropbox Web Service
- Remote Server (Protocols: FTP, SFTP, SCP, RSync)
- Local Storage
The following examples should be placed in your Backup configuration file:

```rb
Backup::Model.new(:my_backup, 'My Backup') do
  # examples go here...
end
```
Amazon Simple Storage Service (S3)

```rb
store_with S3 do |s3|
  s3.access_key_id     = 'my_access_key_id'
  s3.secret_access_key = 'my_secret_access_key'
  s3.region            = 'us-east-1'
  s3.bucket            = 'bucket-name'
  s3.path              = '/path/to/my/backups'
  s3.keep              = 10
end
```
Available regions:
- ap-northeast-1
- ap-southeast-1
- eu-west-1
- us-east-1
- us-west-1
You will need an Amazon AWS (S3) account. You can get one here.
Rackspace Cloud Files (Mosso)

```rb
store_with CloudFiles do |cf|
  cf.api_key   = 'my_api_key'
  cf.username  = 'my_username'
  cf.container = 'my_container'
  cf.path      = '/path/to/my/backups'
  cf.keep      = 5
  cf.auth_url  = 'lon.auth.api.rackspacecloud.com'
end
```
The `cf.auth_url` option allows you to provide a non-standard auth URL for the Rackspace API. By default the US API will be used; to use a different region's API, provide the relevant URL for that region. The example above demonstrates usage for the London region.
You will need a Rackspace Cloud Files account. You can get one here.
Ninefold Cloud Storage

```rb
store_with Ninefold do |nf|
  nf.storage_token  = 'my_storage_token'
  nf.storage_secret = 'my_storage_secret'
  nf.path           = '/path/to/my/backups'
  nf.keep           = 10
end
```
You will need a Ninefold account. You can get one here.
Dropbox Web Service

```rb
store_with Dropbox do |db|
  db.api_key    = 'my_api_key'
  db.api_secret = 'my_api_secret'
  # Dropbox Access Type
  # The default value is :app_folder
  # Change this to :dropbox if needed
  # db.access_type = :dropbox
  db.path       = '/path/to/my/backups'
  db.keep       = 25
  # db.timeout  = 300
end
```
To use the Dropbox service as a backup storage, you need two things:
- A Dropbox Account (Get one for free here: dropbox.com)
- A Dropbox App (Create one for free here: developer.dropbox.com)
The default `db.access_type` is `:app_folder`. If you have contacted Dropbox and upgraded your account to Full Dropbox Access, then you will need to set `db.access_type` to `:dropbox`.
NOTE The first link I provided is a referral link. If you create your account through that link, then you should receive an additional 500MB storage (2.5GB total, instead of 2GB) for your newly created account.
FOR YOUR INFORMATION you must run your backup to Dropbox manually the first time to authorize your machine with your Dropbox account. When you run the backup manually (e.g. `backup --trigger my_backup`), Backup will provide you with a URL which you must visit with your browser. Once you've authorized your machine, Backup will write the session out to a cache file and use that file from then on, so it won't prompt you to authorize again. This means you can run it in the background as normal, for example via a cron task.
A chunk size (see the Splitter wiki page) of 250MB seems to be too large for Dropbox; 100MB works.
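For example, the Splitter can be enabled on the model with `split_into_chunks_of`. This is only a minimal sketch; see the Splitter page for details:

```rb
Backup::Model.new(:my_backup, 'My Backup') do
  # Split the final package into 100MB chunks;
  # 250MB appears to be too large for Dropbox.
  split_into_chunks_of 100

  store_with Dropbox do |db|
    # ... Dropbox settings as shown above ...
  end
end
```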
Remote Server (FTP)

```rb
store_with FTP do |server|
  server.username = 'my_username'
  server.password = 'my_password'
  server.ip       = '123.45.678.90'
  server.port     = 21
  server.path     = '~/backups/'
  server.keep     = 5
end
```
TIP use SFTP if possible; it's a more secure protocol.
Remote Server (SFTP)

```rb
store_with SFTP do |server|
  server.username = 'my_username'
  server.password = 'my_password'
  server.ip       = '123.45.678.90'
  server.port     = 22
  server.path     = '~/backups/'
  server.keep     = 5
end
```
Remote Server (SCP)

```rb
store_with SCP do |server|
  server.username = 'my_username'
  server.password = 'my_password'
  server.ip       = '123.45.678.90'
  server.port     = 22
  server.path     = '~/backups/'
  server.keep     = 5
end
```
Remote Server (RSync)

```rb
store_with RSync do |server|
  server.username = 'my_username'
  server.password = 'my_password'
  server.ip       = '123.45.678.90'
  server.port     = 22
  server.path     = '~/backups/'
  server.local    = false # true if you want to store locally
end
```
NOTE If you only want to sync particular folders on your filesystem to a backup server, be sure to take a look at Syncers. They are, in most cases, more suitable for this purpose, especially if you're planning to transfer large amounts (gigabytes) of data in folders such as images, music, videos, and other heavy formats. (A rough sketch of a Syncer appears after this note.)

Example: Say you just transferred a backup of about 2000MB in size. 12 hours later, the Backup gem packages a new backup file for you and it appears to be 2050MB in size. Rather than transferring the whole 2050MB to the remote server, it'll look up the difference between the source and destination backups and only transfer the bytes that changed. In this case it'll transfer only around 50MB rather than the full 2050MB.

Note that you should NOT use any compressor (like Gzip) or encryptor (like OpenSSL or GPG) when using RSync. RSync has a hard time determining the difference between the source and destination files when they are compressed or encrypted, which will result in high bandwidth usage again.

TIP use this if you find that your bandwidth usage/server load is too high, if you want to back up more frequently, if you prefer incremental backups over cycling, or if you've outgrown Amazon S3 or Rackspace Cloud Files.
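For reference, a Syncer is configured with `sync_with` rather than `store_with`. The following is only a rough sketch; the option names are assumptions, so consult the Syncers wiki page for the authoritative syntax:

```rb
sync_with RSync do |rsync|
  rsync.ip       = '123.45.678.90'
  rsync.username = 'my_username'
  rsync.password = 'my_password'
  rsync.path     = '~/backups/'
  rsync.mirror   = true # assumption: mirror deletions to the remote

  rsync.directories do |directory|
    # sync these folders directly, rather than packaging them into an archive
    directory.add '/var/apps/my_app/public/uploads'
  end
end
```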
Cycling
The RSync Storage option does not support cycling, so you cannot specify `server.keep = amount` here. The reason you use RSync is probably that you want to benefit from its incremental backup feature to reduce bandwidth costs, heavy transfer load, and so on. The RSync protocol allows you to transfer only the changes of each backup, rather than the whole backup, which means only ONE copy of your backup will be stored at the remote location.

If you want to store multiple copies, here's an idea: create multiple `Backup::Model`s.
```rb
# Backup configuration file
1.upto(4) do |n|
  Backup::Model.new("my_backup_#{n}".to_sym, "My Backup") do
    store_with RSync do |rsync|
      rsync.path = "/backups/rsync_#{n}/"
      # ...
    end
  end
end
```
```rb
# Whenever gem configuration, for managing the crontab
every 1.day, :at => ('06'..'11').to_a.map {|x| "#{x}:00" } do
  command "backup --trigger my_backup_1"
end

every 1.day, :at => ('12'..'17').to_a.map {|x| "#{x}:00" } do
  command "backup --trigger my_backup_2"
end

every 1.day, :at => ('18'..'23').to_a.map {|x| "#{x}:00" } do
  command "backup --trigger my_backup_3"
end

every 1.day, :at => ('00'..'05').to_a.map {|x| "#{x}:00" } do
  command "backup --trigger my_backup_4"
end
```
This will ensure you have 4 copies of your backup on the remote server, each syncing once per hour during its own 6-hour window. The nice thing is that if your production server crashes or has a power outage, you'll lose at most an hour's worth of data if anything gets corrupted. The other nice thing is that if your database becomes corrupt for some reason and Backup still manages to dump and RSync the corrupt data, you'll still have 3 unharmed copies, provided you act within 6-24 hours of your application breaking.
Or of course, think of your own use cases (and let me know if you figure out any good ones!).
Local Storage

```rb
store_with Local do |local|
  local.path = '~/backups/'
  local.keep = 5
end
```
If multiple Storage options are configured for your backup, then the Local Storage option should be listed last. This allows the Local Storage option to transfer the final backup package file(s) using a move operation. If you configure a Local Storage and it is not the last Storage option listed in your backup model, then a warning will be issued and the final backup package file(s) will be transferred locally using a copy operation. This is because each Storage is performed in the order in which it is configured in your model.
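For example, a model that stores to both S3 and the local filesystem would list Local last. A sketch reusing the options shown above:

```rb
Backup::Model.new(:my_backup, 'My Backup') do
  store_with S3 do |s3|
    # remote storage settings as shown above...
  end

  # Local comes last, so the package file(s) are moved, not copied
  store_with Local do |local|
    local.path = '~/backups/'
    local.keep = 5
  end
end
```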
Most storage services place restrictions on the size of files being stored. To work around these limits, see the Splitter page.
Each Storage (except RSync) supports the `keep` setting, which specifies how many backups to keep at this location.

```rb
store_with SFTP do |sftp|
  sftp.keep = 5
end
```

Once the `keep` limit has been reached, the oldest backup will be removed. Note that if `keep` is set to 5, then the 6th backup will be transferred and stored before the oldest is removed.
For more information, see the Cycling page.
Default configuration
If you are backing up to multiple storage locations, you may want to specify default configuration so that you don't have to rewrite the same lines of code for each of the same storage types. For example, say that the Amazon S3 storage always has the same `access_key_id` and `secret_access_key`. You could add the following to your `~/Backup/config.rb`:
```rb
Backup::Storage::S3.defaults do |s3|
  s3.access_key_id     = "my_access_key_id"
  s3.secret_access_key = "my_secret_access_key"
end
```
Now every S3 storage you define will pick up the `access_key_id` and `secret_access_key` defaults we just specified, so you may omit them in the actual `store_with` block, like so:
```rb
store_with S3 do |s3|
  s3.bucket = "some-bucket"
  # no need to specify access_key_id
  # no need to specify secret_access_key
end
```
You would set defaults for `CloudFiles` by using:

```rb
Backup::Storage::CloudFiles.defaults do |storage|
  # ...and so forth, for every supported storage location.
end
```
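Settings given inside an individual `store_with` block are applied after the defaults, so a per-model value should take precedence over a default. A minimal sketch (the `container` value is illustrative):

```rb
store_with CloudFiles do |cf|
  # api_key and username come from the defaults above;
  # anything set here applies to this model only
  cf.container = 'my_container'
end
```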