You will notice in the images attached below that the security policies of the security groups have been set in a generic way, allowing incoming and outgoing connections to and from all addresses; we will later restrict them to the IP addresses randomly assigned by the ISPs at the time of writing. We will also configure the AWS CLI, specifying the IAM user needed to operate from the command line, and then proceed.
- The IAM user is created
- An AWS CLI access key is created under Security credentials > Access keys > Create access key
- The following policy is applied to the user above and to the specified VPC endpoint (dedicated to the backend), to allow access, reading and writing on the S3 bucket "pegasoial":
{
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::pegasoial"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object",
      "Resource": ["arn:aws:s3:::pegasoial/*"]
    }
  ]
}
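If the same step is carried out from the command line instead of the console, attaching the policy above as an inline policy could look roughly like the following sketch (the user name "pegasoial-user" and the file name policy.json are placeholders, not values taken from the project):
# Attach the JSON above, saved as policy.json, as an inline policy to the IAM user
aws iam put-user-policy --user-name pegasoial-user --policy-name pegasoial-s3-access --policy-document file://policy.json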
It has not been specified in the project work track, but it is also possible to create a "virtual private gateway" in order to establish a private channel for the data exchange.
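Purely as a sketch (this option was left out of the project configuration): for traffic towards S3 such a private channel is typically realized with a gateway VPC endpoint, which could be created from the CLI roughly as follows; the VPC ID, route table ID and region below are placeholders.
# Sketch only: create a gateway VPC endpoint for S3 (IDs and region are placeholders)
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --service-name com.amazonaws.REGION.s3 --route-table-ids rtb-0123456789abcdef0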
We will proceed anyway via CLI to perform the Backup and Disaster Recovery.
Installation of the compression app is performed:
apt install rar && apt install unrar
Installation of the AWS CLI is performed via snap:
sudo snap install aws-cli --classic
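The installation can then be checked with:
aws --version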
Please note that the cron scheduler is already pre-installed on Ubuntu desktop systems.
For the first access, the AWS CLI is configured with the command:
aws configure
In order, the following are entered:
- the AWS access key ID
- the AWS secret access key
- the default region name
- the default output format (text)
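A typical first-run session therefore looks like the following (the key values and the region shown are placeholders, not the project's real credentials):
aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: eu-south-1
Default output format [None]: text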
Now let's move on to scheduling the Backup & Disaster Recovery, which is organized unidirectionally, that is:
- the backup of the front end and back end is performed on the Andromeda machine
- the archive is sent to the S3 storage
- the recovery and deployment of the Disaster Recovery copy is performed on the Cassiopeia machine
Note
it is not necessary to automate changing ownership and permissions after each backup restore, since they remain stored in the properties of the individual files; it will, however, be necessary to check that the user privileges on the databases are preserved.
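A quick way to perform that check after a restore, using the same MySQL credentials as the scripts below, is for example:
# Check that the MySQL user still holds its privileges after a restore
mysql -u root -pialer -e "SHOW GRANTS FOR CURRENT_USER;"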
backup.sh:
andromeda will generate a compressed file called archive.rar containing the website and the database every hour via crontab, then upload it to the S3 bucket.
restore.sh:
cassiopea takes the compressed file archive.rar from the S3 bucket, extracts it, deploys the website and imports the .sql database, overwriting the current one, every hour via crontab.
For the occasion, we will install the rar program on each machine:
apt install rar && apt install unrar
andromeda/backup.sh:
# Dump the database next to the website files so it travels inside the archive
mysqldump -u root -pialer test > /var/www/html/Sito_test/test-dump.sql
cd /var/www/html
# Compress the site directory (which now also contains the dump) recursively
rar a -r archive.rar Sito_test
# Upload the archive to the S3 bucket
echo ialer | sudo -S aws s3 cp archive.rar s3://pegasoial
cassiopea/restore.sh:
# Download the latest archive from the S3 bucket
aws s3 cp s3://pegasoial/archive.rar /home/ubuntu
# Move it into the web root
echo ialer | sudo -S mv archive.rar /var/www/html
cd /var/www/html
# Extract, assuming yes and overwriting the existing files
echo ialer | sudo -S unrar x -y -o+ archive.rar
# Re-import the dump; the redirection is wrapped in sh -c so it does not swallow the password piped to sudo -S
echo ialer | sudo -S sh -c "mysql -u root -pialer test < /var/www/html/Sito_test/test-dump.sql"
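Before scheduling them, the two scripts can be run once by hand to verify the whole chain end to end (assuming they are saved in the home directories referenced by the crontab entries below):
# On andromeda: produce and upload the archive, then confirm it reached the bucket
sh /home/andromeda/backup.sh
aws s3 ls s3://pegasoial
# On cassiopea: pull the archive and redeploy site and database
sh /home/ubuntu/restore.sh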
crontab per machine:
By modifying /etc/crontab directly, among the system tables, we guarantee the effective functioning despite reboots; it is set as follows, per machine:
10 * * * * andromeda sh /home/andromeda/backup.sh && echo "$(date)" >> /home/andromeda/crontabbedA.txt && aws s3 cp /home/andromeda/crontabbedA.txt s3://pegasoial;
20 * * * * ubuntu sh /home/ubuntu/restore.sh && echo "$(date)" >> /home/ubuntu/crontabbedC.txt && aws s3 cp /home/ubuntu/crontabbedC.txt s3://pegasoial;
The crontabbedA.txt & crontabbedC.txt files, present on the S3 storage and on-premise, verify that the various scripts actually completed; they are used as a success LOG.
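The latest entries of those logs can be read directly from the bucket, for example:
# Stream the log files from S3 to stdout and show the most recent entries
aws s3 cp s3://pegasoial/crontabbedA.txt - | tail -n 3
aws s3 cp s3://pegasoial/crontabbedC.txt - | tail -n 3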
After the changes made to crontab, the following command is issued (on Ubuntu the service is named cron):
systemctl restart cron
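To confirm that cron has picked up the new entries, the service status and its recent log can be checked, for example:
# Verify the cron service is active and look at the latest scheduled runs
systemctl status cron --no-pager
journalctl -u cron -n 20 --no-pager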

