Secure your data: automate backups of your server's files, configurations, and databases, then upload them to cloud storage.
Estimated Time: Approximately 60 - 120 minutes (initial setup & configuration)
Backups are not just important; they are absolutely essential for any server or website. Data loss can occur due to various reasons: hardware failure, accidental deletion, malicious attacks, or even software bugs. A robust, automated backup strategy is your ultimate insurance policy.
This guide combines the power of Bash scripting, cron scheduling, core backup utilities (`tar`, `mysqldump`, `pg_dumpall`), and cloud upload clients (AWS CLI and `rclone`).
You'll learn how to safeguard your server's core components: crucial files, system configurations, and databases, ensuring peace of mind and business continuity.
Time Required: 60 - 120 minutes (includes tool installation, script creation, cloud storage setup, and testing).
Difficulty: Intermediate. Assumes familiarity with the Linux terminal, basic scripting, and either AWS or Google Cloud concepts.
Prerequisites: A user with `sudo` privileges. Ensure your server is up-to-date and you have secure SSH access.
1. Update system packages:
sudo apt update && sudo apt upgrade -y
2. Secure SSH access: If you haven't already, consider setting up SSH key-based authentication. Refer to our FTP guide (Step 1) for detailed instructions on connecting via SSH with key files.
You'll need `tar` and `gzip` (usually pre-installed) for archiving/compression, and client tools for your database and cloud storage.
1. Install database client tools (if applicable):
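For example, on Ubuntu the standard client packages below provide `mysqldump` and `pg_dumpall`; install only the one(s) matching your database (package names assume Ubuntu's default repositories):
sudo apt install mysql-client -y        # provides mysqldump
sudo apt install postgresql-client -y   # provides pg_dump and pg_dumpall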
2. Install cloud storage upload tools. You only need the tool for the provider you use: the AWS CLI for S3, or `rclone` for Google Drive and many other services (install it via `apt`, or via the official script for the latest release):
sudo apt install awscli -y
sudo apt install rclone -y
curl https://rclone.org/install.sh | sudo bash
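Once installed, a quick version check confirms the tools are available on your PATH:
aws --version
rclone version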
Your backup script needs credentials to authenticate with your chosen cloud storage service. **Keep these credentials highly secure!**
You'll need an AWS IAM user with S3 write permissions (e.g., `s3:PutObject`) for your backup bucket, along with its Access Key ID and Secret Access Key. Refer to our AWS S3 guide (Step 1) for IAM user creation.
Configure the AWS CLI on your server:
aws configure
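The command prompts for four values; enter the IAM user's keys and your preferred region (the values below are placeholders):
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json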
Configuring `rclone` for Google Drive involves an interactive process that authenticates via your web browser. This must be done manually once.
rclone config
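The exact menu text varies between rclone versions, but the flow is roughly: create a new remote whose name matches the one used in the backup script (`gdrive_backup` below), choose Google Drive as the storage type, accept the defaults, and complete the OAuth authorization in your browser:
n/s/q> n
name> gdrive_backup     # must match the remote name used in CLOUD_DESTINATION
Storage> drive          # Google Drive
# accept the remaining defaults, then finish the browser-based authorization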
Now, we'll create a single Bash script that will handle all your backup tasks: file archiving, database dumping, compression, cleanup, and cloud upload.
1. Create a directory for your backup scripts:
sudo mkdir -p /opt/backup_scripts
2. Create the backup script file:
sudo nano /opt/backup_scripts/daily_server_backup.sh
3. Paste the following script content. Remember to customize all variables marked `YOUR_...` and uncomment sections relevant to your setup:
#!/bin/bash
# --- Configuration Variables ---
BACKUP_DIR="/var/backups/daily" # Local directory to store backups temporarily
RETENTION_DAYS=7 # How many days to keep local backups
LOG_FILE="/var/log/daily_backup.log" # Log file for this script's output
DATE=$(date +"%Y%m%d_%H%M%S")
HOSTNAME=$(hostname)
FULL_BACKUP_NAME="${HOSTNAME}_backup_${DATE}.tar.gz"
# --- Cloud Storage (Choose one and uncomment. Remember to configure credentials first!) ---
# AWS S3 Configuration (Requires 'aws configure' in Step 3A)
# CLOUD_DESTINATION="s3://YOUR_S3_BUCKET_NAME/server_backups/"
# Google Drive Configuration (Requires 'rclone config' in Step 3B)
CLOUD_DESTINATION="gdrive_backup:server_backups/" # gdrive_backup is remote name, server_backups is folder on Drive
# --- Database Configuration (Uncomment and set if backing up databases) ---
# MySQL Database Configuration
# DB_HOST="localhost"
# DB_USER="YOUR_MYSQL_USER" # User with backup privileges (NOT root if possible)
# DB_PASSWORD="YOUR_MYSQL_PASSWORD"
# MYSQLDUMP_OPTIONS="--single-transaction --skip-lock-tables" # Recommended for InnoDB
# PostgreSQL Database Configuration
# PG_USER="YOUR_PG_USER" # User with backup privileges
# PG_PASSWORD="YOUR_PG_PASSWORD"
# PG_DUMP_ALL_OPTIONS="--clean" # adds DROP statements; pg_dumpall outputs plain SQL only (custom format -Fc is pg_dump-specific)
# --- Script Logic ---
exec > >(tee -a $LOG_FILE) 2>&1 # Redirect all output to log file (stdout and stderr)
echo "--- Backup started at $(date) ---"
# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"
if [ $? -ne 0 ]; then echo "Error: Could not create backup directory $BACKUP_DIR"; exit 1; fi
# Remove old local backups based on RETENTION_DAYS
echo "Cleaning old local backups..."
find "$BACKUP_DIR" -type f -name "*.tar.gz" -mtime +$RETENTION_DAYS -exec rm {} \;
echo "Old local backups cleaned."
# --- Files and Configurations Backup ---
echo "Backing up files and configurations..."
tar -czvf "${BACKUP_DIR}/${FULL_BACKUP_NAME}" \
--exclude="${BACKUP_DIR}" \
--exclude="/proc/*" \
--exclude="/sys/*" \
--exclude="/dev/*" \
--exclude="/run/*" \
--exclude="/mnt/*" \
--exclude="/media/*" \
--exclude="/tmp/*" \
--exclude="/var/tmp/*" \
--exclude="/var/cache/*" \
/var/www \
/etc \
/home/ubuntu
# To back up additional paths (e.g. /opt/my-app-data), list them above /home/ubuntu
# as extra lines ending in "\", and keep the final path without a trailing backslash.
if [ $? -ne 0 ]; then echo "Error: File backup failed!"; exit 1; fi
echo "Files and configurations backup created: ${FULL_BACKUP_NAME}"
# --- Database Backup (Uncomment relevant section) ---
# MySQL Backup
# if [ -n "$DB_USER" ]; then
# echo "Backing up MySQL databases..."
# export MYSQL_PWD="$DB_PASSWORD" # Keeps the password off the command line (and out of the process list)
# /usr/bin/mysqldump -u "$DB_USER" -h "$DB_HOST" $MYSQLDUMP_OPTIONS --all-databases > "${BACKUP_DIR}/${HOSTNAME}_mysql_${DATE}.sql"
# if [ $? -ne 0 ]; then echo "Error: MySQL backup failed!"; unset MYSQL_PWD; exit 1; fi
# unset MYSQL_PWD
# gzip "${BACKUP_DIR}/${HOSTNAME}_mysql_${DATE}.sql"
# echo "MySQL backup created: ${HOSTNAME}_mysql_${DATE}.sql.gz"
# fi
# PostgreSQL Backup
# if [ -n "$PG_USER" ]; then
# echo "Backing up PostgreSQL databases..."
# export PGPASSWORD="$PG_PASSWORD" # For security, pass password this way
# /usr/bin/pg_dumpall -U "$PG_USER" $PG_DUMP_ALL_OPTIONS | gzip > "${BACKUP_DIR}/${HOSTNAME}_postgresql_${DATE}.sql.gz"
# if [ ${PIPESTATUS[0]} -ne 0 ]; then echo "Error: PostgreSQL backup failed!"; unset PGPASSWORD; exit 1; fi
# unset PGPASSWORD
# # pg_dumpall emits plain SQL, so it is piped through gzip above.
# echo "PostgreSQL backup created: ${HOSTNAME}_postgresql_${DATE}.sql.gz"
# fi
# --- Upload to Cloud Storage ---
echo "Uploading backups to cloud storage..."
if [ -n "$CLOUD_DESTINATION" ]; then
# AWS S3 Upload
# /usr/bin/aws s3 cp "${BACKUP_DIR}/${FULL_BACKUP_NAME}" "$CLOUD_DESTINATION"
# if [ $? -ne 0 ]; then echo "Error: AWS S3 upload failed!"; exit 1; fi
# echo "Uploaded to S3: ${CLOUD_DESTINATION}${FULL_BACKUP_NAME}"
# Google Drive (or other rclone remote) Upload
/usr/bin/rclone copy "${BACKUP_DIR}/${FULL_BACKUP_NAME}" "$CLOUD_DESTINATION" --progress --log-file="$LOG_FILE" --log-level INFO
if [ $? -ne 0 ]; then echo "Error: Rclone upload failed!"; exit 1; fi
echo "Uploaded to Google Drive (or Rclone remote): ${CLOUD_DESTINATION}${FULL_BACKUP_NAME}"
# Optional: Upload database backups separately if created (uncomment if applicable)
# if [ -f "${BACKUP_DIR}/${HOSTNAME}_mysql_${DATE}.sql.gz" ]; then
# /usr/bin/rclone copy "${BACKUP_DIR}/${HOSTNAME}_mysql_${DATE}.sql.gz" "$CLOUD_DESTINATION"
# echo "Uploaded MySQL backup to cloud."
# fi
# if [ -f "${BACKUP_DIR}/${HOSTNAME}_postgresql_${DATE}.sql.gz" ]; then
# /usr/bin/rclone copy "${BACKUP_DIR}/${HOSTNAME}_postgresql_${DATE}.sql.gz" "$CLOUD_DESTINATION"
# echo "Uploaded PostgreSQL backup to cloud."
# fi
else
echo "No cloud destination configured. Skipping cloud upload."
fi
echo "--- Backup finished at $(date) ---"
Save the file (`Ctrl+O`, `Enter`) and exit `nano` (`Ctrl+X`).
4. Make the script executable:
sudo chmod +x /opt/backup_scripts/daily_server_backup.sh
Before scheduling with cron, run the script manually to ensure it works correctly and doesn't produce errors. This is crucial for debugging.
1. Run the script:
sudo /opt/backup_scripts/daily_server_backup.sh
2. Check the log file for errors:
sudo cat /var/log/daily_backup.log
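A successful run's log should roughly resemble the following (hostname and timestamps will differ; tar's verbose file listing and rclone's progress output are omitted here):
--- Backup started at Mon May  5 02:00:01 UTC 2025 ---
Cleaning old local backups...
Old local backups cleaned.
Backing up files and configurations...
Files and configurations backup created: myserver_backup_20250505_020001.tar.gz
Uploading backups to cloud storage...
Uploaded to Google Drive (or Rclone remote): gdrive_backup:server_backups/myserver_backup_20250505_020001.tar.gz
--- Backup finished at Mon May  5 02:00:45 UTC 2025 ---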
3. Verify local backup files:
ls -lh /var/backups/daily/
4. Verify cloud upload: Log in to your AWS S3 console or Google Drive to confirm the backup file(s) arrived in the correct destination.
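You can also verify from the server's command line; for example, using the placeholder bucket and remote names from earlier (run whichever matches your setup):
aws s3 ls s3://YOUR_S3_BUCKET_NAME/server_backups/
rclone ls gdrive_backup:server_backups/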
Now that your script is working, we'll schedule it to run automatically using cron. We'll use the root user's crontab for system-wide backups.
1. Open the root user's crontab for editing:
sudo crontab -e
2. Add the cron job entry:
Add the following line to the end of the file to run the backup script daily at 2:00 AM.
0 2 * * * /opt/backup_scripts/daily_server_backup.sh
Save (`Ctrl+O`, `Enter`) and exit `nano` (`Ctrl+X`).
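To confirm the job was saved, list the root user's crontab:
sudo crontab -l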
Since your backup script writes its own log file, it's important to configure `logrotate` to prevent it from growing indefinitely and consuming disk space.
1. Create a logrotate configuration file for your backup log:
sudo nano /etc/logrotate.d/daily_backup
2. Paste the following content:
/var/log/daily_backup.log {
daily
rotate 7
compress
delaycompress
missingok
notifempty
create 640 root adm
# No postrotate command is needed: the backup script reopens the log file on each run.
}
Save (`Ctrl+O`, `Enter`) and exit `nano` (`Ctrl+X`).
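You can dry-run the new rule to confirm `logrotate` parses it without errors (debug mode makes no changes):
sudo logrotate -d /etc/logrotate.d/daily_backup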
A backup is only as good as its ability to be restored. This is the **most important step** to validate your entire backup strategy.
1. Download a backup file: Access your cloud storage (S3 console, Google Drive, or `aws s3 cp` / `rclone copy` from another server) and download one of your recent backup `.tar.gz` files and any database `.sql.gz` files.
2. Restore to a *different* server or local machine:
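The exact commands depend on what you backed up; as a rough sketch (archive names, paths, and credentials below are placeholders), a restore test might look like this:
# Extract the file archive into a scratch directory for inspection
mkdir -p /tmp/restore_test
tar -xzvf myserver_backup_20250505_020001.tar.gz -C /tmp/restore_test
# Restore a MySQL dump (the dump contains all databases)
gunzip -c myserver_mysql_20250505_020001.sql.gz | mysql -u root -p
# Restore a PostgreSQL dump produced by pg_dumpall (plain SQL)
gunzip -c myserver_postgresql_20250505_020001.sql.gz | psql -U postgres
Afterwards, spot-check a few restored files and run a couple of queries against the restored databases to confirm the data is intact.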
Before relying on it, confirm your automated backup system is functional, secure, and reliable: the cron job fires on schedule, the log shows no errors, backups appear in your cloud storage, credentials are locked down, and a test restore succeeds.
You have successfully implemented a comprehensive and automated backup solution for your Ubuntu server, securing your critical files, configurations, and databases. By leveraging cron jobs and cloud storage, you've built a robust foundation for disaster recovery.
To further strengthen your backup strategy, consider advanced steps and best practices such as encrypting backups before upload, alerting on failed runs, applying cloud retention or lifecycle policies, and scheduling periodic restore tests.
Need Expert Disaster Recovery Planning or Backup Solutions? Contact Us!