Build a Bash Script that Monitors Disk Space and Sends Alerts

Server reliability is paramount in any production environment. A common cause of server issues is running out of disk space, which leads to performance degradation and even system failures. In this tutorial, we’ll create a Bash script that monitors disk space, logs usage history for diagnostics, and sends email alerts when usage exceeds defined thresholds. This script is a great tool to automate server monitoring and enhance DevOps workflows.

1. Understanding the Goal: Disk Monitoring Essentials

Before diving into code, let’s clarify our script’s responsibilities:

  • Check disk usage using standard Unix tools like df.
  • Define a usage threshold (e.g., 80%) to trigger alerts.
  • Log disk usage into a historical file (e.g., daily growth tracking).
  • Send an email alert if usage exceeds our threshold.

We’ll accomplish all of the above using native Bash features and common CLI utilities, making it simple to deploy on any Unix-like server without installing extra dependencies.

2. Capturing Disk Usage with df

The df command (short for ‘disk free’) reports file system disk space usage. We can combine it with awk and sed to extract the percentage used.

#!/bin/bash

# File system to monitor
FILESYSTEM="/dev/sda1"

# Get the current usage percentage (df accepts a device or a mount point directly)
USAGE=$(df -h "$FILESYSTEM" | awk 'NR==2 {print $5}' | sed 's/%//')

echo "Current disk usage for $FILESYSTEM is ${USAGE}%"

Here, we:

  • Specify the device (e.g., /dev/sda1) or mount point (e.g., /).
  • Use df -h for human-readable output.
  • Extract the usage column and remove the ‘%’ symbol using sed.

3. Logging Disk Usage Over Time

Monitoring over time reveals trends, enabling proactive action. Let’s log disk usage with timestamps:

# Define log file
LOG_FILE="/var/log/disk_usage.log"

# Append usage with timestamp
TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
echo "$TIMESTAMP - $FILESYSTEM usage: ${USAGE}%" >> "$LOG_FILE"

It’s good practice to configure log rotation with logrotate so this file doesn’t grow unbounded.
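As a minimal sketch, a logrotate drop-in (for example, /etc/logrotate.d/disk_usage; the retention settings here are assumptions to tune for your environment) could look like:

/var/log/disk_usage.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}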

4. Sending Email Alerts When Thresholds Are Exceeded

When disk usage crosses a critical threshold (say, 85%), we want to email an alert to an administrator, using the mail (or mailx) command and a configured MTA such as Postfix.

# Usage threshold
THRESHOLD=85

# Email setup
EMAIL="admin@example.com"
SUBJECT="Disk Space Alert: $FILESYSTEM usage at ${USAGE}%"
MESSAGE="Warning: $FILESYSTEM is at ${USAGE}% capacity as of $TIMESTAMP. Please investigate."

# Check and send alert if threshold crossed
if [ "$USAGE" -ge "$THRESHOLD" ]; then
  echo "$MESSAGE" | mail -s "$SUBJECT" "$EMAIL"
fi

Make sure your server has a working mail transfer agent configured. You can test that sending works with:

echo "Test message" | mail -s "Test Subject" your_email@example.com

5. Wrapping It into a Reusable Script with Automation

Let’s combine all steps into a script and schedule it via cron for regular checks (e.g., every hour):

#!/bin/bash

FILESYSTEM="/dev/sda1"
THRESHOLD=85
EMAIL="admin@example.com"
LOG_FILE="/var/log/disk_usage.log"

USAGE=$(df -h "$FILESYSTEM" | awk 'NR==2 {print $5}' | sed 's/%//')
TIMESTAMP=$(date +"%Y-%m-%d %H:%M:%S")
echo "$TIMESTAMP - $FILESYSTEM usage: ${USAGE}%" >> "$LOG_FILE"

if [ "$USAGE" -ge "$THRESHOLD" ]; then
  SUBJECT="Disk Space Critical: $FILESYSTEM usage ${USAGE}%"
  MESSAGE="Critical Alert: $FILESYSTEM is at ${USAGE}% capacity as of $TIMESTAMP. Immediate action is needed."
  echo "$MESSAGE" | mail -s "$SUBJECT" "$EMAIL"
fi

Make the script executable:

chmod +x disk_monitor.sh

Schedule with cron (edit using crontab -e):

0 * * * * /path/to/disk_monitor.sh

This runs the script at the top of every hour.
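Cron runs jobs with a minimal environment, so it can help to set PATH explicitly and capture the script’s own output somewhere inspectable. A sketch of a crontab entry (the cron log path here is an assumption, and the script path remains a placeholder as above):

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
0 * * * * /path/to/disk_monitor.sh >> /var/log/disk_monitor_cron.log 2>&1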

Bonus: Enhancements and Best Practices

To make the script production-ready, consider:

  • Monitoring multiple file systems by looping over a list of mount points (see the sketch after this list).
  • Using syslog instead of a custom file: logger can write entries to the system log (e.g., /var/log/syslog).
  • Integrating with Slack or PagerDuty for alerting via APIs.
  • Adding dry run flags or verbosity controls for debugging.
  • Containerizing the script to run in Kubernetes or Docker.
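As a minimal sketch of the first two ideas, the loop below checks several mount points and records each reading with logger; the mount point list and the logger tag are assumptions to adapt to your servers:

#!/bin/bash

THRESHOLD=85
EMAIL="admin@example.com"

# Mount points to check (adjust to your system)
MOUNT_POINTS="/ /var /home"

for FS in $MOUNT_POINTS; do
  # -P forces the portable single-line format so awk sees a predictable column layout
  USAGE=$(df -P "$FS" | awk 'NR==2 {print $5}' | sed 's/%//')

  # Record the reading in syslog under a custom tag
  logger -t disk_monitor "$FS usage: ${USAGE}%"

  if [ "$USAGE" -ge "$THRESHOLD" ]; then
    echo "Warning: $FS is at ${USAGE}% capacity." | mail -s "Disk Space Alert: $FS at ${USAGE}%" "$EMAIL"
  fi
done

For Slack or PagerDuty integration, the same if block could instead POST a JSON payload to a webhook URL with curl rather than calling mail.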

Disk usage monitoring may not be sexy, but it saves teams from silent data-related disasters. With just a few lines of Bash, you gain powerful visibility into one of the core aspects of server health.
