Monitor System Resources Automatically Using Bash + Cron
Introduction
System performance monitoring is a fundamental maintenance routine every developer, sysadmin, or DevOps engineer should automate. Whether you’re tracking CPU spikes, analyzing memory leaks, or auditing disk usage growth, building a lightweight monitoring system using Bash and Cron is a simple yet powerful approach. This guide will walk you through creating an automated script that logs CPU, memory, and disk usage — using just native Linux tools and scheduled tasks.
1. Setting Up Your Monitoring Environment
Before writing code, let’s create a structured environment for storing logs and scripts. Organizing properly helps maintain clarity and avoids configuration chaos.
# Create directories for scripts and logs
mkdir -p ~/sys_monitor/scripts
mkdir -p ~/sys_monitor/logs
# Navigate to the script folder
cd ~/sys_monitor/scripts
The ~/sys_monitor/scripts directory stores our monitoring bash script, while ~/sys_monitor/logs stores the daily logs generated by Cron. You’ll eventually review these logs to spot performance trends over time.
2. Writing the Resource Monitoring Script
Now, let’s write a Bash script that captures system resource usage including CPU, memory, and disk utilization. We’ll use standard command-line tools — mpstat (from the sysstat package), free, and df — to extract the metrics.
#!/bin/bash
# Log file path based on current date (~ does not expand inside quotes, so use $HOME)
LOG_FILE="$HOME/sys_monitor/logs/system_$(date +%F).log"
# Capture timestamp
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
echo "------ System Resource Report: $TIMESTAMP ------" >> "$LOG_FILE"
# CPU Usage (100 minus the idle percentage reported by mpstat)
echo "CPU Usage:" >> "$LOG_FILE"
mpstat 1 1 | awk '/Average/ {print 100-$NF"%"}' >> "$LOG_FILE"
# echo -e is needed to interpret \n; plain echo would print it literally
echo -e "\nMemory Usage:" >> "$LOG_FILE"
free -h >> "$LOG_FILE"
echo -e "\nDisk Usage:" >> "$LOG_FILE"
df -h --total | grep total >> "$LOG_FILE"
echo -e "-----------------------------------------------\n" >> "$LOG_FILE"
Explanation: The script calculates CPU utilization using mpstat (from the sysstat package, which you may need to install), prints memory stats via free, and summarizes disk usage with df. Using $(date +%F) in the file name produces one log file per day. Save the script as monitor.sh in ~/sys_monitor/scripts and make it executable with chmod +x monitor.sh.
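If you prefer a compact one-line memory summary over the full free table, a one-liner like this works (a sketch; the field positions assume the standard procps free output, where the "Mem:" line lists total and used in columns 2 and 3):

```shell
# Percentage of memory in use, computed from the "Mem:" line of free
free | awk '/^Mem:/ {printf "Memory: %.1f%% used\n", $3/$2*100}'
```

Swapping this in for the free -h line keeps daily logs much smaller while still capturing the trend.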
3. Automating Execution with Cron
Manual execution is great for testing, but automation is the real goal. Cron allows us to schedule this script to run at regular intervals — for example, every hour.
Edit your cron table with:
crontab -e
Add this line to schedule the script:
0 * * * * /bin/bash ~/sys_monitor/scripts/monitor.sh
This job runs the script at the top of each hour, logging CPU, memory, and disk utilization automatically. To confirm the job is firing, check the cron entries in the system log (the path below is the Debian/Ubuntu location; on other systemd-based distributions, journalctl -u cron serves the same purpose):
grep CRON /var/log/syslog
Tip: Use relative paths with caution. Cron often runs with limited environment variables, so use full paths (/usr/bin/mpstat instead of mpstat) to avoid command-not-found errors.
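One way to apply that tip globally (a sketch; adjust the directories to wherever your tools actually live) is to declare PATH at the top of the crontab, where variable assignments apply to every job listed below them:

```shell
# At the top of crontab -e: assignments here apply to all jobs below
PATH=/usr/local/bin:/usr/bin:/bin

0 * * * * /bin/bash $HOME/sys_monitor/scripts/monitor.sh
```

With PATH set, the script can call mpstat, free, and df by name without hard-coding each binary’s location.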
4. Parsing and Reviewing the Logged Data
After running for some time, you’ll have multiple daily logs in your logs directory. Parsing these logs helps identify performance changes. A simple example:
# Show logged CPU values (each appears on the line after a "CPU Usage:" header)
grep -A1 'CPU Usage' ~/sys_monitor/logs/*.log
# Or extract disk usage trends ($5 is the Use% column of df's "total" line)
grep 'total' ~/sys_monitor/logs/*.log | awk '{print $5}'
For deeper analysis, you can import logs into a spreadsheet or use CLI tools like gnuplot for charting usage patterns. You could even extend the script to output metrics in CSV format for automation pipelines.
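As a sketch of that CSV idea (the metrics.csv file name and the column names are my own choices, not anything standard), the per-run logging could be condensed into a single appended row:

```shell
#!/bin/bash
# Append one CSV row per run: timestamp, CPU %, memory %, disk %
CSV_FILE="$HOME/sys_monitor/logs/metrics.csv"
mkdir -p "$(dirname "$CSV_FILE")"

# Write the header only on first run
[ -f "$CSV_FILE" ] || echo "timestamp,cpu_pct,mem_pct,disk_pct" > "$CSV_FILE"

CPU=$(mpstat 1 1 | awk '/Average/ {printf "%.1f", 100-$NF}')
MEM=$(free | awk '/^Mem:/ {printf "%.1f", $3/$2*100}')
DISK=$(df -P --total | awk '/^total/ {gsub("%","",$5); print $5}')

echo "$(date '+%Y-%m-%d %H:%M:%S'),$CPU,$MEM,$DISK" >> "$CSV_FILE"
```

A CSV file like this imports directly into a spreadsheet or gnuplot without any further parsing.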
5. Enhancing the Script for Alerts and Optimization
To make your system monitoring truly proactive, you can include thresholds and notifications. For example, send an email alert if CPU usage exceeds 90% (the mail command requires a configured MTA, e.g. via the mailutils package):
CPU_USAGE=$(mpstat 1 1 | awk '/Average/ {print 100-$NF}')
# Strip any decimal part so the arithmetic comparison sees an integer
if (( ${CPU_USAGE%.*} > 90 )); then
    echo "High CPU alert: ${CPU_USAGE}% on $(hostname)" | mail -s "CPU Alert" admin@example.com
fi
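The same threshold pattern extends to disk space. Here is a sketch (the 90% threshold and the / mount point are arbitrary example values) that prints to stdout, so it works even without a mail setup:

```shell
#!/bin/bash
# Alert when the root filesystem crosses a usage threshold
THRESHOLD=90
# df -P guarantees single-line POSIX output; field 5 is "Use%" (e.g. "42%")
USAGE=$(df -P / | awk 'NR==2 {gsub("%","",$5); print $5}')
if (( USAGE > THRESHOLD )); then
    echo "High disk alert: ${USAGE}% used on $(hostname)"
fi
```

Piping the echo into mail -s, as in the CPU example, turns this into an email alert.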
Performance Optimization Tip: Avoid running the monitoring script too frequently on lower-end systems, as frequent execution can slightly impact performance. For most servers, an hourly schedule balances granularity with efficiency.
Conclusion
With just Bash and Cron, you’ve built a self-maintaining system resource monitor suitable for both personal servers and small production environments. This lightweight approach minimizes dependencies and works across nearly any Linux system. As you expand, consider exporting metrics to visualization tools like Grafana or Prometheus for real-time dashboards — but remember, simplicity and reliability often beat complexity.