Practical Bash Logging: Capturing Script Output Like a Pro


Introduction

Logging is an often-overlooked but essential part of building robust Bash automation scripts. Whether you’re managing servers, automating deployments, or running scheduled maintenance tasks, adding structured logging can provide traceability, simplify debugging, and make your scripts production-ready. In this article, we’ll walk through practical ways to capture Bash script output, include timestamps, redirect errors effectively, and maintain a clean, understandable log structure.

1. Foundations of Bash Logging

In its simplest form, logging in Bash means redirecting output to a file. Every Bash command produces two types of output: standard output (stdout) and standard error (stderr). By default, both print to the terminal — but with redirection operators, you can separate and store them.

#!/usr/bin/env bash
# simple_logging.sh

echo "Starting script..." > output.log
ls /nonexistent_directory 2> error.log

echo "Script completed." >> output.log

Here, > redirects stdout to output.log, while 2> redirects stderr to error.log. Note that a redirection applies only to the command it follows, so each command names its own destination, and >> appends rather than truncates. This separation lets you monitor normal and error output independently; for many production scripts it is wise to keep success messages and errors in distinct files.
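As a quick reference, the common redirection forms look like this (written into a throwaway directory purely for illustration):

```shell
#!/usr/bin/env bash
# Redirection quick reference, using a temporary directory.
tmpdir=$(mktemp -d)

echo "first run"  > "$tmpdir/out.log"    # > truncates, then writes stdout
echo "second run" >> "$tmpdir/out.log"   # >> appends instead of truncating
ls /nonexistent_directory 2>> "$tmpdir/err.log" || true  # 2>> appends stderr only

wc -l < "$tmpdir/out.log"    # both stdout lines survived the append
```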

2. Adding Timestamps to Log Files

To make logs more readable, timestamps can mark when each event occurred. Bash’s date command provides flexible formatting that can be prepended to log lines. Defining a reusable logging function makes this even cleaner.

#!/usr/bin/env bash
# timestamp_logging.sh

LOGFILE="script_$(date +%Y%m%d).log"

log() {
    local level="$1"; shift
    echo "$(date '+%Y-%m-%d %H:%M:%S') [$level] $*" | tee -a "$LOGFILE"
}

log "INFO" "Starting backup process..."
if cp /etc/passwd /tmp/passwd_backup; then
    log "INFO" "Backup completed successfully."
else
    log "ERROR" "Backup failed!"
fi

In this example, each line receives a timestamp, a log level, and a message. The tee -a command writes both to the log file and the terminal, keeping you informed in real time. The log function pattern helps maintain consistency across all log statements in larger scripts.
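A related pattern (my own sketch, not part of the script above) captures a command's combined output and feeds it back through the same function, so the log records what the command actually said:

```shell
#!/usr/bin/env bash
LOGFILE=$(mktemp)

log() {
    local level="$1"; shift
    echo "$(date '+%Y-%m-%d %H:%M:%S') [$level] $*" | tee -a "$LOGFILE"
}

# Capture stdout and stderr together, then log at the appropriate level.
if output=$(uname -s 2>&1); then
    log "INFO" "uname reported: $output"
else
    log "ERROR" "uname failed: $output"
fi
```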

3. Combining stdout and stderr in One Log

Sometimes, you may want all output in a single file. Bash’s redirection operators can merge these streams as follows:

#!/usr/bin/env bash
# combined_logging.sh

exec >> script_output.log 2>&1

echo "Running system report..."
uname -a
ls /no/such/path

The line exec >> script_output.log 2>&1 redirects all stdout and stderr for the entire script into script_output.log. >> appends output (useful for long-running scripts), and 2>&1 merges stderr into stdout. This setup is especially valuable for cron jobs or background daemons that don’t have a visible terminal.
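If you still need to print something to the original stdout after the exec redirection, a common idiom is to save it on a spare file descriptor first. A minimal sketch (filenames here are placeholders):

```shell
#!/usr/bin/env bash
logfile=$(mktemp)

exec 3>&1                 # save the original stdout on spare fd 3
exec >> "$logfile" 2>&1   # everything below now goes to the log

echo "this lands in the log"
echo "this reaches the original stdout" >&3   # explicit escape hatch

exec 1>&3 3>&-            # restore stdout and close the spare fd
echo "normal output is back"
```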

4. Structured Logging with Function Wrappers

When scripts grow in complexity, using structured log levels improves traceability and debugging. Here’s an example of a logging module that categorizes messages by severity.

#!/usr/bin/env bash
# structured_logging.sh

LOGFILE="/var/log/myapp.log"
mkdir -p "$(dirname "$LOGFILE")"

log() {
    local level="$1"; shift
    local msg="$*"
    printf '%s [%s] %s\n' "$(date +'%Y-%m-%d %H:%M:%S')" "$level" "$msg" >> "$LOGFILE"
}

info() { log INFO "$@"; }
warn() { log WARN "$@"; }
error() { log ERROR "$@"; }

info "Initialization started."
warn "Low memory detected."
error "Failed to fetch configuration file."

This structure lets you filter logs efficiently with commands like grep 'ERROR' /var/log/myapp.log. It’s also compatible with log aggregation tools if you move to centralized monitoring in the future (such as the ELK stack or Loki).
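If you do move to an aggregator, emitting one JSON object per line is usually easier to parse than free text. Here is a minimal sketch; the field names ts/level/msg are my own choice, and the escaping below only handles embedded double quotes:

```shell
#!/usr/bin/env bash
LOGFILE=$(mktemp)

json_log() {
    local level="$1"; shift
    local msg="$*"
    msg=${msg//\"/\\\"}    # naive escaping: double quotes only
    printf '{"ts":"%s","level":"%s","msg":"%s"}\n' \
        "$(date +'%Y-%m-%dT%H:%M:%S%z')" "$level" "$msg" >> "$LOGFILE"
}

json_log ERROR 'Failed to fetch "config.yml"'
```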

5. Advanced Tips: Log Rotation and Performance

Logging over time can balloon file sizes quickly. Use Linux’s logrotate utility to rotate and compress old logs automatically; drop a config like the following into /etc/logrotate.d/:

/var/log/myapp.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}

For performance, avoid excessive echo calls inside loops. Instead, store output temporarily and write in batches when possible. Also, use set -euo pipefail at the script start to catch failures early and log them clearly.

Conclusion

Logging transforms Bash scripts from ad-hoc automations into reliable, production-ready tools. By applying the principles above—redirecting output, appending timestamps, categorizing messages, and managing rotation—you can build transparent and maintainable scripts that scale with your automation needs. The next time something fails at 3 a.m., you’ll thank yourself for those well-structured log files.
