automation

All posts tagged automation by Linux Bash
  • Posted on

    Automating Log File Management with Bash Scripts

    Log file management is essential for maintaining a healthy system, especially when dealing with large volumes of log data. Bash scripts can automate tasks like log rotation, archiving, and cleanup to ensure disk space is conserved and logs remain organized. This guide provides a step-by-step approach to creating a script for managing log files.


    Step 1: Writing a Basic Log Management Script

    Here’s a foundational Bash script to handle basic log file management tasks such as archiving and cleanup:

    #!/bin/bash
    
    # Variables
    LOG_DIR="/var/log/myapp"     # Directory containing log files
    ARCHIVE_DIR="/var/log/archive" # Directory for archived logs
    RETENTION_DAYS=30             # Number of days to retain logs
    LOG_FILE="/var/log/log_management.log" # Log file for script actions
    
    # Function to archive logs
    archive_logs() {
      mkdir -p "$ARCHIVE_DIR"
      for LOG in "$LOG_DIR"/*.log; do
        [ -e "$LOG" ] || continue  # Skip when no .log files match the glob
        gzip "$LOG"
        mv "$LOG.gz" "$ARCHIVE_DIR"
        echo "[$(date)] INFO: Archived $LOG to $ARCHIVE_DIR." >> "$LOG_FILE"
      done
    }
    
    # Function to clean up old logs
    cleanup_logs() {
      find "$ARCHIVE_DIR" -type f -mtime +$RETENTION_DAYS -exec rm -f {} \;
      echo "[$(date)] INFO: Cleaned up logs older than $RETENTION_DAYS days in $ARCHIVE_DIR." >> "$LOG_FILE"
    }
    
    # Main script
    case $1 in
      archive)
        archive_logs
        ;;
      cleanup)
        cleanup_logs
        ;;
      all)
        archive_logs
        cleanup_logs
        ;;
      *)
        echo "Usage: $0 {archive|cleanup|all}" >&2
        exit 1
        ;;
    esac
    

    To see what the script has done, view the file referenced by $LOG_FILE, for example: cat /var/log/log_management.log
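
    For a quick manual test before scheduling it, the script can be run directly; the file name and location below are illustrative:

    # Make the script executable (one-time step)
    chmod +x log_management.sh
    
    # Archive current logs, clean up old archives, or do both
    ./log_management.sh archive
    ./log_management.sh cleanup
    ./log_management.sh all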


    Step 2: Automating the Script with Cron

    To automate log file management tasks, schedule the script using cron. For example, to archive logs daily and clean up old logs weekly:

    1. Open the crontab editor:

      crontab -e
      
    2. Add entries for archiving and cleanup:

      # Archive logs daily at midnight
      0 0 * * * /var/log/log_management.sh archive
      
      # Clean up old logs every Sunday at 1 AM
      0 1 * * 0 /var/log/log_management.sh cleanup
      
    3. Save and exit. The tasks will now execute automatically.


    Step 3: Enhancements and Customizations

    1. Email Notifications

    Add functionality to send email alerts for errors or status updates:

    send_email_notification() {
      local SUBJECT="Log Management Notification"
      local BODY="$1"
      echo "$BODY" | mailx -s "$SUBJECT" admin@example.com
    }
    
    # Call this function after critical actions or errors
    send_email_notification "Archived logs and cleaned up old files."
    

    2. Handling Multiple Log Directories

    Extend the script to handle logs from multiple directories:

    LOG_DIRS=("/var/log/app1" "/var/log/app2")
    for DIR in "${LOG_DIRS[@]}"; do
      LOG_DIR="$DIR"
      ARCHIVE_DIR="$DIR/archive"
      archive_logs
      cleanup_logs
    done
    

    3. Compression Alternatives

    Use alternative compression tools like bzip2 or xz for better compression ratios:

    archive_logs() {
      mkdir -p "$ARCHIVE_DIR"
      for LOG in "$LOG_DIR"/*.log; do
        [ -e "$LOG" ] || continue  # Skip when no .log files match the glob
        xz "$LOG"
        mv "$LOG.xz" "$ARCHIVE_DIR"
        echo "[$(date)] INFO: Archived $LOG to $ARCHIVE_DIR using xz." >> "$LOG_FILE"
      done
    }
    

    4. Integration with Monitoring Tools

    Export log management data for use with monitoring tools like Grafana:

    export_metrics() {
      local METRICS_FILE="/var/log/metrics.csv"
      # Write the header only on the first run so rows accumulate over time
      [ -f "$METRICS_FILE" ] || echo "Timestamp,Archived Logs,Deleted Logs" > "$METRICS_FILE"
      local ARCHIVED=$(find "$ARCHIVE_DIR" -type f | wc -l)
      local DELETED=$(find "$ARCHIVE_DIR" -type f -mtime +$RETENTION_DAYS | wc -l)
      echo "$(date),$ARCHIVED,$DELETED" >> "$METRICS_FILE"
    }
    
    # Call this function periodically to update metrics
    export_metrics
    

    Conclusion

    Automating log file management with Bash scripts simplifies tasks like archiving, cleanup, and monitoring, ensuring logs are managed efficiently. By adding customizations like email notifications, compression options, and monitoring integration, you can tailor the solution to meet specific needs. Start building your log management script today to maintain a well-organized and resource-efficient system.

  • Posted on

    Creating a Bash Script for Managing User Accounts

    Managing user accounts is a critical administrative task in Linux systems. Automating these tasks with Bash scripts can save time and reduce errors. In this guide, we will walk through creating a Bash script to handle common user account operations such as creating users, deleting users, and modifying user attributes.


    Step 1: Writing a Basic User Management Script

    Here’s a foundational Bash script to manage user accounts:

    #!/bin/bash
    
    # Variables
    LOG_FILE="/path/to/user_management.log" # Log file for user management actions
    
    # Function to create a user
    create_user() {
      local USERNAME=$1
      if id "$USERNAME" &>/dev/null; then
        echo "[$(date)] ERROR: User $USERNAME already exists." >> "$LOG_FILE"
      else
        sudo useradd "$USERNAME"
        echo "[$(date)] INFO: User $USERNAME created successfully." >> "$LOG_FILE"
      fi
    }
    
    # Function to delete a user
    delete_user() {
      local USERNAME=$1
      if id "$USERNAME" &>/dev/null; then
        sudo userdel "$USERNAME"
        echo "[$(date)] INFO: User $USERNAME deleted successfully." >> "$LOG_FILE"
      else
        echo "[$(date)] ERROR: User $USERNAME does not exist." >> "$LOG_FILE"
      fi
    }
    
    # Function to modify a user
    modify_user() {
      local USERNAME=$1
      local OPTION=$2
      local VALUE=$3
      if id "$USERNAME" &>/dev/null; then
        sudo usermod "$OPTION" "$VALUE" "$USERNAME"
        echo "[$(date)] INFO: User $USERNAME modified with $OPTION $VALUE." >> "$LOG_FILE"
      else
        echo "[$(date)] ERROR: User $USERNAME does not exist." >> "$LOG_FILE"
      fi
    }
    
    # Main script
    case $1 in
      create)
        create_user "$2"
        ;;
      delete)
        delete_user "$2"
        ;;
      modify)
        modify_user "$2" "$3" "$4"
        ;;
      *)
        echo "Usage: $0 {create|delete|modify} username [options]" >&2
        exit 1
        ;;
    esac
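
    For a quick check, each operation can be invoked directly; the usernames and options below are illustrative:

    ./user_management.sh create alice
    
    # Change alice's login shell by passing a usermod option and its value
    ./user_management.sh modify alice -s /bin/bash
    
    ./user_management.sh delete alice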
    

    Step 2: Automating the Script with Cron

    To automate user account tasks, schedule the script using cron. For instance, to create a backup user every day at midnight:

    1. Open the crontab editor:

      crontab -e
      
    2. Add an entry to schedule the task:

      0 0 * * * /path/to/user_management.sh create backup_user
      
    3. Save and exit. The task will now execute daily.


    Step 3: Enhancements and Customizations

    1. Password Management

    Add functionality to set or reset passwords for users:

    set_password() {
      local USERNAME=$1
      echo "Enter new password for $USERNAME:"
      sudo passwd "$USERNAME"
      echo "[$(date)] INFO: Password updated for $USERNAME." >> "$LOG_FILE"
    }
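
    Because passwd prompts interactively, it is not suited to unattended runs. If a password must be set non-interactively, chpasswd is one option; a minimal sketch, assuming the new password is supplied as the second argument rather than hard-coded:

    set_password_noninteractive() {
      local USERNAME=$1
      local PASSWORD=$2   # Supplied by the caller; avoid storing secrets in the script
      echo "$USERNAME:$PASSWORD" | sudo chpasswd
      echo "[$(date)] INFO: Password updated for $USERNAME." >> "$LOG_FILE"
    }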
    

    2. Group Management

    Include options to add or remove users from groups:

    manage_group() {
      local USERNAME=$1
      local GROUPNAME=$2
      local ACTION=$3
      if id "$USERNAME" &>/dev/null && getent group "$GROUPNAME" &>/dev/null; then
        case $ACTION in
          add)
            sudo usermod -aG "$GROUPNAME" "$USERNAME"
            echo "[$(date)] INFO: User $USERNAME added to group $GROUPNAME." >> "$LOG_FILE"
            ;;
          remove)
            sudo gpasswd -d "$USERNAME" "$GROUPNAME"
            echo "[$(date)] INFO: User $USERNAME removed from group $GROUPNAME." >> "$LOG_FILE"
            ;;
          *)
            echo "[$(date)] ERROR: Invalid action $ACTION." >> "$LOG_FILE"
            ;;
        esac
      else
        echo "[$(date)] ERROR: User or group does not exist." >> "$LOG_FILE"
      fi
    }
    

    3. Bulk User Management

    Allow the script to process a list of users from a file:

    bulk_user_management() {
      local FILE=$1
      while IFS=, read -r ACTION USERNAME OPTION VALUE; do
        case $ACTION in
          create)
            create_user "$USERNAME"
            ;;
          delete)
            delete_user "$USERNAME"
            ;;
          modify)
            modify_user "$USERNAME" "$OPTION" "$VALUE"
            ;;
          *)
            echo "[$(date)] ERROR: Invalid action $ACTION in file." >> "$LOG_FILE"
            ;;
        esac
      done < "$FILE"
    }
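
    The format implied by the read loop is one comma-separated action per line. A hypothetical users.csv might look like this:

    create,alice
    create,bob
    modify,alice,-s,/bin/zsh
    delete,olduser

    It could then be processed with bulk_user_management users.csv, either by calling the function directly or by wiring it into the case statement as an extra option.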
    

    Conclusion

    A Bash script for managing user accounts simplifies administrative tasks by automating repetitive actions. With enhancements like password management, group handling, and bulk processing, you can streamline your workflows while ensuring consistency. Start building your user management script today to save time and improve efficiency!

  • Posted on

    How to Create a Bash Script to Monitor System Resources

    Monitoring system resources is vital for ensuring stable and efficient system performance. Bash scripts offer a lightweight and customizable way to track CPU usage, memory consumption, disk space, and more. This guide walks you through creating a Bash script to monitor these resources and explores advanced customizations for enhanced functionality.


    Step 1: Writing a Basic Monitoring Script

    Here's a fundamental Bash script for monitoring CPU, memory, and disk usage:

    #!/bin/bash
    
    # Variables
    LOG_FILE="/var/log/system_monitor.log" # Log file for resource data
    THRESHOLD_CPU=80                       # CPU usage threshold (in %)
    THRESHOLD_MEM=80                       # Memory usage threshold (in %)
    THRESHOLD_DISK=90                      # Disk usage threshold (in %)
    
    # Function to log system metrics
    log_metrics() {
      echo "[$(date)] CPU: $(top -bn1 | grep 'Cpu(s)' | awk '{print $2 + $4}')%" >> "$LOG_FILE"
      echo "[$(date)] Memory: $(free | grep Mem | awk '{print $3/$2 * 100.0}')%" >> "$LOG_FILE"
      echo "[$(date)] Disk: $(df / | grep / | awk '{print $5}')" >> "$LOG_FILE"
    }
    
    # Function to check thresholds and alert
    check_thresholds() {
      CPU=$(top -bn1 | grep 'Cpu(s)' | awk '{print $2 + $4}')
      MEM=$(free | grep Mem | awk '{print $3/$2 * 100.0}')
      DISK=$(df / | grep / | awk '{print $5}' | tr -d '%')
    
      if (( $(echo "$CPU > $THRESHOLD_CPU" | bc -l) )); then
        echo "[$(date)] ALERT: CPU usage is above $THRESHOLD_CPU%: $CPU%" >> "$LOG_FILE"
      fi
    
      if (( $(echo "$MEM > $THRESHOLD_MEM" | bc -l) )); then
        echo "[$(date)] ALERT: Memory usage is above $THRESHOLD_MEM%: $MEM%" >> "$LOG_FILE"
      fi
    
      if (( $DISK > $THRESHOLD_DISK )); then
        echo "[$(date)] ALERT: Disk usage is above $THRESHOLD_DISK%: $DISK%" >> "$LOG_FILE"
      fi
    }
    
    # Main script
    log_metrics
    check_thresholds
    

    Note: the script relies on bc for floating-point comparisons. If it is not installed, install it with dnf install bc (RHEL-based) or apt-get install bc (Debian-based).


    Step 2: Automating the Script with Cron

    To ensure the script runs at regular intervals, schedule it using cron:

    1. Open the crontab editor:

      crontab -e
      
    2. Add an entry to execute the script every 5 minutes:

      */5 * * * * /path/to/system_monitor.sh
      
    3. Save and exit. The script will now run automatically.
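
    After a few runs, you can confirm the script is collecting data by checking the log file it writes to (the path comes from the script's LOG_FILE variable):

    tail -n 20 /var/log/system_monitor.log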


    Step 3: Enhancements and Customizations

    1. Email Notifications

    You can configure the script to send email alerts using tools like mailx. Here’s an example:

    # Install mailx if it is not already available:
    #   apt-get install mailutils   (Debian-based systems)
    #   dnf install s-nail          (RHEL-based systems, version 9 and later)
    
    send_email_alert() {
      local SUBJECT="Resource Usage Alert"
      local BODY="Alert: High resource usage detected. Check logs for details."
      echo "$BODY" | mailx -s "$SUBJECT" user@example.com
    }
    
    # Add this function call to the alert conditions in the script
    send_email_alert
    

    2. Detailed Metrics

    Extend the script to include more data points. For example, to log network activity (this uses sar from the sysstat package; replace eth0 with your interface name):

    log_network_activity() {
      echo "[$(date)] Network Activity: $(sar -n DEV 1 1 | grep Average | grep eth0)" >> "$LOG_FILE"
    }
    
    # Call this function within log_metrics
    log_network_activity
    

    3. Graphical Visualization

    Export collected data to visualization tools. For example, generate CSV files for Grafana:

    export_to_csv() {
      local METRICS_FILE="/var/log/system_metrics.csv"
      # Write the CSV header only when the file is first created
      [ -f "$METRICS_FILE" ] || echo "Timestamp,CPU Usage,Memory Usage,Disk Usage" > "$METRICS_FILE"
      # Pull the most recent value of each metric from the monitoring log
      local CPU=$(grep "CPU:" "$LOG_FILE" | tail -n 1 | awk '{print $NF}')
      local MEM=$(grep "Memory:" "$LOG_FILE" | tail -n 1 | awk '{print $NF}')
      local DISK=$(grep "Disk:" "$LOG_FILE" | tail -n 1 | awk '{print $NF}')
      echo "$(date),$CPU,$MEM,$DISK" >> "$METRICS_FILE"
    }
    
    # Schedule this function to run periodically using cron
    

    4. Remote Monitoring

    Use SSH to collect data from multiple servers (key-based authentication is recommended so the script can run unattended):

    monitor_remote_server() {
      local REMOTE_SERVER="user@remote-server"
      ssh "$REMOTE_SERVER" "bash -s" < /path/to/system_monitor.sh
    }
    
    # Call this function to include remote server monitoring
    monitor_remote_server
    

    Conclusion

    Creating a Bash script to monitor system resources is an effective way to ensure system stability and performance. By automating the process and adding customizations like email alerts, detailed metrics, and remote monitoring, you can tailor the solution to meet your specific needs. Start building your own monitoring script today to stay ahead of potential issues.

  • Posted on

    Using Bash for Linux system monitoring and automation allows administrators to efficiently manage systems, optimize resources, and automate repetitive tasks. Here's an overview of how Bash can be leveraged for these purposes:


    System Monitoring with Bash

    Bash scripts can gather and display system performance data in real-time or at scheduled intervals.

    1. Monitor CPU and Memory Usage

    Use commands like top, htop, or free within Bash scripts to capture resource utilization.

    Example Script:

    #!/bin/bash
    
    echo "CPU and Memory Usage:"
    echo "----------------------"
    top -b -n1 | head -n 10
    free -h
    

    2. Disk Usage Monitoring

    The df and du commands can be used to check disk space and usage.

    Example Script:

    #!/bin/bash
    
    echo "Disk Space Usage:"
    echo "-----------------"
    df -h
    
    echo "Largest Files/Directories:"
    echo "--------------------------"
    du -ah / | sort -rh | head -n 10
    

    3. Network Monitoring

    Monitor active connections and network usage using tools like netstat, ss (the modern replacement for netstat), or ping.

    Example Script:

    #!/bin/bash
    
    echo "Active Network Connections:"
    echo "---------------------------"
    netstat -tuln
    
    echo "Network Latency to Google:"
    echo "--------------------------"
    ping -c 4 google.com
    

    4. Log Monitoring

    Use tail, grep, or awk to analyze log files.

    Example Script:

    #!/bin/bash
    
    log_file="/var/log/syslog"
    echo "Last 10 Log Entries:"
    echo "---------------------"
    tail -n 10 $log_file
    
    echo "Error Logs:"
    echo "-----------"
    grep "error" $log_file | tail -n 10
    

    Automation with Bash

    Bash scripts are ideal for automating repetitive tasks, improving efficiency, and reducing manual errors.

    1. Automated Backups

    Use rsync or tar to automate file backups.

    Example Script:

    #!/bin/bash
    
    src_dir="/home/user/documents"
    backup_dir="/backup/documents"
    timestamp=$(date +%F_%T)
    
    mkdir -p "$backup_dir"
    tar -czf "$backup_dir/backup_$timestamp.tar.gz" "$src_dir"
    
    echo "Backup completed: $backup_dir/backup_$timestamp.tar.gz"
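
    The text above also mentions rsync; a minimal rsync-based variant (same illustrative paths) keeps an up-to-date mirror instead of compressed snapshots:

    #!/bin/bash
    
    src_dir="/home/user/documents"
    backup_dir="/backup/documents"
    
    mkdir -p "$backup_dir"
    # -a preserves permissions and timestamps; --delete removes files no longer in the source
    rsync -a --delete "$src_dir/" "$backup_dir/"
    
    echo "Mirror updated: $backup_dir"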
    

    2. Task Scheduling

    Combine Bash scripts with cron to execute tasks at specified intervals.

    Schedule a Script:

    1. Edit the crontab file:

      crontab -e
      
    2. Add a line to schedule the script (for example, daily at 2 AM):

      0 2 * * * /path/to/backup_script.sh

    3. Software Updates

    Automate system updates using package managers.

    Example Script:

    #!/bin/bash
    
    echo "Updating system packages..."
    sudo apt-get update -y && sudo apt-get upgrade -y
    echo "System update completed."
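
    On RHEL-based systems, a comparable script would use dnf instead (a sketch assuming dnf is the system's package manager):

    #!/bin/bash
    
    echo "Updating system packages..."
    sudo dnf upgrade -y
    echo "System update completed."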
    

    4. Service Monitoring and Restart

    Check and restart services automatically if they fail.

    Example Script:

    #!/bin/bash
    
    service_name="apache2"
    
    if ! systemctl is-active --quiet $service_name; then
        echo "$service_name is not running. Restarting..."
        systemctl restart $service_name
        echo "$service_name restarted."
    else
        echo "$service_name is running."
    fi
    

    5. Automated Alerts

    Send email notifications or logs when specific events occur.

    Example Script:

    #!/bin/bash
    
    log_file="/var/log/syslog"
    alert_email="admin@example.com"
    
    if grep -q "error" $log_file; then
        echo "Errors detected in $log_file" | mail -s "System Alert" $alert_email
    fi
    

    Combining Monitoring and Automation

    Advanced scripts can combine monitoring and automation, such as detecting high CPU usage and killing processes.

    Example Script:

    #!/bin/bash
    
    threshold=80
    process=$(ps aux --sort=-%cpu | awk 'NR==2 {print $2, $3}')
    
    cpu_usage=$(echo $process | awk '{print $2}')
    pid=$(echo $process | awk '{print $1}')
    
    if (( $(echo "$cpu_usage > $threshold" | bc -l) )); then
        echo "High CPU usage detected: $cpu_usage%"
        echo "Terminating process with PID $pid..."
        kill -9 $pid
        echo "Process terminated."
    fi
    

    Best Practices for Bash Monitoring and Automation

    1. Modular Scripts: Break tasks into reusable functions.
    2. Error Handling: Use checks (if conditions) to handle errors gracefully (see the sketch after this list).
    3. Logging: Record script outputs and errors to log files for auditing.
    4. Testing: Test scripts in a safe environment before deploying.
    5. Permissions: Ensure scripts have appropriate permissions (chmod) and use sudo responsibly.
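
    As a small illustration of the first three practices, here is a minimal sketch of a reusable function with an error check and logging (the log path and service name are illustrative):

    #!/bin/bash
    
    log_file="/var/log/maintenance.log"
    
    # Reusable logging helper (practice 3)
    log() {
        echo "[$(date)] $1" >> "$log_file"
    }
    
    # Modular task with explicit error handling (practices 1 and 2)
    restart_service() {
        local service=$1
        if systemctl restart "$service"; then
            log "INFO: Restarted $service."
        else
            log "ERROR: Failed to restart $service."
            return 1
        fi
    }
    
    restart_service "apache2"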

    By combining these techniques, administrators can use Bash to monitor Linux systems proactively, automate essential tasks, and maintain operational efficiency.