cli

All posts tagged cli by Linux Bash
  • Posted on

    Understanding and Using xargs for Command-Line Argument Passing

    xargs is a powerful command-line utility in Bash that allows you to build and execute commands using arguments that are passed via standard input (stdin). It is especially useful when you need to handle input that is too large to be processed directly by a command or when you want to optimize the execution of commands with multiple arguments.

    Here's a guide to understanding and using xargs effectively.


    1. Basic Syntax of xargs

    The basic syntax of xargs is:

    command | xargs [options] command_to_execute
    
    • command: The command that generates output (which xargs will process).
    • xargs: The command that reads input from stdin and constructs arguments.
    • command_to_execute: The command that will be executed with the arguments.

    2. Using xargs to Pass Arguments to Commands

    xargs takes the output of a command and converts it into arguments for another command. This is useful when you need to pass many arguments, such as filenames or results from other commands, to another program.

    • Example: Pass a list of files to rm to delete them:

      echo "file1.txt file2.txt file3.txt" | xargs rm

    In this case, xargs takes the list of filenames and passes them as arguments to rm, which then deletes the files.


    3. Handling Long Input with xargs

    The operating system imposes a limit (ARG_MAX) on the total length of the argument list a single command can receive. xargs splits its input into manageable chunks and executes the command multiple times, ensuring that you never exceed this limit.

    • Example: Use xargs with find to delete files in chunks:

      find . -name "*.log" | xargs rm

    Here, find generates a list of .log files, and xargs passes them to rm in batches, so the command runs efficiently even with a very large number of files. If filenames may contain spaces or newlines, use find . -name "*.log" -print0 | xargs -0 rm instead.
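    The batching is easy to observe: feed xargs more input than fits in one argument list and count how many times the command runs (a quick sketch; the exact count depends on your system's limit):

    ```shell
    # ~1.4 MB of numbers: xargs splits them across several echo
    # invocations, so more than one output line is produced.
    seq 1 200000 | xargs echo | wc -l
    ```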


    4. Using -n Option to Limit the Number of Arguments

    The -n option allows you to specify the maximum number of arguments passed to the command at once. This is helpful when a command can only handle a limited number of arguments.

    • Example: Pass a maximum of 3 files to rm at a time:

      echo "file1.txt file2.txt file3.txt file4.txt file5.txt" | xargs -n 3 rm

    This command will execute rm multiple times, deleting 3 files at a time.
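    A harmless way to see the -n batching (using echo instead of rm):

    ```shell
    # Five arguments, at most two per invocation: echo runs three times.
    printf '%s\n' a b c d e | xargs -n 2 echo
    # a b
    # c d
    # e
    ```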


    5. Using -I Option for Custom Placeholder

    The -I option allows you to specify a custom placeholder for the input argument. This gives you more flexibility in how arguments are passed to the command.

    • Example: Rename files by appending a suffix:

      printf '%s\n' file1.txt file2.txt file3.txt | xargs -I {} mv {} {}.bak

    This command renames each file by appending .bak to its name. The {} placeholder is replaced by each input line. Note that -I treats each whole input line as a single argument, which is why the filenames are supplied one per line here.
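    Here is the same rename pattern run safely in a scratch directory (the filenames are just placeholders):

    ```shell
    # Create two files, then rename each one to <name>.bak via -I.
    dir=$(mktemp -d)
    cd "$dir"
    touch file1.txt file2.txt
    printf '%s\n' file1.txt file2.txt | xargs -I {} mv {} {}.bak
    ls    # file1.txt.bak  file2.txt.bak
    ```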


    6. Using -p Option for Confirmation

    The -p option prompts the user for confirmation before executing the command. This can be useful when you want to ensure that the right action is taken before running potentially dangerous commands.

    • Example: Prompt before deleting files:

      echo "file1.txt file2.txt file3.txt" | xargs -p -n 1 rm

    This command asks for confirmation before each rm invocation; with -n 1, that means one prompt per file. Without -n 1, xargs would ask only once for the whole batch.


    7. Using xargs with find for File Operations

    xargs is frequently used in combination with find to perform operations on files. This combination allows you to efficiently process files based on specific criteria.

    • Example: Find and compress .log files:

      find . -name "*.log" | xargs gzip

    This command finds all .log files in the current directory and compresses them using gzip.


    8. Using xargs with echo for Debugging

    You can use echo with xargs to debug or visualize how arguments are being passed.

    • Example: Display arguments passed to xargs:

      echo "file1.txt file2.txt file3.txt" | xargs echo

    This will simply print the filenames passed to xargs without executing any command, allowing you to verify the arguments.


    9. Using xargs with grep to Search Files

    You can use xargs in conjunction with grep to search for patterns in a list of files generated by other commands, such as find.

    • Example: Search for the word "error" in .log files:

      find . -name "*.log" | xargs grep "error"

    This command will search for the word "error" in all .log files found by find.


    10. Using xargs to Execute Commands in Parallel

    With the -P option, xargs can run commands in parallel, which is especially useful for tasks that can be parallelized to speed up execution.

    • Example: Run gzip on files in parallel:

      find . -name "*.log" | xargs -n 10 -P 4 gzip

    This command compresses the .log files with up to 4 gzip processes running at once, each handling a batch of 10 files. Note that -P only parallelizes when the input is split into batches, so it is usually combined with -n (or -L).
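    A quick way to watch the parallel batching (echo stands in for gzip):

    ```shell
    # Eight arguments, two per invocation, up to four invocations in
    # parallel: echo runs four times (output order may vary).
    seq 1 8 | xargs -n 2 -P 4 echo | wc -l
    # 4
    ```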


    11. Combining xargs with Other Commands

    xargs can be used with many other commands to optimize data processing and command execution.

    • Example: Remove all directories with a specific name:

      find . -type d -name "temp" | xargs rm -r

    This will delete all directories named "temp" and their contents.


    Conclusion

    xargs is an essential tool for efficiently handling large numbers of arguments in Bash. Whether you're processing the output of a command, running operations on multiple files, or managing complex command executions, xargs provides a flexible and powerful way to automate and optimize tasks. By using options like -n, -I, and -P, you can fine-tune how arguments are passed and even run commands in parallel for improved performance.

  • Posted on

    How to Use the find Command to Search Files in Linux

    The find command is one of the most powerful and versatile tools in Linux for searching files and directories. It allows you to locate files based on various criteria such as name, size, permissions, time of modification, and more. Here’s a guide on how to use the find command effectively.


    1. Basic Syntax of the find Command

    The basic syntax for the find command is:

    find [path] [expression]
    
    • path: The directory (or directories) where you want to search. You can specify one or more directories, or use . to search in the current directory.
    • expression: The conditions or filters you want to apply (e.g., file name, size, type).

    2. Searching by File Name

    To search for a file by its name, use the -name option. The search is case-sensitive by default.

    Case-Sensitive Search

    find /path/to/search -name "filename.txt"
    

    This command searches for filename.txt in the specified directory and its subdirectories.

    Case-Insensitive Search

    To make the search case-insensitive, use -iname:

    find /path/to/search -iname "filename.txt"
    

    This will match files like Filename.txt, FILENAME.TXT, etc.

    Using Wildcards in Name Search

    You can use wildcards (*, ?, etc.) to match patterns:

    • *: Matches any sequence of characters.
    • ?: Matches a single character.

    For example, to search for all .txt files:

    find /path/to/search -name "*.txt"
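    To try this without touching real data, run it against a scratch directory (the paths here are illustrative):

    ```shell
    # Two .txt files match the pattern; the .log file does not.
    dir=$(mktemp -d)
    touch "$dir/a.txt" "$dir/b.txt" "$dir/c.log"
    find "$dir" -name "*.txt" | wc -l
    # 2
    ```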
    

    3. Searching by File Type

    The find command allows you to filter files based on their type. The -type option can be used to specify the following types:

    • f: Regular file
    • d: Directory
    • l: Symbolic link
    • s: Socket
    • p: Named pipe (FIFO)
    • c: Character device
    • b: Block device

    Search for Regular Files

    find /path/to/search -type f
    

    This command finds all regular files in the specified directory and its subdirectories.

    Search for Directories

    find /path/to/search -type d
    

    This command finds all directories.


    4. Searching by File Size

    You can search for files based on their size using the -size option. Sizes can be specified in various units:

    • b: 512-byte blocks (default)
    • c: Bytes
    • k: Kilobytes
    • M: Megabytes
    • G: Gigabytes

    Find Files of a Specific Size

    • Exact size:

      find /path/to/search -size 100M
      

      This finds files that are exactly 100 MB in size.

    • Greater than a size:

      find /path/to/search -size +100M
      

      This finds files greater than 100 MB.

    • Less than a size:

      find /path/to/search -size -100M
      

      This finds files smaller than 100 MB.


    5. Searching by Modification Time

    The find command allows you to search for files based on when they were last modified. The -mtime option specifies the modification time in days:

    • -mtime +n: Files modified more than n days ago.
    • -mtime -n: Files modified less than n days ago.
    • -mtime n: Files modified exactly n days ago (find rounds file age down to whole 24-hour periods, so -mtime 1 matches files between 24 and 48 hours old).

    Find Files Modified Within the Last 7 Days

    find /path/to/search -mtime -7
    

    Find Files Not Modified in the Last 30 Days

    find /path/to/search -mtime +30
    

    Find Files Modified Exactly 1 Day Ago

    find /path/to/search -mtime 1
    

    6. Searching by Permissions

    You can search for files based on their permissions using the -perm option.

    Find Files with Specific Permissions

    For example, to find files with 777 permissions:

    find /path/to/search -perm 0777
    

    Find Files with at Least Specific Permissions

    To find files that have at least rw-r--r-- permissions, use the - before the permission value:

    find /path/to/search -perm -644
    

    Find Files with Specific Permissions for User, Group, or Others

    You can also use symbolic notation to search for files with specific permissions for the user (u), group (g), or others (o). For example:

    find /path/to/search -perm /u+x
    

    This finds files that have the executable permission for the user.


    7. Searching by File Owner

    The -user option allows you to find files owned by a specific user.

    Find Files Owned by a Specific User

    find /path/to/search -user username
    

    Find Files Owned by a Specific Group

    Similarly, use the -group option to search for files owned by a specific group:

    find /path/to/search -group groupname
    

    8. Executing Commands on Search Results

    You can use the -exec option to perform actions on the files that match your search criteria. The {} placeholder represents the current file.

    Example: Delete All .log Files

    find /path/to/search -name "*.log" -exec rm -f {} \;
    

    This command finds all .log files and deletes them.

    Example: Display the File Details

    find /path/to/search -name "*.txt" -exec ls -l {} \;
    

    This command lists the details (using ls -l) of each .txt file found.

    Note: The \; at the end terminates the -exec action and runs the command once per file. Ending the action with {} + instead passes many files to a single invocation, much like xargs.
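    The difference between the \; terminator and the + terminator (which batches many files into one invocation) is easy to demonstrate in a scratch directory:

    ```shell
    dir=$(mktemp -d)
    touch "$dir/a.txt" "$dir/b.txt" "$dir/c.txt"
    # \; runs echo once per file: three output lines.
    find "$dir" -name "*.txt" -exec echo {} \; | wc -l
    # + passes all files to a single echo: one output line.
    find "$dir" -name "*.txt" -exec echo {} + | wc -l
    ```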


    9. Using find with xargs for Efficiency

    When executing commands on large numbers of files, xargs is often more efficient than -exec ... \;, because it runs the command once per batch of files rather than once per file.

    Example: Delete Files Using xargs

    find /path/to/search -name "*.log" | xargs rm -f
    

    This command finds all .log files and passes the list of files to rm using xargs.
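    One caveat with this pipeline: filenames containing spaces or newlines get split by xargs. The -print0/-0 pair delimits names with NUL bytes instead, as this sketch shows:

    ```shell
    # A filename with a space survives the NUL-delimited pipeline intact.
    dir=$(mktemp -d)
    touch "$dir/my log.log"
    find "$dir" -name "*.log" -print0 | xargs -0 rm -f
    ls "$dir" | wc -l    # 0: the file was removed
    ```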


    10. Combining Multiple Conditions

    You can combine multiple search conditions using logical operators like -and, -or, and -not.

    Example: Find Files Larger Than 10MB and Modified in the Last 7 Days

    find /path/to/search -size +10M -and -mtime -7
    

    Example: Find Files That Are Not Directories

    find /path/to/search -not -type d
    

    11. Limiting Search Depth with -maxdepth

    The -maxdepth option restricts the depth of directories find will search into.

    Example: Search Only in the Top-Level Directory

    find /path/to/search -maxdepth 1 -name "*.txt"
    

    This will find .txt files only in the top-level directory (/path/to/search), not in subdirectories.
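    A scratch-directory sketch makes the depth limit visible:

    ```shell
    # Only the top-level a.txt is found; sub/b.txt is deeper than 1.
    dir=$(mktemp -d)
    mkdir "$dir/sub"
    touch "$dir/a.txt" "$dir/sub/b.txt"
    find "$dir" -maxdepth 1 -name "*.txt"
    ```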


    12. Summary of Useful find Command Options

    Option       Description
    -name        Search by file name
    -iname       Search by file name (case-insensitive)
    -type        Search by file type (f = file, d = directory, l = symlink)
    -size        Search by file size
    -mtime       Search by modification time (in days)
    -user        Search by file owner
    -group       Search by file group
    -perm        Search by file permissions
    -exec        Execute a command on each found file
    -not         Negate a condition
    -and / -or   Combine multiple conditions (default is -and)
    -maxdepth    Limit the maximum depth of directory traversal
    -mindepth    Limit the minimum depth of directory traversal

    Conclusion

    The find command is an indispensable tool for searching and managing files in Linux. With its wide range of options, you can tailor your search to meet almost any criteria. Whether you're looking for files by name, size, type, or modification time, find can help you locate exactly what you need, and its ability to execute commands on the results makes it incredibly powerful for automating tasks.

  • Posted on

    Understanding File Permissions and Ownership in Bash

    File permissions and ownership are fundamental concepts in Linux (and Unix-based systems), allowing users and groups to control access to files and directories. In Bash, file permissions determine who can read, write, or execute a file, while ownership identifies the user and group associated with that file. Understanding and managing file permissions and ownership is essential for maintaining security and managing system resources.


    1. File Permissions Overview

    Every file and directory on a Linux system has three types of permissions:

    • Read (r): Allows the user to open and read the contents of the file.
    • Write (w): Allows the user to modify or delete the file.
    • Execute (x): Allows the user to run the file as a program or script (for directories, it allows entering the directory).

    These permissions are set for three categories of users:

    • Owner: The user who owns the file.
    • Group: Users who belong to the same group as the file.
    • Others: All other users who don’t fall into the above categories.

    Example:

    A typical file permission looks like this:

    -rwxr-xr--
    

    Where:

    • The first character - indicates it's a file (a d would indicate a directory).
    • The next three characters rwx represent the owner's permissions (read, write, and execute).
    • The next three characters r-x represent the group's permissions (read and execute).
    • The final three characters r-- represent the permissions for others (read only).
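    You can reproduce exactly this permission string on a throwaway file (754 is the numeric form of rwxr-xr--, covered below):

    ```shell
    # chmod 754: rwx for the owner, r-x for the group, r-- for others.
    f=$(mktemp)
    chmod 754 "$f"
    ls -l "$f" | cut -c1-10
    # -rwxr-xr--
    ```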


    2. Viewing File Permissions

    To view the permissions of a file or directory, use the ls -l command:

    ls -l filename
    

    Example output:

    -rwxr-xr-- 1 user group 12345 Dec 20 10:30 filename
    

    Explanation:

    • -rwxr-xr--: File permissions.

    • 1: Number of hard links.

    • user: Owner of the file.

    • group: Group associated with the file.

    • 12345: Size of the file in bytes.

    • Dec 20 10:30: Last modified date and time.

    • filename: The name of the file.


    3. Changing File Permissions with chmod

    To change file permissions, you use the chmod (change mode) command.

    Syntax:

    chmod [permissions] [file/directory]
    

    Permissions can be set using symbolic mode or numeric mode.

    Symbolic Mode:

    You can modify permissions by using symbolic representation (r, w, x).

    • Add permission: +
    • Remove permission: -
    • Set exact permission: =

    Examples:

    • Add execute permission for the owner:

      chmod u+x filename
      
    • Remove write permission for the group:

      chmod g-w filename
      
    • Set read and write permissions for everyone:

      chmod a=rw filename
      

    Numeric Mode:

    Permissions are also represented by numbers:

    • r = 4
    • w = 2
    • x = 1

    The numeric mode combines these numbers to represent permissions for the owner, group, and others.

    Examples:

    • Set permissions to rwxr-xr-x (owner: rwx, group: r-x, others: r-x):

      chmod 755 filename

    • Set permissions to rw-r----- (owner: rw-, group: r--, others: ---):

      chmod 640 filename

    The first digit represents the owner’s permissions, the second digit represents the group’s permissions, and the third digit represents others’ permissions.
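    For example, 640 breaks down as 6 = 4+2 (rw-) for the owner, 4 (r--) for the group, and 0 (---) for others, which you can verify on a throwaway file:

    ```shell
    # chmod 640 yields the mode string -rw-r-----.
    f=$(mktemp)
    chmod 640 "$f"
    ls -l "$f" | cut -c1-10
    # -rw-r-----
    ```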


    4. Changing File Ownership with chown

    To change the ownership of a file or directory, use the chown command.

    Syntax:

    chown [owner][:group] [file/directory]
    
    • Change owner:

      chown newuser filename

    • Change owner and group:

      chown newuser:newgroup filename

    • Change only group:

      chown :newgroup filename

    Example:

    • Change the owner to alice and the group to developers:

      chown alice:developers filename

    5. Changing Group Ownership with chgrp

    If you only want to change the group ownership of a file or directory, you can use the chgrp command.

    Syntax:

    chgrp groupname filename
    

    Example:

    • Change the group ownership to admin:

      chgrp admin filename

    6. Special Permissions

    There are also special permissions that provide more control over file execution and access:

    • Setuid (s): When set on an executable file, the file is executed with the privileges of the file’s owner, rather than the user executing it.

      • Example: chmod u+s file
    • Setgid (s): When set on a directory, files created within that directory inherit the group of the directory, rather than the user’s current group.

      • Example: chmod g+s directory
    • Sticky Bit (t): When set on a directory, only the owner of a file can delete or rename the file, even if others have write permissions.

      • Example: chmod +t directory
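    The special bits show up in the mode string; for instance, the sticky bit appears as a trailing t on a directory (1755 = sticky bit plus rwxr-xr-x):

    ```shell
    # Set the sticky bit alongside 755 permissions and inspect the mode.
    d=$(mktemp -d)
    chmod 1755 "$d"
    ls -ld "$d" | cut -c1-10
    # drwxr-xr-t
    ```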

    7. Example of Viewing, Changing, and Managing Permissions

    • View permissions:

      ls -l myfile.txt

    • Change permissions (allow read and execute for everyone):

      chmod a+rx myfile.txt

    • Change ownership (set john as owner and staff as group):

      chown john:staff myfile.txt

    Conclusion

    Understanding file permissions and ownership is crucial for managing security and accessibility in Linux. By using commands like chmod, chown, and ls -l, you can control who can access, modify, and execute files, ensuring proper security and efficient system management. Always be cautious when changing permissions, especially with system files or directories, to avoid inadvertently compromising the security of your system.

  • Posted on

    Top 10 Bash Commands Every New Linux User Should Learn

    If you're new to Linux and Bash, learning some essential commands is the best way to start. These commands will help you navigate the system, manage files, and perform basic tasks. Here’s a list of the top 10 commands every new Linux user should master:


    1. ls – List Files and Directories

    The ls command displays the contents of a directory.

    • Basic usage:

      ls
    • Common options:
      • ls -l: Long listing format (shows details like permissions and file sizes).
      • ls -a: Includes hidden files.
      • ls -lh: Displays file sizes in human-readable format.

    2. cd – Change Directory

    Navigate through the file system with the cd command.

    • Basic usage:

      cd /path/to/directory
    • Tips:
      • cd ..: Move up one directory.
      • cd ~: Go to your home directory.
      • cd -: Switch to the previous directory.

    3. pwd – Print Working Directory

    The pwd command shows the current directory you're working in.

    • Usage:

      pwd

    4. touch – Create a New File

    The touch command creates empty files.

    • Basic usage:

      touch filename.txt

    5. cp – Copy Files and Directories

    Use cp to copy files or directories.

    • Basic usage:

      cp source_file destination_file

    • Copy directories:

      cp -r source_directory destination_directory

    6. mv – Move or Rename Files

    The mv command moves or renames files and directories.

    • Move a file:

      mv file.txt /new/location/

    • Rename a file:

      mv old_name.txt new_name.txt

    7. rm – Remove Files and Directories

    The rm command deletes files and directories.

    • Basic usage:

      rm file.txt

    • Delete directories:

      rm -r directory_name
    • Important: Be cautious with rm as it permanently deletes files.

    8. mkdir – Create Directories

    The mkdir command creates new directories.

    • Basic usage:

      mkdir new_directory

    • Create parent directories:

      mkdir -p parent/child/grandchild

    9. cat – View File Content

    The cat command displays the content of a file.

    • Basic usage:

      cat file.txt

    • Combine files:

      cat file1.txt file2.txt > combined.txt
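    A quick scratch-directory check of the combine form:

    ```shell
    # Concatenate two one-line files into a third.
    dir=$(mktemp -d)
    printf 'one\n' > "$dir/file1.txt"
    printf 'two\n' > "$dir/file2.txt"
    cat "$dir/file1.txt" "$dir/file2.txt" > "$dir/combined.txt"
    cat "$dir/combined.txt"
    # one
    # two
    ```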

    10. man – View Command Documentation

    The man command shows the manual page for a given command.

    • Usage:

      man command_name

    • Example:

      man ls

    Bonus Commands

    • echo: Prints text to the terminal or a file.

      echo "Hello, World!"

    • grep: Searches for patterns in text files.

      grep "search_term" file.txt

    • sudo: Runs commands with superuser privileges.

      sudo apt update

    Tips for Learning Bash Commands

    1. Practice regularly: The more you use these commands, the easier they will become.
    2. Explore options: Many commands have useful flags; use man to learn about them.
    3. Be cautious with destructive commands: Commands like rm and sudo can have significant consequences.

    With these commands, you'll be well on your way to mastering Linux and Bash!

  • Posted on

    Introduction to Bash Scripting: A Beginner's Guide

    Bash scripting is a way to automate tasks in Linux using Bash (Bourne Again Shell), a widely used shell on Unix-like systems. This guide introduces the basics of Bash scripting, enabling you to create and execute your first scripts.


    1. What is a Bash Script?

    A Bash script is a plain text file containing a series of commands executed sequentially by the Bash shell. It’s useful for automating repetitive tasks, system administration, and process management.


    2. Basics of a Bash Script

    File Format:

    1. A Bash script is a text file.
    2. It typically has a .sh extension (e.g., myscript.sh), though this isn’t mandatory.

    The Shebang Line:

    The first line of a Bash script starts with a shebang (#!), which tells the system which interpreter to use.

    Example:

    #!/bin/bash
    

    3. Creating Your First Bash Script

    Step-by-Step Process:

    1. Create a new file:

      nano myscript.sh

    2. Write your script:

      #!/bin/bash
      echo "Hello, World!"
    3. Save and exit:
      In nano, press CTRL+O to save, then CTRL+X to exit.

    4. Make the script executable:

      chmod +x myscript.sh
      
    5. Run the script:

      ./myscript.sh
      

      Output:

      Hello, World!
      

    4. Key Concepts in Bash Scripting

    Variables:

    Store and manipulate data.

    #!/bin/bash
    name="Alice"
    echo "Hello, $name!"
    

    User Input:

    Prompt users for input.

    #!/bin/bash
    echo "Enter your name:"
    read name
    echo "Hello, $name!"
    

    Control Structures:

    Add logic to your scripts.

    • Conditionals:

      #!/bin/bash
      if [ "$1" -gt 10 ]; then
        echo "Number is greater than 10."
      else
        echo "Number is 10 or less."
      fi
      
      
    • Loops:

      #!/bin/bash
      for i in {1..5}; do
        echo "Iteration $i"
      done
      

    Functions:

    Reusable blocks of code.

    #!/bin/bash
    greet() {
        echo "Hello, $1!"
    }
    greet "Alice"
    

    5. Common Bash Commands Used in Scripts

    • echo: Print messages.
    • read: Read user input.
    • ls, pwd, cd: File and directory management.
    • cat, grep, awk, sed: Text processing.
    • date, whoami, df: System information.

    6. Debugging Bash Scripts

    • Use set for debugging options:
      • set -x: Prints commands as they are executed.
      • set -e: Stops execution on errors.

    Example:

    #!/bin/bash
    set -x
    echo "Debugging mode enabled"
    
    • Debug manually by printing variables:

      echo "Variable value is: $var"

    7. Best Practices for Writing Bash Scripts

    1. Use Comments: Explain your code with #.

      # This script prints a greeting
      echo "Hello, World!"

    2. Check User Input: Validate input to prevent errors.

      if [ -z "$1" ]; then
        echo "Please provide an argument."
        exit 1
      fi
    3. Use Meaningful Variable Names:
      Instead of x=5, use counter=5.

    4. Follow File Permissions: Make scripts executable (chmod +x).

    5. Test Thoroughly: Test scripts in a safe environment before using them on critical systems.


    8. Example: A Simple Backup Script

    This script creates a compressed backup of a specified directory.

    #!/bin/bash
    
    # Prompt for the directory to back up
    echo "Enter the directory to back up:"
    read dir
    
    # Set backup file name
    backup_file="backup_$(date +%Y%m%d_%H%M%S).tar.gz"
    
    # Create the backup
    tar -czvf "$backup_file" "$dir"
    
    echo "Backup created: $backup_file"
    
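    Following best practice #2 above, the same idea can validate its input before calling tar. A minimal sketch (demo_dir is a sample directory created here purely for illustration):

```shell
# Validate the directory before archiving; quote variables to survive spaces
dir="demo_dir"
mkdir -p "$dir" && echo "sample" > "$dir/file.txt"   # set up sample data

if [ ! -d "$dir" ]; then
  echo "Error: '$dir' is not a directory." >&2
  exit 1
fi

backup_file="backup_$(date +%Y%m%d_%H%M%S).tar.gz"
tar -czf "$backup_file" "$dir"
echo "Backup created: $backup_file"
```

    The check fails fast with a message on stderr instead of letting tar error out on a missing directory.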

    Conclusion

    Bash scripting is a powerful tool for automating tasks and enhancing productivity. By mastering the basics, you can create scripts to handle repetitive operations, simplify system management, and execute complex workflows with ease. With practice, you’ll soon be able to write advanced scripts tailored to your specific needs.

  • Posted on

    Here’s a list of basic Bash commands for beginners, organized by common use cases, with explanations and examples:


    1. Navigation and Directory Management

    • pwd (Print Working Directory):
      Displays the current directory.

      pwd
      
    • ls (List):
      Lists files and directories in the current location.

      ls
      ls -l    # Detailed view
      ls -a    # Includes hidden files
      
    • cd (Change Directory):
      Changes the current directory.

      cd /path/to/directory
      cd ~    # Go to the home directory
      cd ..   # Go up one directory
      

    2. File and Directory Operations

    • touch:
      Creates an empty file.

      touch filename.txt
      
    • mkdir (Make Directory):
      Creates a new directory.

      mkdir my_folder
      mkdir -p folder/subfolder   # Creates nested directories
      
    • cp (Copy):
      Copies files or directories.

      cp file.txt copy.txt
      cp -r folder/ copy_folder/   # Copy directories recursively
      
    • mv (Move/Rename):
      Moves or renames files or directories.

      mv oldname.txt newname.txt
      mv file.txt /path/to/destination
      
    • rm (Remove):
      Deletes files or directories.

      rm file.txt
      rm -r folder/    # Deletes directories recursively
      

    3. Viewing and Editing Files

    • cat:
      Displays the contents of a file.

      cat file.txt
      
    • less:
      Views a file one page at a time.

      less file.txt
      
    • nano:
      Opens a file in a simple text editor.

      nano file.txt
      
    • head and tail:
      Displays the beginning or end of a file.

      head file.txt
      tail file.txt
      tail -n 20 file.txt    # Show last 20 lines
      

    4. Searching and Filtering

    • grep:
      Searches for patterns within files.

      grep "text" file.txt
      grep -i "text" file.txt   # Case-insensitive search
      
    • find:
      Locates files and directories.

      find /path -name "filename.txt"
      

    5. Managing Processes

    • ps:
      Displays running processes.

      ps
      ps aux   # Detailed view of all processes
      
    • kill:
      Terminates a process by its ID (PID).

      kill 12345   # Replace 12345 with the PID
      
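    To find a PID in the first place, pgrep (part of the procps tools shipped with most Linux distributions) looks processes up by name. A quick sketch using a throwaway background process:

```shell
# Start a throwaway background process, find its PID by command line, kill it
sleep 30 &
pid=$!
pgrep -f "sleep 30"    # prints the PID(s) of matching processes
kill "$pid"            # terminate it
```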
    • top or htop:
      Monitors real-time system processes.

      top
      

    6. Permissions and Ownership

    • chmod:
      Changes file permissions.

      chmod 644 file.txt    # Sets read/write for owner, read-only for others
      chmod +x script.sh    # Makes a script executable
      
    • chown:
      Changes file ownership.

      chown user:group file.txt
      

    7. System Information

    • whoami:
      Displays the current username.

      whoami
      
    • uname:
      Provides system information.

      uname -a   # Detailed system info
      
    • df and du:
      Checks disk usage.

      df -h      # Shows free disk space
      du -sh folder/   # Displays the size of a directory or file
      

    8. Networking

    • ping:
      Tests network connectivity.

      ping google.com
      
    • curl or wget:
      Fetches data from URLs.

      curl https://example.com
      wget https://example.com/file.txt
      

    9. Archiving and Compression

    • tar:
      Archives and extracts files.

      tar -cvf archive.tar folder/     # Create archive
      tar -xvf archive.tar             # Extract archive
      tar -czvf archive.tar.gz folder/ # Compress with gzip
      
    • zip and unzip:
      Compresses or extracts files in ZIP format.

      zip archive.zip file.txt
      unzip archive.zip
      

    10. Helpful Commands

    • man (Manual):
      Displays the manual for a command.

      man ls
      
    • history:
      Shows the history of commands entered.

      history
      
    • clear:
      Clears the terminal screen.

      clear
      

    These commands provide a solid foundation for working in Bash. As you grow more comfortable, you can explore advanced topics like scripting and automation!

  • Posted on

    Explanatory Synopsis and Overview of "The Linux Command Line"

    "The Linux Command Line" by William E. Shotts Jr. is a practical and thorough guide to the Linux command-line interface (CLI). Below is an overview of its content, restructured and summarized in my interpretation for clarity and focus:


    Part 1: Introduction to the Command Line

    This part introduces the Linux shell, emphasizing the importance of the CLI in managing Linux systems.

    • What is the Shell? Explains the shell as a command interpreter and introduces Bash as the default Linux shell.

    • Basic Navigation: Covers essential commands for exploring the file system (ls, pwd, cd) and understanding the hierarchical structure.

    • File Management: Explains creating, moving, copying, and deleting files and directories (cp, mv, rm, mkdir).

    • Viewing and Editing Files: Introduces basic tools like cat, less, nano, and echo.


    Part 2: Configuration and Customization

    Focuses on tailoring the Linux environment to enhance user productivity.

    • Environment Variables: Discusses what environment variables are, how to view them (env), and how to set them.

    • Customizing the Shell: Explains configuration files like .bashrc and .profile, as well as creating aliases and shell functions.

    • Permissions and Ownership: Introduces Linux file permissions (chmod, chown), symbolic representations, and user roles.


    Part 3: Mastering Text Processing

    This section explores tools and techniques for handling text, a critical skill for any Linux user.

    • Working with Pipes and Redirection: Explains how to chain commands and redirect input/output using |, >, and <.

    • Text Search and Filtering: Covers tools like grep and sort for searching, filtering, and organizing text.

    • Advanced Text Manipulation: Introduces powerful tools such as sed (stream editor) and awk (pattern scanning and processing).


    Part 4: Shell Scripting and Automation

    Delves into creating scripts to automate repetitive tasks.

    • Introduction to Shell Scripting: Explains script structure, how to execute scripts, and the shebang (#!).

    • Control Structures: Covers conditionals (if, case) and loops (for, while, until).

    • Functions and Debugging: Teaches how to write reusable functions and debug scripts using tools like set -x and bash -x.

    • Practical Examples: Provides real-world examples of automation, such as backups and system monitoring.


    Additional Features

    • Command Reference: Includes a concise reference for common commands and their options.

    • Appendices: Offers supplementary material, such as tips for selecting a text editor and an introduction to version control with Git.


    What Makes This Version Unique?

    This synopsis groups the content into themes to give readers a logical flow of progression:

    1. Basics First: Starting with navigation and file management.
    2. Customization: Encouraging users to make the CLI their own.
    3. Text Processing Mastery: A vital skill for working with Linux data streams.
    4. Scripting and Automation: The crown jewel of command-line expertise.

    This structure mirrors the book's balance between learning and applying concepts, making it a practical and user-friendly resource for anyone eager to excel in Linux.

  • Posted on

    Introduction to Bash: What You Need to Know to Get Started

    Bash, short for Bourne Again Shell, is a command-line interpreter widely used in Linux and Unix systems. It's both a powerful scripting language and a shell that lets you interact with your operating system through commands. Whether you're an IT professional, a developer, or simply someone curious about Linux, understanding Bash is a critical first step.


    What is Bash?

    Bash is the default shell for most Linux distributions. It interprets commands you type or scripts you write, executing them to perform tasks ranging from file management to system administration.


    Why Learn Bash?

    1. Control and Efficiency: Automate repetitive tasks and streamline workflows.
    2. Powerful Scripting: Write scripts to manage complex tasks.
    3. System Mastery: Understand and manage Linux or Unix systems effectively.
    4. Industry Standard: Bash is widely used in DevOps, cloud computing, and software development.

    Basic Concepts to Get Started

    1. The Command Line

    • Bash operates through a terminal where you input commands.
    • Common terminal emulators include gnome-terminal (Linux), Terminal.app (macOS), and cmd or WSL (Windows).

    2. Basic Commands

    • pwd: Print working directory (shows your current location in the file system).
    • ls: List files and directories.
    • cd: Change directories.
    • touch: Create a new file.
    • mkdir: Create a new directory.

    3. File Manipulation

    • cp: Copy files.
    • mv: Move or rename files.
    • rm: Remove files.
    • cat: Display file content.

    4. Using Flags

    • Many commands accept flags to modify their behavior. For example:

      ls -l

      This displays detailed information about files.

    Getting Started with Bash Scripts

    Bash scripts are text files containing a sequence of commands.

    1. Create a Script File
      Use a text editor to create a file, e.g., script.sh.

    2. Add a Shebang
      Start the script with a shebang (#!/bin/bash) to specify the interpreter.

      #!/bin/bash
      echo "Hello, World!"
      
    3. Make the Script Executable
      Use chmod to give execution permissions:

      chmod +x script.sh
      
    4. Run the Script
      Execute the script:

      ./script.sh
      

    Key Tips for Beginners

    • Use Tab Completion: Start typing a command or file name and press Tab to auto-complete.
    • Use Man Pages: Learn about a command with man <command>, e.g., man ls.
    • Practice Regularly: Familiarity comes with practice.

    Resources to Learn Bash

    • Online Tutorials: Websites like Linux Academy, Codecademy, or free YouTube channels.
    • Books: "The Linux Command Line" by William Shotts.
    • Interactive Platforms: Try Bash commands on a virtual machine or cloud platforms like AWS CloudShell.

    Getting started with Bash unlocks a world of productivity and power in managing systems and automating tasks. Dive in, and happy scripting!

  • Posted on

    Here are three common ways to determine which process is listening on a particular port in Linux:


    1. Using lsof (List Open Files)

    • Command:

      sudo lsof -i :<port_number>

    • Example:

      sudo lsof -i :8080
    • Output:
      • The command shows the process name, PID, and other details of the process using the specified port.

    2. Using netstat (Network Statistics)

    • Command:

      sudo netstat -tulnp | grep :<port_number>

    • Example:

      sudo netstat -tulnp | grep :8080
    • Output:
      • Displays the protocol (TCP/UDP), local address, foreign address, and the process (the -p flag adds the PID/program name).

    Note: If netstat is not installed, you can install it via:

      sudo apt install net-tools


    3. Using ss (Socket Statistics)

    • Command:

      sudo ss -tulnp | grep :<port_number>

    • Example:

      sudo ss -tulnp | grep :8080
    • Output:
      • Displays similar information to netstat but is faster and more modern.

    Bonus: Using /proc Filesystem

    • Command:

      sudo grep <hex_port_number> /proc/net/tcp

    • Example:

      sudo grep :1F90 /proc/net/tcp

      Replace :1F90 with the hexadecimal representation of the port (e.g., 8080 in hex is 1F90).
    • This is a more manual approach and requires converting the port to hexadecimal.
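
    The decimal-to-hexadecimal conversion can be done with printf, a quick sketch:

```shell
# Convert a decimal port number to the uppercase hex form used in /proc/net/tcp
port=8080
printf '%04X\n' "$port"    # prints 1F90
```
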
  • Posted on

    In Bash, managing timers typically involves the use of two primary tools: Bash scripts with built-in timing features (like sleep or date) and Cron jobs (via crontab) for scheduled task execution. Both tools are useful depending on the level of complexity and frequency of the tasks you're managing.

    1. Using Timers in Bash (CLI)

    In Bash scripts, you can manage timers and delays by using the sleep command or date for time-based logic.

    a. Using sleep

    The sleep command pauses the execution of the script for a specified amount of time. It can be used for simple timing operations within scripts.

    Example:

    #!/bin/bash
    
    # Wait for 5 seconds
    echo "Starting..."
    sleep 5
    echo "Done after 5 seconds."
    

    You can also specify time in minutes, hours, or days:

    sleep 10m  # Sleep for 10 minutes
    sleep 2h   # Sleep for 2 hours
    sleep 1d   # Sleep for 1 day
    

    b. Using date for Timing

    You can also use date to track elapsed time or to schedule events based on current time.

    Example (Calculating elapsed time):

    #!/bin/bash
    
    start_time=$(date +%s)  # Get the current timestamp
    echo "Starting task..."
    sleep 3  # Simulate a task
    end_time=$(date +%s)  # Get the new timestamp
    
    elapsed_time=$((end_time - start_time))  # Calculate elapsed time in seconds
    echo "Elapsed time: $elapsed_time seconds."
    

    2. Using crontab for Scheduling Tasks

    cron is a Unix/Linux service used to schedule jobs to run at specific intervals. The crontab file contains a list of jobs that are executed at scheduled times.

    a. Crontab Syntax

    A crontab entry follows this format:

    * * * * * /path/to/command
    │ │ │ │ │
    │ │ │ │ └── Day of the week (0-7) (Sunday=0 or 7)
    │ │ │ └── Month (1-12)
    │ │ └── Day of the month (1-31)
    │ └── Hour (0-23)
    └── Minute (0-59)
    
    • * means "every," so a * in a field means that job should run every minute, hour, day, etc., depending on its position.
    • You can use specific values or ranges for each field (e.g., 1-5 for Monday to Friday).

    b. Setting Up Crontab

    To view or edit the crontab, use the following command:

    crontab -e
    

    Example of crontab entries:

    • Run a script every 5 minutes:

      */5 * * * * /path/to/script.sh

    • Run a script at 2:30 AM every day:

      30 2 * * * /path/to/script.sh

    • Run a script every Sunday at midnight:

      0 0 * * 0 /path/to/script.sh

    c. Managing Crontab Entries

    • List current crontab jobs:

      crontab -l

    • Remove the entire crontab (all entries):

      crontab -r

    d. Logging Cron Jobs

    By default, cron jobs do not provide output unless you redirect it. To capture output or errors, you can redirect both stdout and stderr to a file:

    * * * * * /path/to/script.sh >> /path/to/logfile.log 2>&1
    

    This saves both standard output and error messages to logfile.log.

    3. Combining Bash Timers and Cron Jobs

    Sometimes you might need to use both cron and timing mechanisms within a Bash script. For example, if a task needs to be scheduled but also requires some dynamic timing based on elapsed time or conditions, you could use cron to trigger the script, and then use sleep or date inside the script to control the flow.

    Example:

    #!/bin/bash
    
    # This script is triggered every day at midnight by cron
    
    # Wait 10 minutes before executing the task
    sleep 600  # 600 seconds = 10 minutes
    
    # Execute the task after the delay
    echo "Executing task..."
    # Your task here
    

    4. Advanced Scheduling with Cron

    If you need more complex scheduling, you can take advantage of specific cron features:

    • Use ranges or lists in the time fields:

      0 0,12 * * * /path/to/script.sh   # Run at midnight and noon every day

    • Run a task every 5 minutes during certain hours:

      */5 9-17 * * * /path/to/script.sh # Every 5 minutes between 9 AM and 5 PM

    5. Practical Examples

    • Backup Every Night:

      0 2 * * * /home/user/backup.sh
      

      This runs the backup.sh script every day at 2 AM.

    • Check Server Health Every Hour:

      0 * * * * /home/user/check_server_health.sh
      

      This runs a script to check the server's health every hour.

    Conclusion

    Managing timers in Bash using cron and sleep allows you to automate tasks, control timing, and create sophisticated scheduling systems. sleep is suitable for in-script delays, while cron is ideal for recurring scheduled jobs. Combining these tools lets you create flexible solutions for a wide range of automation tasks.

  • Posted on

    OS Package Managers: Keeping Your System Up-to-Date

    Package managers are essential tools in modern operating systems (OS) that help automate the process of installing, updating, and removing software packages. These tools manage the software installed on a system, making it easier for users and administrators to keep their systems up-to-date with the latest versions of software. They provide a streamlined and efficient way to manage dependencies, handle software updates, and ensure system stability by preventing compatibility issues.

    Importance of Package Managers

    Package managers are crucial for maintaining system health and security, and they provide several benefits:

    1. Automatic Updates: Package managers track software versions and allow you to update all installed software at once with a single command. This ensures that you always have the latest security patches, performance improvements, and new features without needing to manually search for and download updates.

    2. Dependency Management: Many software packages depend on other libraries and programs to function. Package managers ensure that these dependencies are correctly installed and maintained, which reduces the likelihood of conflicts between different versions of libraries or missing dependencies.

    3. Security: Security is a major reason to use a package manager. Package managers allow you to easily update software to close vulnerabilities that could be exploited by attackers. Often, package repositories are curated and include only trusted, verified packages.

    4. Reproducibility: Package managers allow administrators to set up systems with the exact same configuration across multiple machines. This is especially important in server environments, where you want all systems to have the same set of software, libraries, and dependencies.

    5. Software Removal: Package managers make it easy to remove unwanted software. This ensures that unnecessary files, dependencies, and configurations are cleaned up, saving disk space and reducing the attack surface.

    6. Centralized Repository: Most package managers use centralized repositories where software is pre-compiled and tested, so users don’t need to manually compile code or find external download sources, minimizing risks from malicious software.


    Types of Package Managers

    There are different types of package managers depending on the operating system. Below, we will explore examples from different OS environments to see how package managers work.

    1. Linux Package Managers

    Linux distributions (distros) typically use package managers that vary based on the type of distribution. The most common Linux package managers are:

    • APT (Advanced Package Tool): Used in Debian-based systems such as Ubuntu.
    • YUM/DNF (Yellowdog Updater, Modified / Dandified YUM): Used in Red Hat-based systems such as CentOS, Fedora, and RHEL.
    • Zypper: Used in openSUSE and SUSE Linux Enterprise Server.
    • Pacman: Used in Arch Linux and Manjaro.

    Examples of Commands to Install and Update Software on Linux:

    1. APT (Ubuntu/Debian)

    • Install a package:

    sudo apt install <package-name>
    

    Example:

    sudo apt install vim
    
    • Update the system:
    sudo apt update
    sudo apt upgrade
    

    This updates the package list and upgrades all installed software to the latest available version in the repositories.

    • Upgrade a specific package:
    sudo apt install --only-upgrade <package-name>
    

    Example:

    sudo apt install --only-upgrade vim
    
    • Remove a package:
    sudo apt remove <package-name>
    

    Example:

    sudo apt remove vim
    
    2. YUM/DNF (CentOS/Fedora/RHEL)

    • Install a package:

    sudo yum install <package-name>   # YUM for older versions
    sudo dnf install <package-name>   # DNF for newer Fedora/CentOS/RHEL
    

    Example:

    sudo dnf install vim
    
    • Update the system:
    sudo dnf update
    

    This command updates the entire system, installing the latest versions of all packages.

    • Upgrade a specific package:
    sudo dnf upgrade <package-name>
    
    • Remove a package:
    sudo dnf remove <package-name>
    

    Example:

    sudo dnf remove vim
    
    3. Zypper (openSUSE)

    • Install a package:

    sudo zypper install <package-name>
    

    Example:

    sudo zypper install vim
    
    • Update the system:
    sudo zypper update
    
    • Remove a package:
    sudo zypper remove <package-name>
    

    Example:

    sudo zypper remove vim
    
    4. Pacman (Arch Linux)

    • Install a package:

    sudo pacman -S <package-name>
    

    Example:

    sudo pacman -S vim
    
    • Update the system:
    sudo pacman -Syu
    
    • Remove a package:
    sudo pacman -R <package-name>
    

    Example:

    sudo pacman -R vim
    

    2. macOS Package Manager

    On macOS, Homebrew is the most popular package manager, although there are alternatives such as MacPorts.

    • Homebrew:
      Homebrew allows macOS users to install software and libraries not included in the macOS App Store. It works by downloading and compiling the software from source or installing pre-built binaries.

    Examples of Commands to Install and Update Software on macOS:

    • Install a package:
    brew install <package-name>
    

    Example:

    brew install vim
    
    • Update the system:
    brew update
    brew upgrade
    
    • Upgrade a specific package:
    brew upgrade <package-name>
    
    • Remove a package:
    brew uninstall <package-name>
    

    Example:

    brew uninstall vim
    

    3. Windows Package Managers

    Windows traditionally didn't include package managers like Linux or macOS, but with the advent of Windows Package Manager (winget) and Chocolatey, this has changed.

    • winget (Windows Package Manager):
      Windows 10 and newer include winget, a command-line package manager for installing software.

    Examples of Commands to Install and Update Software on Windows:

    • Install a package:
    winget install <package-name>
    

    Example:

    winget install vim
    
    • Update a package:
    winget upgrade <package-name>
    
    • Update all installed software:
    winget upgrade --all
    
    • Remove a package:
    winget uninstall <package-name>
    

    Example:

    winget uninstall vim
    
    • Chocolatey:
      Chocolatey is another popular package manager for Windows, with a large repository of software.

    Install a package:

    choco install <package-name>
    

    Example:

    choco install vim
    

    Update a package:

    choco upgrade <package-name>
    

    Remove a package:

    choco uninstall <package-name>
    

    Conclusion

    Package managers provide a streamlined, automated way to manage software installation, updates, and removal. Whether you're on a Linux, macOS, or Windows system, a package manager ensures that your software is up-to-date, secure, and properly configured. By using package managers, you can easily manage dependencies, get the latest versions of software with minimal effort, and maintain system stability.

    Having the ability to run a single command to install or update software, like sudo apt update on Linux or brew upgrade on macOS, saves time and reduces the risks associated with manually downloading and managing software. Package managers have become a fundamental tool for system administrators, developers, and power users, making system maintenance and software management easier, faster, and more reliable.

  • Posted on

    Let's explore each of the programming languages and their interpreters in detail. We'll look into the context in which they're used, who typically uses them, what they are used for, and the power they offer. Additionally, I'll suggest starting points for testing each language, and provide an explanation of their benefits and popularity.

    1. Bash

    Context & Usage:

    • Who uses it: System administrators, DevOps engineers, and developers working in Linux or Unix-like environments.

    • What for: Bash is the default shell for Unix-like systems. It’s used for writing scripts to automate tasks, managing system processes, manipulating files, and running system commands.

    • Where it’s used: System administration, automation, DevOps, server management, and batch processing tasks.

    Benefits:

    • Ubiquity: Bash is available by default on almost all Unix-based systems (Linux, macOS), making it indispensable for server-side administration.

    • Powerful scripting capabilities: It allows for process control, file manipulation, regular expressions, and piping commands.

    • Simple yet powerful syntax: Despite being lightweight, it’s capable of handling complex system-level tasks.

    Hello World Example:

    echo "Hello World"
    

    In-Depth Starting Point:

    • Test file manipulation, process management, and simple system automation by scripting tasks like listing files, scheduling jobs, or managing processes.
    # List all files in the current directory
    ls -l
    # Create and manipulate a simple file
    echo "Hello, World" > hello.txt
    cat hello.txt
    

    2. Python

    Context & Usage:

    • Who uses it: Web developers, data scientists, engineers, and researchers.

    • What for: Python is a general-purpose language used for web development, data analysis, machine learning, and automation.

    • Where it’s used: Web development (Django, Flask), data science (Pandas, NumPy), machine learning (TensorFlow, scikit-learn), scripting, and automation.

    Benefits:

    • Readability & simplicity: Python's syntax is clear and easy to understand, making it a great choice for beginners and experienced developers alike.

    • Extensive ecosystem: Python boasts a vast ecosystem of libraries for everything from web frameworks to scientific computing.

    • Community support: Python’s large community ensures a wealth of resources, tutorials, and libraries.

    Hello World Example:

    python3 -c 'print("Hello World")'
    

    In-Depth Starting Point:

    • Python’s interactive shell and script-based execution allow testing of libraries like math, numpy, or pandas right at the Bash prompt.
    # Using Python's interactive shell to calculate something
    python3 -c 'import math; print(math.sqrt(16))'
    

    3. Perl

    Context & Usage:

    • Who uses it: Web developers, network administrators, and those involved in text processing, bioinformatics, and automation.

    • What for: Perl is primarily used for text processing, systems administration, and web development (CGI scripts).

    • Where it’s used: Log file parsing, web backends, network scripts, and text-based data manipulation.

    Benefits:

    • Text Processing: Perl excels at regular expressions and text manipulation, making it the go-to tool for tasks like log parsing and configuration file handling.

    • CPAN: Perl has a massive collection of reusable code and modules via the Comprehensive Perl Archive Network (CPAN).

    Hello World Example:

    perl -e 'print "Hello World\n";'
    

    In-Depth Starting Point:

    • Test regular expressions or string manipulations by working with log files.
    # Extracting IP addresses from a log file using Perl
    perl -ne 'print if /(\d+\.\d+\.\d+\.\d+)/' access.log
    

    4. Ruby

    Context & Usage:

    • Who uses it: Web developers, particularly those using Ruby on Rails for web applications.

    • What for: Ruby is mainly used for web development, but can also be used for automation scripts, GUI applications, and testing.

    • Where it’s used: Web applications, automation tasks, API development.

    Benefits:

    • Ruby on Rails: The Ruby on Rails framework has made Ruby a popular choice for rapid web development. It follows the principle of “convention over configuration,” speeding up development.

    • Elegant syntax: Ruby’s syntax is designed to be both expressive and easy to read.

    Hello World Example:

    ruby -e 'puts "Hello World"'
    

    In-Depth Starting Point:

    • Ruby can be tested interactively or through simple script execution.
    # Simple Ruby script to fetch the contents of a URL
    ruby -e 'require "net/http"; puts Net::HTTP.get(URI("http://example.com"))'
    

    5. PHP

    Context & Usage:

    • Who uses it: Web developers, especially those working on server-side scripting for web applications.

    • What for: PHP is commonly used for dynamic web page generation and server-side scripting.

    • Where it’s used: Websites (especially CMSs like WordPress), backend development, and APIs.

    Benefits:

    • Web-centric: PHP was designed specifically for web development, with powerful features for working with databases and HTML generation.

    • Ubiquity in web hosting: PHP is widely supported by web hosting providers and powers a significant portion of the web.

    Hello World Example:

    php -r 'echo "Hello World\n";'
    

    In-Depth Starting Point:

    • Test basic PHP functionality and integration with web servers.
    # A simple PHP script to output current time
    php -r 'echo "Current time: " . date("Y-m-d H:i:s") . "\n";'
    

    6. JavaScript (Node.js)

    Context & Usage:

    • Who uses it: Full-stack developers, backend developers, and those working on real-time applications.

    • What for: JavaScript (Node.js) allows JavaScript to be used on the server-side to build scalable, event-driven applications.

    • Where it’s used: Web servers, real-time applications (chat, notifications), APIs, microservices.

    Benefits:

    • Single language for full-stack: Node.js allows JavaScript to be used both on the client-side (in the browser) and server-side (on the backend).

    • Non-blocking I/O: Node.js is known for its asynchronous, non-blocking I/O model, making it highly efficient for I/O-heavy applications.

    Hello World Example:

    node -e 'console.log("Hello World");'
    

    In-Depth Starting Point:

    • Node.js can be tested for its asynchronous capabilities with event-driven scripts.
    # A simple Node.js script to log current time every second
    node -e 'setInterval(() => console.log(new Date()), 1000);'
    

    7. C

    Context & Usage:

    • Who uses it: Systems programmers, embedded system developers, and developers working on performance-critical applications.

    • What for: C is used for low-level system programming, embedded systems, and developing software that interacts directly with hardware.

    • Where it’s used: Operating systems, embedded systems, device drivers, real-time applications.

    Benefits:

    • Performance: C is a low-level language that provides fine control over system resources, making it the language of choice for high-performance applications.

    • Portability: Code written in C can be compiled to run on a wide variety of systems, from embedded devices to supercomputers.

    Hello World Example:

    #include <stdio.h>
    int main() {
        printf("Hello World\n");
        return 0;
    }
    

    In-Depth Starting Point:

    • To test C, you would need a C compiler (e.g., GCC) to compile and run the code.
    gcc hello.c -o hello && ./hello
    

    8. C++

    Context & Usage:

    • Who uses it: Systems programmers, game developers, and developers of performance-intensive applications.

    • What for: C++ is used for object-oriented programming and systems-level applications that require high performance.

    • Where it’s used: Game engines, desktop applications, real-time systems, performance-critical software.

    Benefits:

    • Object-Oriented Programming: C++ adds support for classes and objects to C, making it easier to manage large, complex codebases.

    • Performance: Like C, C++ offers fine-grained control over system resources, making it ideal for real-time applications.

    Hello World Example:

    #include <iostream>
    int main() {
        std::cout << "Hello World" << std::endl;
        return 0;
    }
    

    In-Depth Starting Point:

    • Compile and run C++ code for performance testing.
    g++ hello.cpp -o hello && ./hello
    

    9. Java

    Context & Usage:

    • Who uses it: Enterprise developers, Android developers, and backend system developers.

    • What for: Java is primarily used for building large-scale enterprise applications, Android apps, and server-side components.

    • Where it’s used: Enterprise applications, Android apps, large-scale web servers.

    Benefits:

    • Cross-Platform: Java’s “write once, run anywhere” philosophy allows Java applications to run on any system with a JVM (Java Virtual Machine).

    • Rich Ecosystem: Java has a vast ecosystem of libraries and frameworks, including Spring for backend systems and Android for mobile apps.

    Hello World Example:

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello World");
        }
    }
    

    In-Depth Starting Point:

    • Compile and run Java code to explore object-oriented principles.
    javac HelloWorld.java && java HelloWorld
    

    Next is a table with the name of the language, date of inception, and a corresponding Hello World example:

    Language Date of Inception Hello World Example
    Bash 1989 echo "Hello World"
    Python 1991 python3 -c 'print("Hello World")'
    Perl 1987 perl -e 'print "Hello World\n";'
    Ruby 1995 ruby -e 'puts "Hello World"'
    PHP 1994 php -r 'echo "Hello World\n";'
    JavaScript (Node.js) 2009 node -e 'console.log("Hello World");'
    C 1972 printf("Hello World\n");
    C++ 1983 std::cout << "Hello World" << std::endl;
    Java 1995 System.out.println("Hello World");
    Go 2009 fmt.Println("Hello World")
    Rust 2010 fn main() { println!("Hello World"); }
    Lua 1993 lua -e 'print("Hello World")'
    Haskell 1990 main = putStrLn "Hello World"
    Shell Script 1989 echo "Hello World"
    AWK 1977 awk 'BEGIN {print "Hello World"}'
    Tcl 1988 tclsh <<< 'puts "Hello World"'
    R 1993 Rscript -e 'cat("Hello World\n")'
    Kotlin 2011 println("Hello World")
    Swift 2014 print("Hello World")
    Julia 2012 julia -e 'println("Hello World")'

    Notes:

    • Bash: Inception was around 1989 by Brian Fox. It's a shell scripting language, often used for system administration tasks.
    • Python: Created by Guido van Rossum in 1991. Known for its simplicity and readability, Python is widely used in web development, data analysis, and scripting.
    • Perl: Developed by Larry Wall in 1987. Known for text processing and used heavily in system administration and web development.
    • Ruby: Created by Yukihiro Matsumoto in 1995. Ruby is famous for its elegant syntax and the Ruby on Rails framework for web development.
    • PHP: Created by Rasmus Lerdorf in 1994, primarily for web development to build dynamic content on websites.
    • JavaScript (Node.js): JavaScript was created in 1995, but Node.js, a runtime environment, was created by Ryan Dahl in 2009. Used for building scalable server-side applications.
    • C: Developed by Dennis Ritchie in 1972. It remains one of the most influential programming languages, used for system programming, embedded systems, and applications that require high performance.
    • C++: Developed by Bjarne Stroustrup in 1983. It builds on C and adds object-oriented programming features. It's used for game development, embedded systems, and high-performance applications.
    • Java: Developed by James Gosling at Sun Microsystems in 1995. Java is widely used for enterprise-level applications, Android development, and large-scale systems.
    • Go: Created by Google in 2009. Known for its simplicity, speed, and concurrency features, Go is widely used for web servers, distributed systems, and cloud computing.
    • Rust: Created by Mozilla in 2010. Rust is known for its memory safety and performance, used in system programming, and other high-performance applications.
    • Lua: Developed by Roberto Ierusalimschy in 1993. Lua is a lightweight scripting language commonly embedded in games and applications for configuration or scripting.
    • Haskell: Created in 1990, it's a purely functional programming language used for research, academic purposes, and systems requiring high levels of mathematical computation.
    • Shell Script: A type of script commonly used in Unix-like systems to automate tasks, write system maintenance scripts, and handle administrative tasks.
    • AWK: Developed in 1977, AWK is a pattern scanning and processing language used for text and data processing tasks.
    • Tcl: Created by John Ousterhout in 1988. Known for its use in embedded systems, testing, and automation.
    • R: Created by Ross Ihaka and Robert Gentleman in 1993, R is used primarily for statistical computing and data analysis.
    • Kotlin: Developed by JetBrains in 2011, Kotlin is a modern, statically-typed language that runs on the Java Virtual Machine (JVM) and is now heavily used for Android development.
    • Swift: Created by Apple in 2014, Swift is a modern language used for iOS and macOS application development.
    • Julia: Created in 2012 for high-performance numerical and scientific computing. It's widely used in data science, machine learning, and large-scale computational tasks.

    Each of these languages has evolved in different directions based on the needs of developers in various industries, from system-level programming and web development to data analysis and machine learning.

    Conclusion:

    Each language comes with its own set of strengths, contexts, and use cases. Whether it's the system-level control of C, the ease of web development in Python or Ruby, or the performance of C++ and Rust, these languages offer rich ecosystems and excellent developer support. By testing them at the Bash prompt, you can start to get a feel for each language's capabilities, from system automation with Bash to interactive and asynchronous tasks with Node.js. Each interpreter brings something unique to the table, making them essential tools for different domains in software development.

  • Posted on

    Linux Bash (Bourne Again Shell) is incredibly versatile and fun to use. Here are 10 enjoyable things you can do with it.

    Customize Your Prompt

    Use PS1 to create a custom, colorful prompt that displays the current time, username, directory, or even emojis.

    export PS1="\[\e[1;32m\]\u@\h:\[\e[1;34m\]\w\[\e[0m\]$ "
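
    The time and emoji mentioned above could look like this (a sketch only: it assumes a UTF-8 terminal for the emoji, and the colours are a matter of taste):

```shell
# \t = 24-hour time, \u = user, \h = host, \w = working directory
# (assumes a UTF-8 terminal for the emoji; colours are illustrative)
export PS1="\[\e[1;33m\]\t \[\e[1;32m\]\u@\h:\[\e[1;34m\]\w\[\e[0m\] 🐚 $ "
```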

    Play Retro Games

    Install and play classic terminal-based games like nethack, moon-buggy, or bsdgames.

    Make ASCII Art

    Use tools like toilet, figlet, or cowsay to create text-based art.

    echo "Hello Linux!" | figlet

    Create Random Passwords

    Generate secure passwords using /dev/urandom or Bash functions.

    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16
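
    A minimal sketch of the Bash-function approach, wrapping the same pipeline (the genpass name is our own):

```shell
# genpass: print a random alphanumeric password (function name is illustrative)
genpass() {
    local len="${1:-16}"
    # LC_ALL=C stops tr choking on invalid multibyte input from urandom
    LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c "$len"
    echo
}

genpass 20
```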

    Turn Your Terminal into a Weather Station

    Use curl to fetch weather data from APIs like wttr.in.

    curl wttr.in


    Bash is a playground for creativity and efficiency—experiment with it, and you’ll discover even more possibilities!

  • Posted on

    Introduction

    Signals are used for interaction between processes and can occur at any time. Typically they are termination (kill) signals, but a process can opt to handle them programmatically, unless they are SIGKILL or SIGSTOP signals, which can never be caught or ignored.
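
    In Bash, programmatic handling is done with the trap builtin. A minimal sketch:

```shell
#!/bin/bash
# Minimal sketch: clean up a temp file whenever the script is told to stop.
# SIGKILL (9) and SIGSTOP (19) can never be trapped.
tmpfile=$(mktemp)

cleanup() {
    rm -f "$tmpfile"
    echo "cleaned up"
}

trap cleanup INT TERM EXIT
echo "working with $tmpfile"
```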

    List Of Available Signals

    Table of signals

    SIGNAL VALUE DEFAULT ACTION POSIX? MEANING
    SIGHUP 1 Terminate Yes Hangup detected on controlling terminal or death of controlling process
    SIGINT 2 Terminate Yes Interrupt from keyboard
    SIGQUIT 3 Core dump Yes Quit from keyboard
    SIGILL 4 Core dump Yes Illegal instruction
    SIGTRAP 5 Core dump No Trace/breakpoint trap for debugging
    SIGABRT SIGIOT 6 Core dump Yes Abnormal termination
    SIGBUS 7 Core dump Yes Bus error
    SIGFPE 8 Core dump Yes Floating point exception
    SIGKILL 9 Terminate Yes Kill signal (cannot be caught or ignored)
    SIGUSR1 10 Terminate Yes User-defined signal 1
    SIGSEGV 11 Core dump Yes Invalid memory reference
    SIGUSR2 12 Terminate Yes User-defined signal 2
    SIGPIPE 13 Terminate Yes Broken pipe: write to pipe with no readers
    SIGALRM 14 Terminate Yes Timer signal from alarm
    SIGTERM 15 Terminate Yes Process termination
    SIGSTKFLT 16 Terminate No Stack fault on math co-processor
    SIGCHLD 17 Ignore Yes Child stopped or terminated
    SIGCONT 18 Continue Yes Continue if stopped
    SIGSTOP 19 Stop Yes Stop process (can not be caught or ignored)
    SIGTSTP 20 Stop Yes Stop typed at tty (e.g. CTRL-Z)
    SIGTTIN 21 Stop Yes Background process requires tty input
    SIGTTOU 22 Stop Yes Background process requires tty output
    SIGURG 23 Ignore No Urgent condition on socket (4.2 BSD)
    SIGXCPU 24 Core dump Yes CPU time limit exceeded (4.2 BSD)
    SIGXFSZ 25 Core dump Yes File size limit exceeded (4.2 BSD)
    SIGVTALRM 26 Terminate No Virtual alarm clock (4.2 BSD)
    SIGPROF 27 Terminate No Profile alarm clock (4.2 BSD)
    SIGWINCH 28 Ignore No Window resize signal (4.3 BSD, Sun)
    SIGIO SIGPOLL 29 Terminate No I/O now possible (4.2 BSD) (System V)
    SIGPWR 30 Terminate No Power Failure (System V)
    SIGSYS SIGUNUSED 31 Terminate No Bad system call. Unused signal
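
    You can cross-check this table against your own system: Bash's built-in kill can list all signal names and translate between names and numbers.

```shell
# List every signal name the shell knows about
kill -l

# Translate between numbers and names
kill -l 15      # prints TERM
kill -l TERM    # prints 15
```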

  • Posted on

    Introduction

    A computer doing more than one thing at a time is running processes; these require resources: CPU time, memory and access to other devices such as CD/DVD/USB drives. Each process is allocated a share of system resources to perform its function, controlled by the operating system, whose job it is to facilitate these processes. Signals play an important part in the interaction between processes; usually they carry exit and other status information from one process to another, or to the process itself.

    Programs, Processes, and Threads

    A program is a set of instructions to be carried out, which may use local data (such as input read from the terminal) or external data (which may come from a database). Common programs include ls, cat and rm; these live outside the kernel as their own executable files on disk.

    A process is a program in execution together with its associated resources; these might include an environment, open files, signal handlers and so on. Two or more threads of execution may share resources within the same process, making it a multi-threaded process.

    In other operating systems there may be a distinction between heavy-weight and light-weight processes; a heavy-weight process may contain a number of light-weight ones. In Linux, however, how heavy or light a process is simply comes down to how many resources it shares, which affects context-switching speed. Unlike some other operating systems, Linux is very fast at switching between processes, and at creating and destroying them, so its model for multi-threaded processes is similar to that for ordinary processes: threads are scheduled much like processes. All the same, Linux respects POSIX and other standards for multi-threaded processes, where each thread returns the same process ID plus its own thread ID.

    Processes

    Processes are programs in execution, either running or sleeping. Every process has a process ID (pid), parent process ID (ppid) and a process group ID (pgid). In addition every process has program code, data, variables, file descriptors and an environment.

    In Linux, init is the first process that is run, thus becoming the ancestor of all other programs executed on the system, unless they are started directly from the Linux kernel, in which case they are shown in brackets ([]) in the ps listing. If a parent process ends before its child, the child is adopted by init, setting its parent process ID (ppid) to 1; kernel threads, by contrast, are children of kthreadd and so have a ppid of 2 on systems where kthreadd is present. In the circumstance where a child dies before its parent, it enters a zombie state, using no resources and retaining only its exit code. It is the job of init to allow these processes to die gracefully, and it is often referred to as the zombie killer or child reaper. Finally, the process ID is by default limited to a signed 16-bit range, hence 32768 is the largest pid you will normally find on a Linux system; to alter this value see /proc/sys/kernel/pid_max. Once this limit is reached, numbering restarts at 300.
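
    You can inspect the current ceiling yourself via the Linux-specific pseudo-file mentioned above (the 4194304 figure is the usual 64-bit kernel maximum, quoted here as an example):

```shell
# Current largest pid the kernel will hand out (default 32768 = 2^15)
cat /proc/sys/kernel/pid_max

# Raising it requires root, e.g.:
# echo 4194304 | sudo tee /proc/sys/kernel/pid_max
```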

    Process Attributes

    • All processes have certain attributes, as shown below:
      • Program
      • Context
      • Permissions
      • Resources

    The program is the process itself, maybe it is a set of commands in a script or a loop which never ends checking against external data and performing an action when necessary.

    The context is the state which the program is in at any point in time, thus a snapshot of CPU registers, what is in RAM and other information. Furthermore, processes can be scheduled in and out when sharing CPU time or put to sleep when waiting for user input, etc. Being able to swap out and put back the process context is known as context switching.

    Permissions are inherited from the user who executed the program; however, programs themselves can have an s (setuid) bit assigned to them in order to define their effective user ID rather than their real ID, and these are referred to as setuid programs. Setuid programs run with the permissions of the owner of the program, not the user who ran it. A commonly found setuid program is passwd.

    Process Resource Isolation

    This is the practice of isolating the process from other processes upon execution, promoting security and stability. Furthermore, processes do not have direct access to hardware devices, etc. Hardware is managed by the kernel meaning system calls must be made in order to interact with said devices.

    Controlling Processes (ulimit)

    This program, ulimit, reports and sets a number of resource limits associated with processes running under its shell. See below for a typical output of ulimit -a.

    ulimit -a

    The output of ulimit -a contains values a system administrator should be aware of, because it identifies the allocation of resources. You may want to restrict or expand resource allocation depending on the requirements of the processes and/or file access limits.

    Hard and soft resource limits come into play here: hard limits are imposed by the administrator, while soft limits can be adjusted by users up to the hard limit. Run ulimit -H -n to see the hard limit on open files and ulimit -S -n for the soft limit. File descriptors are probably the most common limit that needs changing; typically this is set to 1024, which may make running some applications virtually impossible, so you can raise it with, for example, ulimit -n 1600. To make the change permanent, edit /etc/security/limits.conf and then reboot.

    Process States

    Processes can take on many different states, which are managed by the scheduler. The main process states are:

    • Running
    • Sleeping (waiting)
    • Stopped
    • Zombie

    A running process is either using the CPU or will be waiting for its next CPU time execution slice, usually cited in the run queue - resuming when the scheduler deems it satisfactory to re-enter CPU execution.

    A sleeping process is one that is waiting on a request that cannot continue until it completes. Upon completion, the kernel will wake the process and reposition it in the run queue.

    A stopped process is one that has been suspended and is used normally by developers to take a look at resource usage at any given point. This can be activated with CTRL-Z or by using a debugger.

    A zombie process is one that has entered terminated state and no other process has inquired about it, namely “reaped it” - this can often be called a “defunct process”. The process releases all of its resources except its exit state.

  • Posted on

    Introduction

    After reading this document you should be able to identify why Linux defines its filesystem hierarchy as one big tree, explain the role of the Filesystem Hierarchy Standard, explain what is available at boot in the root directory (/), and explain each subdirectory's purpose and typical contents. The aim here is to be able to create a working Bash script which knows where to put its different data stores, including lock files, database(s) and temporary files, as well as the script itself.

    One Big Filesystem

    As with all Linux installations there is a set protocol to follow, which can be looked at as one big tree starting from its root, /. This tree contains not just typical file and folder components but also mount points for drives, USB or CD/DVD media volumes and so on, and it can span many partitions while still presenting one filesystem. The end result is one big filesystem, meaning applications generally do not care what volume or partition the data resides upon. The only drawback you may encounter is different naming conventions; however, there are now standards in the Linux ecosystem for cross-platform conformity.

    Defining The Data

    There has to be one method for defining all data in order to draw clear distinctions. Firstly, you may examine data and identify whether it is shareable or not. For instance, /home data may be shared across many hosts, whereas .pid lock files will not be. Another angle is whether the files are variable or static: static data remains the same unless an administrator changes it, while variable data changes while the filesystem is in operation, without human interaction. With this in mind, you must identify which trees and sub-trees are accessible by your application or command prompt, whether they can be manipulated at runtime, and where such files should reside if you are creating them.

    To summarise:

    • Shared data is common to all systems
    • Non-shareable data is local to one system
    • Static data never changes when left alone
    • Variable data will change during application processing
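
    As a sketch of how a script might respect these distinctions (the myapp name and paths are illustrative, and the system-wide /var locations require root to write to):

```shell
#!/bin/bash
# Sketch of FHS-aware data locations for a script (the myapp name and
# /var paths are illustrative; writing under /var requires root).
APP=myapp

LOCKFILE="/var/lock/${APP}.lock"   # non-shareable, variable
DATADIR="/var/lib/${APP}"          # persistent, variable
LOGFILE="/var/log/${APP}.log"      # variable, grows over time

# Temporary working file: variable data, lost on reboot
TMPFILE=$(mktemp "/tmp/${APP}.XXXXXX")
echo "scratch work goes in $TMPFILE"
rm -f "$TMPFILE"
```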

    The Filesystem Hierarchy Standard aims to help achieve unity across all platforms; however, different distributions often invent new methodologies that generally become standard over time, so while the FHS (Filesystem Hierarchy Standard) publishes its standard, newer conventions are already in play. See here: linuxfoundation.org...

    DIRECTORY FHS Approved PURPOSE
    / Yes Primary directory of the entire filesystem hierarchy.
    /bin Yes Essential executable programs that must be available in single user mode.
    /boot Yes Files needed to boot the system, such as the kernel, initrd or initramfs images, boot configuration files and bootloader programs.
    /dev Yes Device Nodes, used to interact with hardware and software devices.
    /etc Yes System-wide configuration files.
    /home Yes User home directories, including personal settings, files, etc.
    /lib Yes Libraries required by executable binaries in /bin and /sbin.
    /lib64 No 64-bit libraries required by executable binaries in /bin and /sbin, for systems which can run both 32-bit and 64-bit programs.
    /media Yes Mount points for removable media such as CDs, DVDs, USB sticks, etc.
    /mnt Yes Temporarily mounted filesystems.
    /opt Yes Optional application software packages.
    /proc Yes Virtual pseudo-filesystem giving information about the system and processes running on it. Can be used to alter system parameters.
    /sys No Virtual pseudo-filesystem giving information about devices, drivers and kernel subsystems. Similar to a device tree and is part of the Unified Device Model.
    /root Yes Home directory for the root user.
    /sbin Yes Essential system binaries.
    /srv Yes Site-specific data served up by the system. Seldom used.
    /tmp Yes Temporary files; on many distributions lost across a reboot and may be a ramdisk in memory.
    /usr Yes Multi-user applications, utilities and data; theoretically read-only.
    /var Yes Variable data that changes during system operation.

    Run du --max-depth=1 -hx / to see the output of your root filesystem hierarchy.

    The Root Directory (/)

    Starting with our first directory, the root directory (/): this is often the access point mounted across multiple (or single) partitions, with other locations such as /home, /var and /opt mounted later. This root partition must contain all root directories and files at boot in order to serve the system. Therefore it needs boot loader information, configuration files and other essential startup data, which must be adequate to perform the following operations:

    • Boot the system
    • Restore the system from external devices such as USB, CD/DVD or NAS
    • Recover and/or repair the system (ie. in rescue mode)

    The root directory / should never have folders created directly within it; period.

    Binary Files (/bin)

    • The /bin directory must be present for a system to function. It contains essential commands, available to non-privileged users and system administrators alike, including commands needed before the filesystem is even mounted. It is commonplace to store non-essential binaries which do not merit going in /bin under /usr/bin; however, using a single directory is becoming more acceptable in common operation, and in fact in RHEL /bin and /usr/bin are the same directory. Often symbolic links are used from /bin to other folder locations in order to preserve two-way folder listings.

    They are as follows: cat, chgrp, chmod, chown, cp, date, dd, df, dmesg, echo, false, hostname, kill, ln, login, ls, mkdir, mknod, more, mount, mv, ps, pwd, rm, rmdir, sed, sh, stty, su, sync, true, umount and uname

    Other binaries that may be present during boot up and in normal operation are: test, csh, ed, tar, cpio, gunzip, zcat, netstat, ping

    The Boot Directory (/boot)

    This folder contains the vmlinuz and initramfs (also known as initrd) files, which are put there in order to serve the boot operation; the first is the compressed kernel and the second is the initial RAM filesystem. Other files include config and System.map.

    Device Files (/dev)

    The /dev directory holds device nodes, commonly serving as references to various hardware devices; network cards, however, are more likely to be named eth0 or wlan0, meaning they are referenced by name rather than through a node here. Nodes in /dev are created automatically by udev when system hardware is found. Quite aptly, ls /dev | grep std will show you the standard stream nodes (stdin, stdout and stderr), which can be used to direct data to the terminal or between programs.
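
    As a quick illustration, the standard-stream nodes behave like ordinary files for redirection:

```shell
# The standard streams as device nodes
echo "to standard output" > /dev/stdout
echo "to standard error"  > /dev/stderr
echo "discarded entirely" > /dev/null
```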

    Configuration Files (/etc)

    Used to contain config directives (or contained folders with config directives) for system-wide programs and more importantly system services.

    Some examples are: csh.login, exports, fstab, ftpusers, gateways, gettydefs, group, host.conf, hosts.allow, hosts.deny, hosts,equiv, hosts.lpd, inetd.conf, inittab, issue, ld.so.conf, motd, mtab, mtools.conf, networks, passwd, printcap, profile, protocols, resolv.conf, rpc, securetty, services, shells, syslog.conf

    More crucially, the following help keep the system correctly configured:

    • /etc/skel - contains the skeleton of any new user's /home directory
    • /etc/systemd - points to or contains configuration scripts for system services, called by service
    • /etc/init.d - contains startup and shutdown scripts used by System V initialisation

    System Users (/home)

    On Linux, users' working directories are given in the /home/{username} format and are typically named after a convention such as /home/admin, /home/projects, /home/staging or /home/production. Typically, this could be the user's name, nickname or purpose, eg. /home/steve, /home/steve-work and so on.

    With Linux, this folder can be accessed via the ~ symbol, which directs to the currently logged-in user's home directory, eg. ls ~/new-folder; it is also accessible via $HOME. The only caveat is that the root user is placed in /root - all other users reside in /home, typically mirroring /etc/skel as previously outlined. (see "Configuration Files (/etc)")

    System Libraries (/lib and /lib64)

    These folders are for libraries serving binary files found in /bin or other locations where scripts are found. These libraries are important because they maintain the upkeep of essential system programs (binaries) which help boot the system and then are used by the system once booted, fundamentally. Kernel modules (device or system drivers) are stored in /lib/modules and PAM (Pluggable Authentication Modules) are stored in /lib/security.

    For systems running both 32-bit and 64-bit programs, /lib64 is usually present. More commonplace is to use a single folder with symbolic links to the actual destination of each library, similar to how /bin has reformed back into one folder, providing the same structure with separation of differing program importance (using symbolic links) while maintaining a single source for all libraries of that type.

    External Devices (/media)

    The /media folder is often found to be a single source for all removable media. USB, CD, DVD even the ancient Floppy Disk Drive. Linux mounts these automatically using “udev” which in turn creates references inside /media making for simple access to external devices, autonomously. As the device is removed, the name of the file in this directory is removed also.

    Temporary Mounts (/mnt)

    This folder is used for mount points, usually temporary ones. During the development of the FHS this would typically contain removable devices however /media is favoured on modern systems.

    Typical use scenarios are:

    • NFS
    • Samba
    • CIFS
    • AFS

    Generically, this should not be used by applications, instead mounted disks should be located elsewhere on the system.

    Software Packages (/opt)

    This location is where you would put system-wide software packages with everything included in one place, for services that want to provide everything in a single directory, so you would have /opt/project/bin etc. all within this folder. The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man are reserved for administrator usage.

    System Processes (/proc)

    These are special files, mounted much like /dev, which are constantly changing. They only contain data at the point you make a request: a file may be listed as 0 bytes, yet when read with cat or opened in vi it produces many lines of data, generated on demand; on disk it does indeed remain empty. Important pseudo files are /proc/interrupts, /proc/meminfo, /proc/mounts, and /proc/partitions.
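
    You can observe this on-demand behaviour directly: the listing reports 0 bytes, but reading the file yields data.

```shell
# The listing reports a size of 0...
ls -l /proc/meminfo

# ...yet reading it returns data, generated at the moment of the request
head -n 3 /proc/meminfo
```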

    System Filesystems (/sys)

    This directory is the mount point for the sysfs pseudo-filesystem where all information resides only in memory, not on disk. This is again very much like /dev/ and /proc in that it contains mounted volumes which are created on system boot. Containing information about devices and drivers, kernel modules and so on.

    Root (/root)

    This is generally pronounced "slash-root" and is simply the primary system administrator's home folder. Creating other user accounts with specific access rights is encouraged for better security.

    System Binaries (/sbin)

    This is very similar to /bin and as mentioned may very well have symbolic link references inside /bin with the actual program residing here. This allows for a one-solution-fits-all because /bin will display all programs whereas /sbin would be designated to the programs listed there. Some of these programs include: fdisk, fsck, getty, halt, ifconfig, init, mkfs, mkswap, reboot, route, swapon, swapoff and update.

    System Services (/srv)

    Popular with some administrators, this is designed to provide site-specific data served by the system. You can be fairly lax with naming conventions here; you may want to group applications into folders such as ftp, rsync, www, and cvs. It is popular with those that use it and may be overlooked by those who don't.

    Temporary Files (/tmp)

    Used by programs that do not want to keep the data between system boots and it may be periodically refreshed by the system. Use at your own discretion. Be aware as this is truly temporary data any large files may cause issues as often the information is stored in memory.

    System User (/usr)

    This should be thought of as a second system hierarchy. Containing non-local data it is best practice to serve administrator applications here and is often used for files and packages or software that is not needed for booting.

    DIRECTORY PURPOSE
    /usr/bin Non-essential command binaries
    /usr/etc Non-essential configuration files (usually empty)
    /usr/games Game data
    /usr/include Header files used to compile applications
    /usr/lib Library files
    /usr/lib64 Library files for 64-bit
    /usr/local Third-level hierarchy (for machine local files)
    /usr/sbin Non-essential system binaries
    /usr/share Read-only architecture-independent files
    /usr/src Source code and headers for the Linux kernel
    /usr/tmp Secondary temporary directory

    Other common destinations are /usr/share/man and /usr/local, the former is for manual pages and the latter is for predominantly read-only binaries.

    Variable Data (/var)

    This directory is intended for variable (volatile) data and as such is often updated quite frequently. Contains log files, spool directories and files administrator files and transient files such as cache data.

    SUBDIRECTORY PURPOSE
    /var/ftp Used for ftp server base
    /var/lib Persistent data modified by programs as they run
    /var/lock Lock files used to control simultaneous access to resources
    /var/log Log files
    /var/mail User mailboxes
    /var/run Information about the running system since the last boot
    /var/spool Tasks spooled or waiting to be processed, such as print queues
    /var/tmp Temporary files to be preserved across system reboot. Sometimes linked to /tmp
    /var/www Root for website hierarchies

    Transient Files (/run)

    These are files that are updated quite regularly and are not preserved across reboots, making /run useful for temporary files and runtime information. The use of /run is quite new, and you may find /var/run and /var/lock as symbolic links to it; its use is more commonplace on modern systems.

  • Posted on

    When it happens that your VPS is eating disk space by the second and there are disk read/write issues, one port of call you are bound to visit is searching for and identifying large files on your system.

    Now, you would have been forgiven for thinking this is a complicated procedure considering some Linux Bash solutions for fairly simple things, but no. Linux Bash wins again!

    du -sh /path/to/folder/* | sort -rh

    Here, du gets the sizes (-s summarises each item, -h prints human-readable sizes) and sort organises them; sort's -h understands those human-readable sizes and -r puts the largest first.

    The output should be something like this:

    2.3T    /path/to/directory
    1.8T    /path/to/other
    

    It does take a while, as the sizes are computed recursively; however, give it 3-5 minutes and most scenarios will be fine.
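
    The du command above ranks directories; if you want the largest individual files instead, a similar pipeline works (the path is a placeholder, as before):

```shell
# Largest individual files (rather than directories), biggest first
find /path/to/folder -type f -exec du -h {} + 2>/dev/null | sort -rh | head -n 10
```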

  • Posted on

    So yeah, getting used to Bash is about finding the right way to do things. However, learning one-liners and picking up information here and there is all very useful, finding something you don't need to Google in order to recall is extremely important.

    Take the case of recursive find and replace. We've all been there, it needs to be done frequently but you're either a) scared or b) forgetful. Then you use the same snippet from a web resource again and again and eventually make a serious error and boom, simplicity is now lost on you!

    So here it is, something you can remember so that you don't use different methods depending on what Google throws up today.

    grep -Rl newertext . | xargs sed -i 's/newertext/newesttext/g'

    Notice, we use grep to search -R recursively, with -l listing the matching filenames; the results are fed to xargs, which runs a simple sed in-place replace, where the g flag makes the substitution global (every occurrence on each line).
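
    One caveat: filenames containing spaces will break the plain pipe. A NUL-separated variant is safer, assuming GNU grep and findutils (-Z and -0 are GNU options):

```shell
# NUL-separated variant, safe for filenames containing spaces
# (-Z and -0 are GNU grep / findutils xargs options)
grep -RlZ newertext . | xargs -0 sed -i 's/newertext/newesttext/g'
```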

    Say you want to keep it simple though and find and review the files before doing the "simple" replace. Well, do this.

    grep -R -l "foo" ./*

    The uppercase -R denotes following symlinks, and the ./* indicates the contents of the current directory. The -l requests a list of filenames matched. Neat.