Linux Bash

Providing immersive and explanatory content in a simple way that anybody can understand.

  • Posted on

    Explanatory Synopsis and Overview of "The Linux Command Line"

    "The Linux Command Line" by William E. Shotts Jr. is a practical and thorough guide to the Linux command-line interface (CLI). Below is an overview of its content, restructured and summarized in my interpretation for clarity and focus:


    Part 1: Introduction to the Command Line

    This part introduces the Linux shell, emphasizing the importance of the CLI in managing Linux systems.

    • What is the Shell? Explains the shell as a command interpreter and introduces Bash as the default Linux shell.

    • Basic Navigation: Covers essential commands for exploring the file system (ls, pwd, cd) and understanding the hierarchical structure.

    • File Management: Explains creating, moving, copying, and deleting files and directories (cp, mv, rm, mkdir).

    • Viewing and Editing Files: Introduces basic tools like cat, less, nano, and echo.
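    As a sketch of how the Part 1 commands fit together, here is a short session in a scratch directory (the paths and file names are illustrative, not taken from the book):

    ```shell
    mkdir -p /tmp/clc-demo        # create a practice directory
    cd /tmp/clc-demo
    pwd                           # print the current location
    echo "hello" > notes.txt      # create a small file
    cp notes.txt backup.txt       # copy it
    mv backup.txt archive.txt     # rename the copy
    ls                            # list the files
    cat notes.txt                 # view the contents
    rm archive.txt                # delete the renamed copy
    ```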


    Part 2: Configuration and Customization

    Focuses on tailoring the Linux environment to enhance user productivity.

    • Environment Variables: Discusses what environment variables are, how to view them (env), and how to set them.

    • Customizing the Shell: Explains configuration files like .bashrc and .profile, as well as creating aliases and shell functions.

    • Permissions and Ownership: Introduces Linux file permissions (chmod, chown), symbolic representations, and user roles.
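    To make the customization ideas concrete, here is a hypothetical snippet of the kind one might add to .bashrc (the names and values are examples, not the book's):

    ```shell
    # Shorthand alias for a detailed file listing
    alias ll='ls -alF'

    # Environment variable: set the default text editor
    export EDITOR=nano

    # A small shell function: create a directory and enter it in one step
    mkcd() {
        mkdir -p "$1" && cd "$1"
    }
    ```

    After editing .bashrc, run source ~/.bashrc (or open a new terminal) to load the changes.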


    Part 3: Mastering Text Processing

    This section explores tools and techniques for handling text, a critical skill for any Linux user.

    • Working with Pipes and Redirection: Explains how to chain commands and redirect input/output using |, >, and <.

    • Text Search and Filtering: Covers tools like grep and sort for searching, filtering, and organizing text.

    • Advanced Text Manipulation: Introduces powerful tools such as sed (stream editor) and awk (pattern scanning and processing).
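    The Part 3 tools are easiest to grasp side by side on a tiny sample file (the contents are invented for the example):

    ```shell
    # Build a small data file
    printf 'alice 42\nbob 17\nalice 9\n' > /tmp/scores.txt

    grep 'alice' /tmp/scores.txt              # filter: lines matching a pattern
    sort /tmp/scores.txt                      # organize: order lines alphabetically
    sed 's/alice/ALICE/' /tmp/scores.txt      # stream-edit: substitute text
    awk '{ sum += $2 } END { print sum }' /tmp/scores.txt   # process: total the second column
    ```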


    Part 4: Shell Scripting and Automation

    Delves into creating scripts to automate repetitive tasks.

    • Introduction to Shell Scripting: Explains script structure, how to execute scripts, and the shebang (#!).

    • Control Structures: Covers conditionals (if, case) and loops (for, while, until).

    • Functions and Debugging: Teaches how to write reusable functions and debug scripts using tools like set -x and bash -x.

    • Practical Examples: Provides real-world examples of automation, such as backups and system monitoring.
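    As a flavor of what Part 4 builds toward, here is a minimal backup-style script combining a shebang, a function, and a conditional (the paths are placeholders, not the book's examples):

    ```shell
    #!/bin/bash
    # Archive a directory into a compressed tarball.

    backup() {
        local src="$1" dest="$2"
        if [ -d "$src" ]; then
            tar -czf "$dest" -C "$src" .       # pack the directory's contents
            echo "backed up $src to $dest"
        else
            echo "source $src not found" >&2
            return 1
        fi
    }

    mkdir -p /tmp/demo-data
    echo "sample" > /tmp/demo-data/file.txt
    backup /tmp/demo-data /tmp/demo-backup.tar.gz
    ```

    Running a script like this with bash -x prints each command as it executes, which is the debugging technique the book describes.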


    Additional Features

    • Command Reference: Includes a concise reference for common commands and their options.

    • Appendices: Offers supplementary material, such as tips for selecting a text editor and an introduction to version control with Git.


    What Makes This Version Unique?

This synopsis groups the content into themes to give readers a logical progression:

    1. Basics First: Starting with navigation and file management.
    2. Customization: Encouraging users to make the CLI their own.
    3. Text Processing Mastery: A vital skill for working with Linux data streams.
    4. Scripting and Automation: The crown jewel of command-line expertise.

    This structure mirrors the book's balance between learning and applying concepts, making it a practical and user-friendly resource for anyone eager to excel in Linux.

  • Posted on

    Understanding Bash Shell: What is it and Why is it Popular?

    The Bash shell, short for Bourne Again Shell, is a Unix shell and command-line interface that is widely used in Linux, macOS, and other Unix-based operating systems. Bash serves as both an interactive command processor and a powerful scripting language, making it a versatile tool for system administration, development, and everyday tasks.


    What is the Bash Shell?

    A shell is a program that interprets user commands and communicates with the operating system to execute them. The Bash shell was introduced in 1989 as a free and improved version of the Bourne shell (sh), offering advanced features while maintaining compatibility with older scripts.

    Key Features of Bash:

    1. Command-Line Interface: Allows users to execute commands, manage files, and control processes.
    2. Scripting Language: Supports writing and executing scripts for task automation.
    3. Customizable Environment: Offers aliases, environment variables, and configuration files like .bashrc for personalization.
    4. Job Control: Manages running processes with features like background execution and job suspension.
    5. Rich Built-in Utilities: Includes commands like echo, read, test, and others for script functionality.
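    A quick sketch of the built-ins mentioned above (echo, read, and test), using a here-string so the example runs without interactive input:

    ```shell
    read -r name <<< "Ada"           # read: capture input into a variable
    if test -n "$name"; then         # test: check the variable is non-empty
        echo "Hello, $name"          # echo: print output
    fi
    ```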

    Why is Bash Popular?

    1. Default Shell on Linux: Bash is the default shell in most Linux distributions, ensuring widespread use and familiarity.
    2. Compatibility: Fully backward-compatible with the Bourne shell, enabling support for legacy scripts.
    3. Powerful Scripting: Simplifies the automation of repetitive or complex tasks with robust scripting capabilities.
    4. Cross-Platform Availability: Runs on Linux, macOS, and Windows (via WSL), making it accessible across operating systems.
    5. Community Support: A vast community provides documentation, tutorials, and examples for beginners and advanced users.
    6. Flexibility: Highly customizable with aliases, scripts, and extensions to suit user workflows.

    Popular Use Cases of Bash

    1. System Administration

      • Automating backups.
      • Managing user accounts and permissions.
      • Monitoring system performance.
    2. Development and DevOps

      • Setting up development environments.
      • Continuous Integration/Continuous Deployment (CI/CD) pipelines.
      • Managing version control systems like Git.
    3. Everyday Tasks

      • Batch renaming files.
      • Searching text with grep or finding files with find.
      • Downloading files using wget or curl.
    4. Data Processing

      • Parsing and transforming text files.
      • Combining tools like awk, sed, and sort.
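    For instance, a common text-processing chain counts how often each line occurs and reports the most frequent one (the log levels here are made up):

    ```shell
    printf 'error\nwarn\nerror\ninfo\nerror\nwarn\n' > /tmp/levels.txt

    # sort groups duplicates, uniq -c counts them, sort -rn ranks by count
    sort /tmp/levels.txt | uniq -c | sort -rn | head -n 1
    ```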

    Advantages of Using Bash

    1. Lightweight and Fast: Minimal resource consumption compared to graphical tools.
    2. Accessible: Available on nearly every Unix-like system.
    3. Extensible: Supports the addition of functions, aliases, and external tools.
    4. Powerful Integration: Seamlessly integrates with system utilities and programming tools.

    Learning Bash: Where to Begin

    1. Understand the Basics:

      • Familiarize yourself with basic commands (ls, cd, pwd, mkdir).
      • Practice file and directory management.
    2. Explore Scripting:

      • Start with simple scripts (e.g., a "Hello World" script).
      • Learn about variables, loops, and conditionals.
    3. Experiment with Advanced Tools:

      • Use tools like grep, sed, and awk for text processing.
      • Combine multiple commands with pipes (|) and redirection (>, >>).
    4. Utilize Resources:

      • Online tutorials and courses.
      • Books like "The Linux Command Line" or "Bash Guide for Beginners".
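    Putting steps 2 and 3 together, a beginner's script might use a variable, a loop, a conditional, and a pipe (everything here is illustrative):

    ```shell
    #!/bin/bash
    greeting="Hello"
    for name in Ada Grace Linus; do
        if [ "$name" = "Linus" ]; then
            echo "$greeting, $name!"
        else
            echo "$greeting, $name"
        fi
    done | tee /tmp/greetings.txt      # print the output and save it to a file
    ```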

    Conclusion

    Bash's combination of simplicity, power, and versatility makes it an essential tool for anyone working with Linux or Unix-based systems. Whether you are a system administrator, developer, or enthusiast, mastering Bash unlocks unparalleled efficiency and control over your computing environment. Dive into Bash today and experience why it remains a cornerstone of modern computing!

  • Posted on

    Introduction to Bash: What You Need to Know to Get Started

    Bash, short for Bourne Again Shell, is a command-line interpreter widely used in Linux and Unix systems. It's both a powerful scripting language and a shell that lets you interact with your operating system through commands. Whether you're an IT professional, a developer, or simply someone curious about Linux, understanding Bash is a critical first step.


    What is Bash?

    Bash is the default shell for most Linux distributions. It interprets commands you type or scripts you write, executing them to perform tasks ranging from file management to system administration.


    Why Learn Bash?

    1. Control and Efficiency: Automate repetitive tasks and streamline workflows.
    2. Powerful Scripting: Write scripts to manage complex tasks.
    3. System Mastery: Understand and manage Linux or Unix systems effectively.
    4. Industry Standard: Bash is widely used in DevOps, cloud computing, and software development.

    Basic Concepts to Get Started

    1. The Command Line

    • Bash operates through a terminal where you input commands.
    • Common terminal emulators include gnome-terminal (Linux), Terminal.app (macOS), and Windows Terminal with WSL (Windows).

    2. Basic Commands

    • pwd: Print working directory (shows your current location in the file system).
    • ls: List files and directories.
    • cd: Change directories.
    • touch: Create a new file.
    • mkdir: Create a new directory.

    3. File Manipulation

    • cp: Copy files.
    • mv: Move or rename files.
    • rm: Remove files.
    • cat: Display file content.

    4. Using Flags

    • Many commands accept flags to modify their behavior. For example, ls -l displays detailed information about files.
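    A few variations of the same command show how flags combine (the output depends on your files):

    ```shell
    ls -l /tmp       # long listing: permissions, owner, size, and date
    ls -a /tmp       # include hidden entries (names starting with a dot)
    ls -lh /tmp      # combine flags: long listing with human-readable sizes
    ```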

    Getting Started with Bash Scripts

    Bash scripts are text files containing a sequence of commands.

    1. Create a Script File
      Use a text editor to create a file, e.g., script.sh.

    2. Add a Shebang
      Start the script with a shebang (#!/bin/bash) to specify the interpreter.

      #!/bin/bash
      echo "Hello, World!"
      
    3. Make the Script Executable
      Use chmod to give execution permissions:

      chmod +x script.sh
      
    4. Run the Script
      Execute the script:

      ./script.sh
      

    Key Tips for Beginners

    • Use Tab Completion: Start typing a command or file name and press Tab to auto-complete.
    • Use Man Pages: Learn about a command with man <command>, e.g., man ls.
    • Practice Regularly: Familiarity comes with practice.

    Resources to Learn Bash

    • Online Tutorials: Websites like Linux Academy, Codecademy, or free YouTube channels.
    • Books: "The Linux Command Line" by William Shotts.
    • Interactive Platforms: Try Bash commands on a virtual machine or cloud platforms like AWS CloudShell.

    Getting started with Bash unlocks a world of productivity and power in managing systems and automating tasks. Dive in, and happy scripting!

  • Posted on

    How to Install an Apache Web Server Powered by PHP and MySQL on Linux CLI Using Package Managers

    This guide outlines the steps to install Apache, PHP, and MySQL (often referred to as the LAMP stack: Linux, Apache, MySQL, and PHP) on a Linux system using package managers such as APT (for Debian/Ubuntu-based distributions) and DNF (for CentOS/RHEL-based distributions).

    Prerequisites

    • A Linux server (Debian/Ubuntu/CentOS 8+).
    • Root or sudo privileges to install and configure packages.
    • Access to the command line or SSH.

    Step-by-Step Installation:


    Step 1: Update the System

    Before installing any packages, it’s important to ensure that your system is up to date.

    For Debian/Ubuntu (APT)

    sudo apt update
    sudo apt upgrade -y
    

    For CentOS 8+ (DNF)

    sudo dnf update -y
    

    Step 2: Install Apache Web Server

    Apache is the most widely used web server to serve static and dynamic content.

    For Debian/Ubuntu (APT)

    Install Apache with the following command:

    sudo apt install apache2 -y
    

    Enable Apache to start on boot:

    sudo systemctl enable apache2
    

    Start Apache:

    sudo systemctl start apache2
    

    Verify that Apache is running:

    sudo systemctl status apache2
    

    For CentOS 8+ (DNF)

    Install Apache (httpd) using the following:

    sudo dnf install httpd -y
    

    Enable Apache to start on boot:

    sudo systemctl enable httpd
    

    Start Apache:

    sudo systemctl start httpd
    

    Verify that Apache is running:

    sudo systemctl status httpd
    

    To ensure Apache is accessible through the firewall, run:

    sudo firewall-cmd --permanent --add-service=http
    sudo firewall-cmd --reload
    

    Step 3: Install PHP

    Apache alone can serve static content, but to serve dynamic content (such as web applications), we need PHP.

    For Debian/Ubuntu (APT)

    Install PHP and the required Apache module:

    sudo apt install php libapache2-mod-php -y
    

    Restart Apache to load the PHP module:

    sudo systemctl restart apache2
    

    For CentOS 8+ (DNF)

    On CentOS 8+, PHP can be installed from the default CentOS repositories or from the Remi repository if you need a more recent version of PHP. To install PHP from the default CentOS 8+ repositories:

    sudo dnf install php php-mysqlnd php-fpm -y
    

    After installation, restart Apache to apply the PHP configuration:

    sudo systemctl restart httpd
    

    Step 4: Install MySQL (or MariaDB)

    MySQL (or MariaDB, which is a drop-in replacement for MySQL) is needed to store and retrieve data for web applications.

    For Debian/Ubuntu (APT)

    Install MySQL:

    sudo apt install mysql-server -y
    

    Secure the MySQL installation:

    sudo mysql_secure_installation
    

    Start MySQL:

    sudo systemctl start mysql
    

    Enable MySQL to start on boot:

    sudo systemctl enable mysql
    

    Verify MySQL is running:

    sudo systemctl status mysql
    

    For CentOS 8+ (DNF)

    CentOS 8 comes with MariaDB as the default MySQL-compatible database server. To install MariaDB:

    sudo dnf install mariadb-server -y
    

    Start MariaDB:

    sudo systemctl start mariadb
    

    Enable MariaDB to start on boot:

    sudo systemctl enable mariadb
    

    Secure the MariaDB installation:

    sudo mysql_secure_installation
    

    Verify MariaDB is running:

    sudo systemctl status mariadb
    

    Step 5: Verify the Installation of Apache, PHP, and MySQL

    Now that Apache, PHP, and MySQL/MariaDB are installed, it’s important to verify that everything is working correctly.

    Check Apache and PHP Integration

    1. Create a PHP info file to check if PHP is correctly integrated with Apache:
    sudo nano /var/www/html/info.php
    
    2. Add the following PHP code:
    <?php
    phpinfo();
    ?>
    
    3. Save and exit the file (press CTRL+X, then Y, then Enter).

    4. In your web browser, visit:

    http://localhost/info.php
    

    You should see a page displaying detailed PHP configuration information, confirming that Apache and PHP are working together.

    Check MySQL/MariaDB Connection

    To check if MySQL/MariaDB is working, log into the database:

    sudo mysql -u root -p
    

    Enter your root password when prompted, and if you see the MariaDB/MySQL prompt (MariaDB [(none)]>), then the installation was successful.
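    From that prompt, a typical next step is to create a database and a dedicated user for your application. The names and password below are placeholders — substitute your own:

    ```
    CREATE DATABASE myapp_db;
    CREATE USER 'myapp_user'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
    GRANT ALL PRIVILEGES ON myapp_db.* TO 'myapp_user'@'localhost';
    FLUSH PRIVILEGES;
    EXIT;
    ```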


    Step 6: Configure Apache to Use PHP

    If you haven’t already, ensure Apache is configured to handle PHP files.

    For Debian/Ubuntu (APT)

    Apache should automatically be configured to use PHP after installing the libapache2-mod-php module. If it's not, you can enable the PHP module manually with:

    sudo a2enmod php8.1  # Replace 8.1 with your installed PHP version
    sudo systemctl restart apache2
    

    For CentOS 8+ (DNF)

    CentOS 8 will automatically configure Apache to work with PHP. If it's not working, you can ensure that PHP is set up with the following command:

    sudo systemctl restart httpd
    

    Step 7: Remove the PHP Info File (Optional)

    After verifying PHP is working, it's a good idea to delete the info.php file for security reasons:

    sudo rm /var/www/html/info.php
    

    Step 8: Additional Configuration (Optional)

    You may want to configure Apache to serve multiple websites or adjust certain PHP settings.

    Set Up Virtual Hosts (Optional)

    If you want to serve multiple websites, create a configuration file for each site.

    1. Create a new configuration file for your site:
    sudo nano /etc/httpd/conf.d/mywebsite.conf  # CentOS
    
    2. Add the virtual host configuration:
    <VirtualHost *:80>
        ServerAdmin webmaster@mywebsite.com
        DocumentRoot /var/www/mywebsite
        ServerName mywebsite.com
        ErrorLog /var/log/httpd/mywebsite_error.log
        CustomLog /var/log/httpd/mywebsite_access.log combined
    </VirtualHost>
    
    3. Create the directory for the site:
    sudo mkdir /var/www/mywebsite
    
    4. Restart Apache to apply the changes:
    sudo systemctl restart httpd  # CentOS
    

    Conclusion

    You now have a fully functional LAMP stack (Apache, PHP, and MySQL/MariaDB) on your Linux system, ready to serve dynamic websites. Here's a recap of what we did:

    • Installed Apache to serve web content.

    • Installed PHP for dynamic content.

    • Installed MySQL/MariaDB for database management.

    • Configured Apache and PHP to work together.

    • Created virtual hosts for managing multiple sites (optional).

    With Apache, PHP, and MySQL/MariaDB running, your server is ready for hosting web applications, whether it's for a simple website or a full-fledged content management system (CMS).

  • Posted on

    Switching to Linux from another operating system (e.g., Windows or macOS) can be both exciting and challenging. While Linux offers flexibility, power, and control, it also comes with a steep learning curve for those not familiar with its unique characteristics. Some concepts and practices may seem baffling or even frustrating at first, leading to what many describe as "WTF" moments. Here are the 10 most challenging "WTF" topics when switching to Linux:

    1. Package Management and Software Installation

    • What is it? In Linux, software is managed using package managers (e.g., APT, DNF, YUM, Pacman). Instead of downloading executable files from websites, most software is installed via a repository or a package manager.
    • Why it’s confusing: Coming from Windows or macOS, where software is typically downloaded as standalone apps or installers, the concept of repositories, package versions, dependencies, and the need to use terminal commands to install software can be overwhelming.
    • WTF Moment: “Why is it so hard to install this app? I thought I was just supposed to click a button!”
    • Why it’s Important: Learning package management helps users understand the core concept of system stability and security. By installing software via official repositories, you ensure compatibility with your distribution and avoid the risks of malware from unverified sources. Understanding package management also prepares users to handle software dependencies, updates, and removals more efficiently.
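    By way of illustration, installing the same tool looks nearly identical across the major package-manager families (htop is just an example package — run only the line that matches your distribution):

    ```shell
    sudo apt update && sudo apt install htop   # Debian/Ubuntu (APT)
    sudo dnf install htop                      # Fedora/RHEL family (DNF)
    sudo pacman -S htop                        # Arch Linux (Pacman)
    ```

    In each case the package manager resolves dependencies automatically — the very complexity that makes the first encounter confusing.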

    2. The Terminal (Command Line Interface)

    • What is it? The terminal (or shell) is a command-line interface where users input text commands to interact with the system.
    • Why it’s confusing: Most new Linux users come from graphical user interfaces (GUIs) where everything is done through menus and clicks. The terminal can feel foreign and intimidating because you’re expected to know commands to perform tasks.
    • WTF Moment: “I have to type all of that just to copy a file? Where are the graphical tools?”
    • Why it’s Important: Mastering the terminal opens up a vast array of possibilities. Efficiency and automation are significantly enhanced when you learn to navigate the terminal. It also teaches you about the low-level control you have over your system, offering flexibility that is impossible in graphical environments. The terminal is an essential tool for troubleshooting, scripting, and system administration, making it crucial for anyone serious about using Linux.

    3. File System Layout (The Linux Filesystem Hierarchy)

    • What is it? Unlike Windows, which uses drive letters (e.g., C:\ and D:\), Linux organizes everything under a single root directory (/) in a standardized hierarchy that includes directories like /bin, /usr, /home, /var, and more.
    • Why it’s confusing: Coming from Windows or macOS, users expect a simpler file structure, but Linux has different conventions and places files in specific directories based on function.
    • WTF Moment: “Why is everything in this weird /etc and /lib folder? Where are my programs?”
    • Why it’s Important: Learning the filesystem hierarchy tells you where things live: binaries in /bin and /usr/bin, configuration in /etc, user data in /home, logs and variable data in /var. Because nearly every Linux system follows these conventions, knowing them makes troubleshooting, scripting, and system administration far more predictable.

    4. Permissions and Ownership (Sudo, Root, and User Rights)

    • What is it? Linux has a strict user permission system where files and system settings are owned by specific users and groups. The sudo command is used to temporarily gain root (administrator) access.
    • Why it’s confusing: In Windows, administrative privileges are handled in a more straightforward way through an account with admin rights. On Linux, users frequently need to understand the difference between their own privileges and the root (superuser) privileges to perform critical system tasks.
    • WTF Moment: “Why can’t I just install this app? It says I don’t have permission!”
    • Why it’s Important: Linux’s strict permission model protects system files and critical resources from accidental or malicious changes. Understanding the difference between ordinary user rights, sudo, and root fosters the practice of least privilege, which minimizes the risk of unauthorized access or damage to the system.
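    A short, self-contained illustration of the permission model (a scratch file, safe to run as a normal user):

    ```shell
    touch /tmp/perm-demo.txt
    ls -l /tmp/perm-demo.txt       # shows the permission bits, owner, and group
    chmod 600 /tmp/perm-demo.txt   # numeric form: owner read/write, nobody else
    chmod u+x /tmp/perm-demo.txt   # symbolic form: add execute for the owner
    ls -l /tmp/perm-demo.txt       # now -rwx------
    ```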

    5. Distributions (Distros) and Their Variants

    • What is it? There are hundreds of Linux distributions (distros), each with its own purpose and package management system (e.g., Ubuntu, Fedora, Arch, Debian, etc.).
    • Why it’s confusing: The sheer number of distros and their differences in terms of installation, usage, package management, and software availability can be overwhelming for new users. Choosing the right one can feel like an insurmountable decision.
    • WTF Moment: “Why are there so many versions of Linux? Why is Ubuntu different from Fedora, and what’s the difference between them?”
    • Why it’s Important: Understanding the distinction between distros helps users select the best tool for the job, based on their needs. It encourages users to think critically about customizability, performance, and community support. This decentralization is also central to Linux’s philosophy, giving users the freedom to tailor their system to their exact needs. It’s an exercise in flexibility and choice, which is core to Linux’s appeal.

    6. Software Compatibility (Running Windows Apps)

    • What is it? Linux doesn’t natively run Windows applications. However, there are tools like Wine, Proton, and virtual machines that allow Windows software to run on Linux.
    • Why it’s confusing: New users coming from Windows often expect to be able to run their familiar applications (like Microsoft Office, Photoshop, or games) on Linux, but that’s not always straightforward.
    • WTF Moment: “I need a whole different tool just to run this Windows app? Why can’t I just install it like I would on Windows?”
    • Why it’s Important: While this limitation can be frustrating, it encourages users to explore native Linux applications and open-source alternatives, fostering a shift toward Linux-first thinking. It also promotes an understanding of the principles of software development and the importance of creating cross-platform tools. Overcoming this challenge helps users gain a deeper appreciation for system compatibility and the diversity of available software in the open-source ecosystem.

    7. System Updates and Upgrades

    • What is it? Linux distributions are frequently updated with new features, bug fixes, and security patches. The upgrade process may vary depending on the distro.
    • Why it’s confusing: In Windows and macOS, updates often occur automatically and are less frequent, whereas Linux systems may require users to run commands to update or upgrade their system and software, sometimes resulting in unexpected issues during upgrades.
    • WTF Moment: “Why is my system upgrading right now? Where’s the update button? Why do I need to do this in the terminal?”
    • Why it’s Important: The process of updating and upgrading on Linux teaches users about the underlying package management system and the role of distribution maintainers in system security and stability. This level of control allows users to decide when and how to implement updates, which is an important aspect of customizability. It also reinforces the idea of minimal disruption in a system that prioritizes uptime and reliability.

    8. Drivers and Hardware Compatibility

    • What is it? In Linux, hardware drivers, especially for proprietary hardware (e.g., Nvidia graphics cards, Wi-Fi adapters), may not be as seamless as on Windows or macOS.
    • Why it’s confusing: Most Linux distributions come with a wide range of open-source drivers out-of-the-box, but certain hardware may require proprietary drivers or additional configuration.
    • WTF Moment: “My Wi-Fi card isn’t working! Why is it so hard to install drivers for my hardware?”
    • Why it’s Important: Dealing with drivers on Linux helps users understand the importance of open-source drivers and the challenges faced by hardware manufacturers in providing Linux-compatible drivers. It also underscores the community-driven nature of Linux development, as many hardware drivers are developed and maintained by the community. Navigating this issue encourages users to advocate for better hardware support and to seek out Linux-compatible hardware.

    9. Software Dependency Issues (Library Conflicts)

    • What is it? Some software packages in Linux require specific libraries or dependencies that need to be installed first. If the correct versions of these libraries aren’t present, the software won’t work.
    • Why it’s confusing: Unlike Windows, where most applications come with all the required dependencies bundled, Linux apps often rely on shared system libraries, leading to dependency hell (when two packages need different versions of the same library).
    • WTF Moment: “I tried to install an app, but it says it’s missing a library. What is that? Why can’t I just click and install it?”
    • Why it’s Important: This challenge introduces users to the concept of software dependencies, libraries, and the importance of package management in ensuring that software works properly on a Linux system. Learning to resolve dependency issues encourages users to become familiar with the internals of software packaging and system libraries, which is crucial for troubleshooting and advanced system administration.

    10. The Concept of "Everything is a File"

    • What is it? In Linux, almost everything is treated as a file, including hardware devices, system processes, and even certain types of system resources.
    • Why it’s confusing: In Windows, hardware devices, processes, and resources are typically managed through control panels or system preferences. In Linux, understanding how these entities are represented as files in directories like /dev or /proc can be baffling.
    • WTF Moment: “Why is my printer just a file in /dev? I don’t even know how to open it! Why is everything a file?”
    • Why it’s Important: This concept is foundational to understanding Linux’s elegance and simplicity. It reflects the Unix philosophy of making the system as transparent and flexible as possible. By treating everything as a file, Linux users can interact with hardware, processes, and system resources in a consistent and predictable way. This uniform approach simplifies tasks like device management, logging, and process control, leading to a more streamlined experience once understood.
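    A few safe commands make the idea tangible on any Linux system:

    ```shell
    ls -l /dev/null                # a device, listed like any other file
    echo "discarded" > /dev/null   # write to a device with ordinary redirection
    head -n 2 /proc/meminfo        # kernel memory statistics, read like a text file
    cat /proc/uptime               # seconds since boot, exposed as a file
    ```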

    Conclusion

    Switching to Linux can be a challenging journey, especially for those coming from more familiar operating systems like Windows or macOS. The learning curve may feel steep, but these "WTF" moments are part of the process of understanding and embracing Linux's unique strengths. Once a user overcomes these initial hurdles, they often find themselves with a more customizable, secure, and powerful operating system, with the added benefit of being part of a global open-source community.

  • Posted on

    The change in popularity of open-source operating systems, particularly in the wake of Red Hat's decision to shift CentOS to an upstream provider (CentOS Stream), has been significant. This decision fundamentally altered the landscape of enterprise Linux distributions and led to the rise of alternative distributions such as AlmaLinux and Rocky Linux. Here's a closer look at the changes in popularity, the rationale behind them, and why people should consider switching to distributions like AlmaLinux or Rocky Linux.

    The Shift in CentOS's Role and Its Impact

    Historically, CentOS (Community ENTerprise Operating System) was a free and open-source distribution that closely mirrored Red Hat Enterprise Linux (RHEL), providing a stable, production-ready platform for users who needed enterprise-level features without the cost of RHEL's commercial support. Many businesses, hosting providers, and developers relied on CentOS for its stability and compatibility with RHEL, especially in production environments where software stability and long-term support were critical.

    However, in December 2020, Red Hat announced a significant change to CentOS's role: CentOS would no longer be a direct downstream rebuild of RHEL. Instead, it would be rebranded as CentOS Stream, which is positioned as a rolling-release distribution that sits between Fedora (a community-driven, cutting-edge distribution) and RHEL (the enterprise version). CentOS Stream became a preview of what would eventually be included in RHEL, making it less stable and more volatile compared to the previous CentOS model.

    The Response to Red Hat’s Change

    Red Hat’s decision to shift CentOS to CentOS Stream was met with backlash from a significant portion of the community, especially from enterprises and developers who had relied on CentOS for its stability and RHEL compatibility. Many in the open-source community expressed concerns that CentOS Stream would not be suitable for production environments where stability and long-term support were crucial.

    As a result, several organizations and community members started looking for alternatives to CentOS, leading to the creation of new distributions designed to fill the gap. These alternatives aimed to provide a stable, RHEL-compatible experience without the "rolling release" nature of CentOS Stream.

    The Rise of AlmaLinux and Rocky Linux

    In response to the change in CentOS, two major alternatives emerged as the most notable successors:

    1. AlmaLinux

      • Background: AlmaLinux was created by CloudLinux, a company known for its work with enterprise Linux servers. CloudLinux promised to continue offering a free, open-source RHEL-compatible distribution, aiming to fill the void left by CentOS's shift to CentOS Stream. AlmaLinux is designed to be binary-compatible with RHEL, ensuring that users can migrate seamlessly from CentOS to AlmaLinux without compatibility issues.
      • Key Features:
        • Fully RHEL-compatible.
        • Long-term support (LTS) with regular security updates.
        • Free and open-source.
        • Backed by CloudLinux’s enterprise experience, providing extra stability for businesses.
      • Popularity: AlmaLinux quickly gained traction due to its backing from CloudLinux, its close alignment with RHEL, and its strong focus on long-term stability.
    2. Rocky Linux

      • Background: Rocky Linux was founded by Gregory Kurtzer, one of the original creators of CentOS. Rocky Linux’s goal was to provide a community-driven, RHEL-compatible distribution that would continue the spirit of CentOS as a downstream rebuild of RHEL, with a focus on stability and reliability.
      • Key Features:
        • Full binary compatibility with RHEL.
        • Community-driven and nonprofit, with a focus on openness and transparency.
        • Long-term support and stability, making it ideal for production environments.
      • Popularity: Rocky Linux quickly attracted a strong community, particularly due to its connection to the original CentOS team and its focus on maintaining the stability CentOS users valued.

    Why Switch to AlmaLinux or Rocky Linux?

    Given the changes in CentOS, AlmaLinux and Rocky Linux have become the go-to choices for many who seek an alternative. Here’s why people should consider switching to these distributions:

    1. Stability and Reliability:

      • Both AlmaLinux and Rocky Linux are designed to provide RHEL compatibility without the rolling-release model of CentOS Stream. This means they offer stable, production-ready environments ideal for enterprise use, hosting, and mission-critical applications.
      • Organizations that need long-term support and a stable OS for their infrastructure benefit from the continuity these distributions offer.
    2. Free and Open-Source:

      • Just like CentOS, both AlmaLinux and Rocky Linux are free and open-source. They provide the same enterprise-grade features as RHEL without the associated costs of a subscription, making them an excellent choice for businesses with tight budgets.
      • This openness also ensures that users can fully audit, customize, and contribute to the distributions.
    3. Seamless Migration from CentOS:

      • Both distributions are designed to be binary-compatible with RHEL, ensuring that software and applications that ran on CentOS will work seamlessly on AlmaLinux or Rocky Linux.
      • The migration path from CentOS to either AlmaLinux or Rocky Linux is straightforward, with tools and resources available to make the transition as smooth as possible.
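Both projects publish official in-place migration scripts for converting an existing CentOS 8 installation. A hedged sketch follows: the tool names (migrate2rocky for Rocky Linux, almalinux-deploy for AlmaLinux) are the projects' published scripts, but URLs and options change over time, so verify against the current official documentation before running anything as root:

```shell
# Sketch only - confirm the current script locations in the official docs first.

# Migrate an existing CentOS 8 host to Rocky Linux:
curl -O https://raw.githubusercontent.com/rocky-linux/rocky-tools/main/migrate2rocky/migrate2rocky.sh
sudo bash migrate2rocky.sh -r   # -r switches the repositories and converts packages

# Or migrate to AlmaLinux instead:
curl -O https://raw.githubusercontent.com/AlmaLinux/almalinux-deploy/master/almalinux-deploy.sh
sudo bash almalinux-deploy.sh
```

Either way, taking a full backup or snapshot beforehand is prudent, since the conversion rewrites core system packages.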
    4. Community-Driven Development:

      • Rocky Linux is a community-driven project, offering transparency and a strong emphasis on collaboration. It benefits from the contributions of the same people who helped build CentOS, ensuring that it stays aligned with the needs of its user base.
      • AlmaLinux, while backed by CloudLinux, also embraces community input and contributions, making it a robust choice for those seeking a free RHEL alternative supported by a company with a strong track record in the Linux space.
    5. Long-Term Support:

      • Both distributions provide long-term support (LTS), ensuring that users receive updates and patches over the course of many years, just like RHEL. This is a crucial factor for enterprises that need reliable, secure platforms for their systems without frequent disruptions.
    6. Enterprise-Ready:

      • Both AlmaLinux and Rocky Linux are suitable for enterprise environments where uptime, security, and reliability are paramount. With RHEL compatibility, they can run the same enterprise software and applications, but at no cost for the operating system itself.
    7. Growing Ecosystem:

      • As both distributions continue to grow, they’re gaining wider support within the enterprise Linux ecosystem, with many hosting providers and developers ensuring compatibility with AlmaLinux and Rocky Linux. As a result, businesses and developers can confidently use these distributions knowing they are supported by the broader open-source community and ecosystem.

    Conclusion

    The shift of CentOS to CentOS Stream significantly impacted the Linux ecosystem, particularly for users who depended on CentOS as a stable, RHEL-compatible platform. However, the rise of AlmaLinux and Rocky Linux has provided a much-needed alternative for those seeking a stable, long-term, free RHEL clone. These distributions offer a smooth migration path, strong community support, and enterprise-grade stability. Whether you're running servers, hosting environments, or mission-critical applications, switching to AlmaLinux or Rocky Linux ensures a stable and reliable platform that maintains the open-source principles of CentOS while filling the gap left by its change in direction.

  • Posted on

    As a system administrator, understanding the nuances of each Linux desktop environment is crucial when making an informed decision about which to deploy. Each environment offers distinct advantages in terms of system resources, customization, user experience, and compatibility with various distributions and use cases. Below is a breakdown of what system administrators should know about each of these desktop environments and window managers, along with insights into their popularity and relevance in the broader Linux ecosystem.

    1. GNOME

    • What to Know: GNOME is known for its simplicity and modern look. It prioritizes a clean, consistent user interface and workflow, often regarded as the "default" Linux desktop. However, it can be more resource-intensive, so it's less ideal for older or less powerful hardware.
    • Popularity: GNOME is one of the most popular desktop environments, often the default in distributions like Ubuntu (GNOME Shell), Fedora, and Debian.
    • Relevance: GNOME is a popular choice for users seeking a polished, user-friendly desktop, and for enterprises or professional environments where usability and consistency matter.

    2. KDE Plasma

    • What to Know: KDE Plasma is highly customizable, offering a rich user experience with advanced features. It's feature-packed but can be demanding on resources, although recent versions have optimized it significantly. Ideal for power users who want control over their desktop.
    • Popularity: KDE Plasma is widely used and is the default for distributions like Kubuntu, KDE neon, and openSUSE.
    • Relevance: KDE is suitable for users who need a visually appealing, full-featured desktop with extensive customization options.

    3. Xfce

    • What to Know: Xfce is known for being lightweight while still offering a traditional desktop experience. It's ideal for older hardware or users who want a fast, stable, and customizable environment without heavy resource usage.
    • Popularity: Xfce is one of the top choices for lightweight Linux distributions such as Xubuntu and Manjaro Xfce.
    • Relevance: It’s an excellent choice for low-resource systems and users who want performance without sacrificing essential functionality.

    4. Cinnamon

    • What to Know: Cinnamon is a user-friendly, full-featured desktop that provides a traditional desktop layout (similar to Windows). It’s known for its balance of usability and performance.
    • Popularity: Cinnamon is the default desktop for Linux Mint, one of the most popular Linux distributions.
    • Relevance: Cinnamon is great for users migrating from Windows who want a similar desktop experience with the power of Linux.

    5. MATE

    • What to Know: MATE is a continuation of GNOME 2, focusing on simplicity and stability. It’s lightweight but still offers a traditional desktop experience, making it a solid choice for users who prefer classic interfaces.
    • Popularity: MATE is the default for distributions like Ubuntu MATE and is appreciated in the lightweight desktop niche.
    • Relevance: MATE is perfect for those who prefer classic desktop paradigms without requiring significant system resources.

    6. LXQt

    • What to Know: LXQt is the successor to LXDE, designed to be a lightweight and fast desktop environment. It’s still evolving but has already gained traction due to its minimal resource consumption.
    • Popularity: It’s the default for Lubuntu and is used by some lightweight distributions.
    • Relevance: Ideal for low-end hardware, it offers a simple, efficient desktop experience with a low footprint.

    7. LXDE

    • What to Know: LXDE (Lightweight X11 Desktop Environment) is another lightweight desktop for low-resource systems. Though it has been largely superseded by LXQt, it’s still available and widely used for older systems.
    • Popularity: LXDE is used in lightweight distributions like Lubuntu and Debian LXDE.
    • Relevance: LXDE is perfect for users with older or resource-constrained systems.

    8. Pantheon

    • What to Know: Pantheon is a sleek, modern desktop environment designed for the elementary OS distribution. It’s visually appealing and focused on simplicity, providing a macOS-like experience.
    • Popularity: Pantheon is the default for elementary OS.
    • Relevance: It’s a great choice for users who prefer a simple, intuitive, and attractive desktop.

    9. Deepin

    • What to Know: Deepin is a visually rich and user-friendly desktop, designed to provide a modern, polished experience with deep integration of multimedia and system settings.
    • Popularity: Deepin is the default for the Deepin Linux distribution.
    • Relevance: This is an excellent choice for users who want a beautiful, easy-to-use desktop with strong multimedia features.

    10. Budgie

    • What to Know: Budgie offers a clean and modern desktop, focused on simplicity and efficiency. It provides a visually appealing interface and integrates well with the GNOME stack.
    • Popularity: Budgie is the default desktop for Solus and is growing in popularity in other distributions.
    • Relevance: Budgie is ideal for users who want a simple and beautiful desktop with a modern user experience.

    11. Enlightenment

    • What to Know: Enlightenment is highly customizable and offers a lightweight, minimalistic experience with advanced visual effects. It’s not for beginners due to its steep learning curve.
    • Popularity: Enlightenment is used in some distributions like Bodhi Linux.
    • Relevance: This is a good option for users who want to experiment and need maximum customization.

    12. i3

    • What to Know: i3 is a tiling window manager that doesn’t focus on a traditional desktop experience. It’s minimalistic, efficient, and highly customizable.
    • Popularity: i3 is popular among advanced users and is used in distributions like Arch Linux.
    • Relevance: It’s ideal for power users and those who want to maximize efficiency with a keyboard-driven workflow.

    13. Sway

    • What to Know: Sway is a Wayland-compatible replacement for i3, offering similar tiling functionality but with improved security and modern features.
    • Popularity: It’s growing in popularity among i3 users who want to use Wayland.
    • Relevance: Perfect for users who want the efficiency of i3 with the benefits of Wayland.

    14. Awesome

    • What to Know: Awesome is another tiling window manager focused on advanced users. It offers a high degree of customization, but also has a steep learning curve.
    • Popularity: Awesome is used by advanced users and developers, particularly in minimalist distributions.
    • Relevance: This is for users who prioritize performance and customization over traditional desktop features.

    15. Openbox

    • What to Know: Openbox is a highly customizable stacking window manager. It's lightweight and suitable for users who want a minimalist environment.
    • Popularity: Openbox is used in distributions like Arch Linux, CrunchBang++, and others.
    • Relevance: Openbox is ideal for users who prefer to build their desktop from the ground up with a focus on performance.

    16. Fluxbox

    • What to Know: Fluxbox is another lightweight window manager that focuses on simplicity and speed, similar to Openbox but with different configuration styles.
    • Popularity: Fluxbox is popular in lightweight distributions and for experienced users.
    • Relevance: Fluxbox is suitable for users seeking a minimalistic approach to their desktop environment.

    17. Cwm

    • What to Know: Cwm is a small, efficient window manager designed for simplicity. It has a minimalist design but includes some useful features.
    • Popularity: It’s favored by users who appreciate simplicity and speed.
    • Relevance: Ideal for users who want a no-frills, fast environment with minimal resource consumption.

    18. JWM (Joe’s Window Manager)

    • What to Know: JWM is a lightweight window manager with a focus on performance. It has a simple, classic interface and is suitable for older hardware.
    • Popularity: It’s used in lightweight distributions like Puppy Linux.
    • Relevance: JWM is ideal for users with limited resources and those looking for a minimal desktop setup.

    19. Herbstluftwm

    • What to Know: Herbstluftwm is a tiling window manager known for being scriptable and highly customizable, suitable for users who want to automate their desktop setup.
    • Popularity: It’s used by advanced users and enthusiasts.
    • Relevance: Great for users who want a highly personalized, lightweight, and efficient window manager.

    20. Blackbox

    • What to Know: Blackbox is a minimal window manager that provides a lightweight and basic environment, focusing on simplicity and speed.
    • Popularity: It's used in lightweight Linux distributions and by advanced users.
    • Relevance: Blackbox is for users who value simplicity and performance above all else.

    21. Window Maker

    • What to Know: Window Maker is a window manager that mimics the NeXTSTEP environment. It is lightweight and provides a simple desktop experience.
    • Popularity: Window Maker is often used in older Linux distributions.
    • Relevance: It's suitable for users who want a retro, efficient desktop with minimal resources.

    22. IceWM

    • What to Know: IceWM is a lightweight window manager that provides a simple desktop environment. It supports several visual themes and is ideal for older systems.
    • Popularity: Used in lightweight Linux distributions like AntiX and SliTaz.

    • Relevance: Ideal for users who want a minimalist environment on very low-resource hardware.

    23. AfterStep

    • What to Know: AfterStep is a window manager that emphasizes simplicity and efficiency, providing a desktop that can be customized through configuration files.
    • Popularity: Used in niche, minimalistic distributions.
    • Relevance: It’s a great choice for those who prefer a very lightweight, resource-conserving setup.

    24. Sugar

    • What to Know: Sugar is designed specifically for educational purposes, with a focus on learning and child-friendly interfaces. It’s part of the OLPC project.
    • Popularity: Mostly used in educational setups, particularly in the OLPC (One Laptop per Child) project.
    • Relevance: Sugar is vital in contexts where the desktop environment needs to be tailored to educational and developmental environments.

    Conclusion on Relevance of Having Multiple Desktop Environments in Linux:

    The diversity in desktop environments and window managers for Linux reflects the flexibility and versatility of the Linux ecosystem. For system administrators, this range of options is crucial, as it allows customization based on the following factors:

    • Resource Constraints: Environments like Xfce, LXQt, and i3 are ideal for lightweight setups, while GNOME and KDE Plasma offer feature-rich environments.
    • User Experience: Linux provides choices that cater to different user preferences, from traditional interfaces (Cinnamon, MATE) to modern, minimalistic setups (i3, Sway).
    • Use Cases: Some environments like Pantheon and Deepin are perfect for users seeking a polished, modern look, while others like Sugar focus on specific purposes (education).

    The broad array of desktop environments makes Linux adaptable for nearly any use case, whether for personal, enterprise, or educational purposes, while ensuring users can tailor their desktop to fit the needs of the hardware and workflow.

  • Posted on

    Here are three common ways to determine which process is listening on a particular port in Linux:


    1. Using lsof (List Open Files)

    • Command:

      sudo lsof -i :<port_number>

    • Example:

      sudo lsof -i :8080

    • Output:
      • The command shows the process name, PID, and other details of the process using the specified port.

    2. Using netstat (Network Statistics)

    • Command:

      sudo netstat -tuln | grep :<port_number>

    • Example:

      sudo netstat -tuln | grep :8080
    • Output:
      • Displays the protocol (TCP/UDP), local address, foreign address, and the process (if run with -p option on supported versions).

    Note: If netstat is not installed, on Debian-based systems you can install it via:

      sudo apt install net-tools


    3. Using ss (Socket Statistics)

    • Command:

      sudo ss -tuln | grep :<port_number>

    • Example:

      sudo ss -tuln | grep :8080
    • Output:
      • Displays similar information to netstat but is faster and more modern.

    Bonus: Using /proc Filesystem

    • Command:

      sudo grep :<hex_port> /proc/net/tcp

    • Example:

      sudo grep :1F90 /proc/net/tcp

      Replace :1F90 with the hexadecimal representation of the port (e.g., 8080 in hex is 1F90).

    • This is a more manual approach and requires converting the port to hexadecimal.
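The hex conversion can be scripted. A small sketch using printf (the zero-padded uppercase hex form matches the port portion of the local_address column in /proc/net/tcp):

```shell
#!/bin/bash
# Convert a decimal port to the uppercase hex form used in /proc/net/tcp,
# then search for sockets bound to it.
port=8080
hex_port=$(printf '%04X' "$port")
echo "Port $port in hex is $hex_port"   # 8080 -> 1F90

# Match the port portion of the local_address column (ADDR:PORT).
grep ":$hex_port" /proc/net/tcp /proc/net/tcp6 2>/dev/null \
    || echo "No listener found on port $port"
```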
  • Posted on

    In Bash, managing timers typically involves the use of two primary tools: Bash scripts with built-in timing features (like sleep or date) and Cron jobs (via crontab) for scheduled task execution. Both tools are useful depending on the level of complexity and frequency of the tasks you're managing.

    1. Using Timers in Bash (CLI)

    In Bash scripts, you can manage timers and delays by using the sleep command or date for time-based logic.

    a. Using sleep

    The sleep command pauses the execution of the script for a specified amount of time. It can be used for simple timing operations within scripts.

    Example:

    #!/bin/bash
    
    # Wait for 5 seconds
    echo "Starting..."
    sleep 5
    echo "Done after 5 seconds."
    

    You can also specify time in minutes, hours, or days:

    sleep 10m  # Sleep for 10 minutes
    sleep 2h   # Sleep for 2 hours
    sleep 1d   # Sleep for 1 day
    

    b. Using date for Timing

    You can also use date to track elapsed time or to schedule events based on current time.

    Example (Calculating elapsed time):

    #!/bin/bash
    
    start_time=$(date +%s)  # Get the current timestamp
    echo "Starting task..."
    sleep 3  # Simulate a task
    end_time=$(date +%s)  # Get the new timestamp
    
    elapsed_time=$((end_time - start_time))  # Calculate elapsed time in seconds
    echo "Elapsed time: $elapsed_time seconds."
    
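Bash also maintains a built-in SECONDS variable that counts wall-clock seconds since it was last assigned, which gives the same result without the two date calls. A brief sketch:

```shell
#!/bin/bash
# SECONDS resets to 0 on assignment, then increments once per second.
SECONDS=0
sleep 3  # Simulate a task
echo "Elapsed time: $SECONDS seconds."
```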

    2. Using crontab for Scheduling Tasks

    cron is a Unix/Linux service used to schedule jobs to run at specific intervals. The crontab file contains a list of jobs that are executed at scheduled times.

    a. Crontab Syntax

    A crontab entry follows this format:

    * * * * * /path/to/command
    │ │ │ │ │
    │ │ │ │ └── Day of the week (0-7) (Sunday=0 or 7)
    │ │ │ └── Month (1-12)
    │ │ └── Day of the month (1-31)
    │ └── Hour (0-23)
    └── Minute (0-59)
    
    • * means "every," so a * in a field means that job should run every minute, hour, day, etc., depending on its position.
    • You can use specific values or ranges for each field (e.g., 1-5 for Monday to Friday).

    b. Setting Up Crontab

    To view or edit the crontab, use the following command:

    crontab -e
    

    Example crontab entries:

    • Run a script every 5 minutes:

      */5 * * * * /path/to/script.sh

    • Run a script at 2:30 AM every day:

      30 2 * * * /path/to/script.sh

    • Run a script every Sunday at midnight:

      0 0 * * 0 /path/to/script.sh

    c. Managing Crontab Entries

    • List current crontab jobs:

      crontab -l

    • Remove all crontab entries (this deletes the entire crontab, so use with care):

      crontab -r

    d. Logging Cron Jobs

    By default, cron jobs do not provide output unless you redirect it. To capture output or errors, you can redirect both stdout and stderr to a file:

    * * * * * /path/to/script.sh >> /path/to/logfile.log 2>&1
    

    This saves both standard output and error messages to logfile.log.

    3. Combining Bash Timers and Cron Jobs

    Sometimes you might need to use both cron and timing mechanisms within a Bash script. For example, if a task needs to be scheduled but also requires some dynamic timing based on elapsed time or conditions, you could use cron to trigger the script, and then use sleep or date inside the script to control the flow.

    Example:

    #!/bin/bash
    
    # This script is triggered every day at midnight by cron
    
    # Wait 10 minutes before executing the task
    sleep 600  # 600 seconds = 10 minutes
    
    # Execute the task after the delay
    echo "Executing task..."
    # Your task here
    

    4. Advanced Scheduling with Cron

    If you need more complex scheduling, you can take advantage of specific cron features:

    • Use ranges or lists in the time fields:

      0 0,12 * * * /path/to/script.sh    # Run at midnight and noon every day

    • Run a task every 5 minutes during certain hours:

      */5 9-17 * * * /path/to/script.sh  # Every 5 minutes between 9 AM and 5 PM

    5. Practical Examples

    • Backup Every Night:

      0 2 * * * /home/user/backup.sh
      

      This runs the backup.sh script every day at 2 AM.

    • Check Server Health Every Hour:

      0 * * * * /home/user/check_server_health.sh
      

      This runs a script to check the server's health every hour.

    Conclusion

    Managing timers in Bash using cron and sleep allows you to automate tasks, control timing, and create sophisticated scheduling systems. sleep is suitable for in-script delays, while cron is ideal for recurring scheduled jobs. Combining these tools lets you create flexible solutions for a wide range of automation tasks.

  • Posted on

    OS Package Managers: Keeping Your System Up-to-Date

    Package managers are essential tools in modern operating systems (OS) that help automate the process of installing, updating, and removing software packages. These tools manage the software installed on a system, making it easier for users and administrators to keep their systems up-to-date with the latest versions of software. They provide a streamlined and efficient way to manage dependencies, handle software updates, and ensure system stability by preventing compatibility issues.

    Importance of Package Managers

    Package managers are crucial for maintaining system health and security, and they provide several benefits:

    1. Automatic Updates: Package managers track software versions and allow you to update all installed software at once with a single command. This ensures that you always have the latest security patches, performance improvements, and new features without needing to manually search for and download updates.

    2. Dependency Management: Many software packages depend on other libraries and programs to function. Package managers ensure that these dependencies are correctly installed and maintained, which reduces the likelihood of conflicts between different versions of libraries or missing dependencies.

    3. Security: Security is a major reason to use a package manager. Package managers allow you to easily update software to close vulnerabilities that could be exploited by attackers. Often, package repositories are curated and include only trusted, verified packages.

    4. Reproducibility: Package managers allow administrators to set up systems with the exact same configuration across multiple machines. This is especially important in server environments, where you want all systems to have the same set of software, libraries, and dependencies.

    5. Software Removal: Package managers make it easy to remove unwanted software. This ensures that unnecessary files, dependencies, and configurations are cleaned up, saving disk space and reducing the attack surface.

    6. Centralized Repository: Most package managers use centralized repositories where software is pre-compiled and tested, so users don’t need to manually compile code or find external download sources, minimizing risks from malicious software.


    Types of Package Managers

    There are different types of package managers depending on the operating system. Below, we will explore examples from different OS environments to see how package managers work.

    1. Linux Package Managers

    Linux distributions (distros) typically use package managers that vary based on the type of distribution. The most common Linux package managers are:

    • APT (Advanced Package Tool): Used in Debian-based systems such as Ubuntu.
    • YUM/DNF (Yellowdog Updater, Modified / Dandified YUM): Used in Red Hat-based systems such as CentOS, Fedora, and RHEL.
    • Zypper: Used in openSUSE and SUSE Linux Enterprise Server.
    • Pacman: Used in Arch Linux and Manjaro.

    Examples of Commands to Install and Update Software on Linux:

    1. APT (Ubuntu/Debian)

    • Install a package:

    sudo apt install <package-name>
    

    Example:

    sudo apt install vim
    
    • Update the system:
    sudo apt update
    sudo apt upgrade
    

    This updates the package list and upgrades all installed software to the latest available version in the repositories.

    • Upgrade a specific package:
    sudo apt install --only-upgrade <package-name>
    

    Example:

    sudo apt install --only-upgrade vim
    
    • Remove a package:
    sudo apt remove <package-name>
    

    Example:

    sudo apt remove vim
    
    2. YUM/DNF (CentOS/Fedora/RHEL)

    • Install a package:

    sudo yum install <package-name>   # YUM for older versions
    sudo dnf install <package-name>   # DNF for newer Fedora/CentOS/RHEL
    

    Example:

    sudo dnf install vim
    
    • Update the system:
    sudo dnf update
    

    This command updates the entire system, installing the latest versions of all packages.

    • Upgrade a specific package:
    sudo dnf upgrade <package-name>
    
    • Remove a package:
    sudo dnf remove <package-name>
    

    Example:

    sudo dnf remove vim
    
    3. Zypper (openSUSE)

    • Install a package:

    sudo zypper install <package-name>
    

    Example:

    sudo zypper install vim
    
    • Update the system:
    sudo zypper update
    
    • Remove a package:
    sudo zypper remove <package-name>
    

    Example:

    sudo zypper remove vim
    
    4. Pacman (Arch Linux)

    • Install a package:

    sudo pacman -S <package-name>
    

    Example:

    sudo pacman -S vim
    
    • Update the system:
    sudo pacman -Syu
    
    • Remove a package:
    sudo pacman -R <package-name>
    

    Example:

    sudo pacman -R vim
    

    2. macOS Package Manager

    On macOS, Homebrew is the most popular package manager, although there are alternatives such as MacPorts.

    • Homebrew:
      Homebrew allows macOS users to install software and libraries not included in the macOS App Store. It works by downloading and compiling the software from source or installing pre-built binaries.

    Examples of Commands to Install and Update Software on macOS:

    • Install a package:
    brew install <package-name>
    

    Example:

    brew install vim
    
    • Update the system:
    brew update
    brew upgrade
    
    • Upgrade a specific package:
    brew upgrade <package-name>
    
    • Remove a package:
    brew uninstall <package-name>
    

    Example:

    brew uninstall vim
    

    3. Windows Package Managers

    Windows traditionally didn't include package managers like Linux or macOS, but with the advent of Windows Package Manager (winget) and Chocolatey, this has changed.

    • winget (Windows Package Manager):
      Windows 10 and newer include winget, a command-line package manager for installing software.

    Examples of Commands to Install and Update Software on Windows:

    • Install a package:
    winget install <package-name>
    

    Example:

    winget install vim
    
    • Update a package:
    winget upgrade <package-name>
    
    • Update all installed software:
    winget upgrade --all
    
    • Remove a package:
    winget uninstall <package-name>
    

    Example:

    winget uninstall vim
    
    • Chocolatey:
      Chocolatey is another popular package manager for Windows, with a large repository of software.

    Install a package:

    choco install <package-name>
    

    Example:

    choco install vim
    

    Update a package:

    choco upgrade <package-name>
    

    Remove a package:

    choco uninstall <package-name>
    

    Conclusion

    Package managers provide a streamlined, automated way to manage software installation, updates, and removal. Whether you're on a Linux, macOS, or Windows system, a package manager ensures that your software is up-to-date, secure, and properly configured. By using package managers, you can easily manage dependencies, get the latest versions of software with minimal effort, and maintain system stability.

    Having the ability to run a single command to install or update software, like sudo apt update on Linux or brew upgrade on macOS, saves time and reduces the risks associated with manually downloading and managing software. Package managers have become a fundamental tool for system administrators, developers, and power users, making system maintenance and software management easier, faster, and more reliable.

  • Posted on

    The terms Bash and SH refer to two different types of shell environments used in Linux and other Unix-like operating systems. While they both serve the purpose of interacting with the system through command-line interfaces (CLI), they have notable differences in terms of features, compatibility, and functionality.

    1. Origins and Compatibility

    • SH (Bourne Shell): The Bourne shell, commonly referred to as sh, was developed by Stephen Bourne at AT&T Bell Labs in the 1970s. It became the standard shell for Unix systems, providing basic functionalities such as file manipulation, variable management, and scripting. Its design focused on simplicity and portability, making it a versatile tool for system administrators and users alike. SH is typically available on nearly all Unix-based systems, even today.

    • Bash (Bourne Again Shell): Bash is an enhanced version of the Bourne shell. Developed by Brian Fox for the GNU Project and first released in 1989, Bash incorporates features from other shells like C Shell (csh) and Korn Shell (ksh), in addition to expanding on the original Bourne shell's capabilities. While Bash is fully compatible with sh and can run most Bourne shell scripts without modification, it includes additional features that make it more user-friendly and versatile for modern system usage.

    2. Features and Enhancements

    • Bash: One of the primary reasons Bash is preferred over SH is its expanded set of features. These include:

      • Command-line editing: Bash supports advanced command-line editing, allowing users to move the cursor and edit commands using keyboard shortcuts (e.g., using the arrow keys to navigate through previous commands).
      • Job control: Bash provides the ability to suspend and resume jobs (e.g., using Ctrl+Z to suspend a process and fg to resume it).
      • Arrays: Bash allows for more complex data structures, including indexed arrays and associative arrays, which are not available in sh.
      • Brace Expansion: Bash supports brace expansion, which allows users to generate arbitrary strings based on patterns, improving script conciseness and flexibility.
      • Advanced scripting capabilities: Bash offers powerful tools such as command substitution, extended pattern matching, and built-in string manipulation, making it more suitable for complex scripting tasks.
    • SH: By contrast, the original sh shell has fewer built-in features compared to Bash. It lacks features like command-line editing and job control, and its scripting capabilities are simpler. While this makes it more lightweight and portable, it also means that writing complex scripts can be more cumbersome in SH. However, sh scripts are typically more compatible across different Unix-like systems because sh is considered the "lowest common denominator" shell.
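The feature gap is easiest to see in a few lines that run under Bash but fail under a strict sh; a short sketch (the file names and values are arbitrary examples):

```shell
#!/usr/bin/env bash
# Indexed and associative arrays (not available in plain sh):
fruits=(apple banana cherry)
declare -A ports=([http]=80 [ssh]=22)
echo "${fruits[1]} / ${ports[ssh]}"      # banana / 22

# Brace expansion generates strings from a pattern:
echo file{1..3}.txt                      # file1.txt file2.txt file3.txt

# Built-in string manipulation (no external tools needed):
path="/var/log/syslog"
echo "${path##*/}"                       # syslog
```

Running the same script with `sh` on a system whose sh is not Bash typically fails at the first array assignment.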

    3. Portability and Use Cases

    • SH: SH is favored for portability, especially in situations where scripts need to run across a wide range of Unix-like systems. Because SH has been part of Unix for so long, it's typically available on nearly all Unix-based systems. Scripts written in pure SH syntax tend to be more portable and can be executed without modifications on different systems, making SH the go-to choice for system administrators who require maximum compatibility.

    • Bash: While Bash is widely available on Linux distributions and many Unix-like systems, it is not as universally present as SH. There may be cases, such as on certain minimal or embedded systems, where Bash is not installed by default. However, since many Linux distributions use Bash as the default shell, it is often the preferred choice for modern system scripting and daily interactive use due to its rich set of features.
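When portability matters, a script can check which shell is interpreting it. BASH_VERSION is set only by Bash, so this sketch degrades gracefully under a plain POSIX sh:

```shell
# Detect whether Bash (rather than plain sh) is running this script:
if [ -n "$BASH_VERSION" ]; then
    echo "running under bash $BASH_VERSION"
else
    echo "running under a plain POSIX shell"
fi
```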

    4. Scripting and Interactivity

    • SH: Scripts written for SH are typically more focused on basic automation tasks and are often simpler in structure. Given its limited feature set, scripts may require more manual workarounds for tasks that would be straightforward in Bash. However, SH scripts are usually more compatible across different systems, making them ideal for system-wide installation scripts or software that needs to be distributed widely.

    • Bash: Bash provides a more interactive experience with its advanced command-line editing and job control. It's excellent for personal use, administrative tasks, and when writing more complex scripts that require advanced functions like arithmetic operations, loops, or conditional branching. Bash also supports functions and more sophisticated ways to handle errors, making it suitable for creating highly maintainable and robust scripts.

    5. Conclusion

    In summary, Bash is a superset of the sh shell, offering enhanced features and a more interactive, user-friendly experience. However, sh remains valuable for its simplicity and portability, particularly in environments where compatibility across diverse systems is critical. While Bash is often the preferred choice for users and administrators on modern Linux systems, sh retains its importance in the context of system compatibility and for users who need minimal, universal shell scripts.

  • Posted on

    Let's explore each of the programming languages and their interpreters in detail. We'll look into the context in which they're used, who typically uses them, what they are used for, and the power they offer. Additionally, I'll suggest starting points for testing each language, and provide an explanation of their benefits and popularity.

    1. Bash

    Context & Usage:

    • Who uses it: System administrators, DevOps engineers, and developers working in Linux or Unix-like environments.

    • What for: Bash is the default shell for Unix-like systems. It’s used for writing scripts to automate tasks, managing system processes, manipulating files, and running system commands.

    • Where it’s used: System administration, automation, DevOps, server management, and batch processing tasks.

    Benefits:

    • Ubiquity: Bash is available by default on almost all Unix-based systems (Linux, macOS), making it indispensable for server-side administration.

    • Powerful scripting capabilities: It allows for process control, file manipulation, regular expressions, and piping commands.

    • Simple yet powerful syntax: Despite being lightweight, it’s capable of handling complex system-level tasks.

    Hello World Example:

    echo "Hello World"
    

    In-Depth Starting Point:

    • Test file manipulation, process management, and simple system automation by scripting tasks like listing files, scheduling jobs, or managing processes.
    # List all files in the current directory
    ls -l
    # Create and manipulate a simple file
    echo "Hello, World" > hello.txt
    cat hello.txt
    

    2. Python

    Context & Usage:

    • Who uses it: Web developers, data scientists, engineers, and researchers.

    • What for: Python is a general-purpose language used for web development, data analysis, machine learning, and automation.

    • Where it’s used: Web development (Django, Flask), data science (Pandas, NumPy), machine learning (TensorFlow, scikit-learn), scripting, and automation.

    Benefits:

    • Readability & simplicity: Python's syntax is clear and easy to understand, making it a great choice for beginners and experienced developers alike.

    • Extensive ecosystem: Python boasts a vast ecosystem of libraries for everything from web frameworks to scientific computing.

    • Community support: Python’s large community ensures a wealth of resources, tutorials, and libraries.

    Hello World Example:

    python3 -c 'print("Hello World")'
    

    In-Depth Starting Point:

    • Python’s interactive shell and script-based execution allow testing of libraries like math, numpy, or pandas right at the Bash prompt.
    # Using Python's interactive shell to calculate something
    python3 -c 'import math; print(math.sqrt(16))'
    

    3. Perl

    Context & Usage:

    • Who uses it: Web developers, network administrators, and those involved in text processing, bioinformatics, and automation.

    • What for: Perl is primarily used for text processing, systems administration, and web development (CGI scripts).

    • Where it’s used: Log file parsing, web backends, network scripts, and text-based data manipulation.

    Benefits:

    • Text Processing: Perl excels at regular expressions and text manipulation, making it the go-to tool for tasks like log parsing and configuration file handling.

    • CPAN: Perl has a massive collection of reusable code and modules via the Comprehensive Perl Archive Network (CPAN).

    Hello World Example:

    perl -e 'print "Hello World\n";'
    

    In-Depth Starting Point:

    • Test regular expressions or string manipulations by working with log files.
    # Extracting IP addresses from a log file using Perl
    perl -ne 'print if /(\d+\.\d+\.\d+\.\d+)/' access.log
    

    4. Ruby

    Context & Usage:

    • Who uses it: Web developers, particularly those using Ruby on Rails for web applications.

    • What for: Ruby is mainly used for web development, but can also be used for automation scripts, GUI applications, and testing.

    • Where it’s used: Web applications, automation tasks, API development.

    Benefits:

    • Ruby on Rails: The Ruby on Rails framework has made Ruby a popular choice for rapid web development. It follows the principle of “convention over configuration,” speeding up development.

    • Elegant syntax: Ruby’s syntax is designed to be both expressive and easy to read.

    Hello World Example:

    ruby -e 'puts "Hello World"'
    

    In-Depth Starting Point:

    • Ruby can be tested interactively or through simple script execution.
    # Simple Ruby script to fetch the contents of a URL
    ruby -e 'require "net/http"; puts Net::HTTP.get(URI("http://example.com"))'
    

    5. PHP

    Context & Usage:

    • Who uses it: Web developers, especially those working on server-side scripting for web applications.

    • What for: PHP is commonly used for dynamic web page generation and server-side scripting.

    • Where it’s used: Websites (especially CMSs like WordPress), backend development, and APIs.

    Benefits:

    • Web-centric: PHP was designed specifically for web development, with powerful features for working with databases and HTML generation.

    • Ubiquity in web hosting: PHP is widely supported by web hosting providers and powers a significant portion of the web.

    Hello World Example:

    php -r 'echo "Hello World\n";'
    

    In-Depth Starting Point:

    • Test basic PHP functionality and integration with web servers.
    # A simple PHP script to output current time
    php -r 'echo "Current time: " . date("Y-m-d H:i:s") . "\n";'
    

    6. JavaScript (Node.js)

    Context & Usage:

    • Who uses it: Full-stack developers, backend developers, and those working on real-time applications.

    • What for: JavaScript (Node.js) allows JavaScript to be used on the server-side to build scalable, event-driven applications.

    • Where it’s used: Web servers, real-time applications (chat, notifications), APIs, microservices.

    Benefits:

    • Single language for full-stack: Node.js allows JavaScript to be used both on the client-side (in the browser) and server-side (on the backend).

    • Non-blocking I/O: Node.js is known for its asynchronous, non-blocking I/O model, making it highly efficient for I/O-heavy applications.

    Hello World Example:

    node -e 'console.log("Hello World");'
    

    In-Depth Starting Point:

    • Node.js can be tested for its asynchronous capabilities with event-driven scripts.
    # A simple Node.js script to log the current time every second
    node -e 'setInterval(() => console.log(new Date()), 1000);'
    

    7. C

    Context & Usage:

    • Who uses it: Systems programmers, embedded system developers, and developers working on performance-critical applications.

    • What for: C is used for low-level system programming, embedded systems, and developing software that interacts directly with hardware.

    • Where it’s used: Operating systems, embedded systems, device drivers, real-time applications.

    Benefits:

    • Performance: C is a low-level language that provides fine control over system resources, making it the language of choice for high-performance applications.

    • Portability: Code written in C can be compiled to run on a wide variety of systems, from embedded devices to supercomputers.

    Hello World Example:

    #include <stdio.h>
    int main() {
        printf("Hello World\n");
        return 0;
    }
    

    In-Depth Starting Point:

    • To test C, save the code as hello.c, then compile and run it with a C compiler (e.g., GCC).
    gcc hello.c -o hello && ./hello
    

    8. C++

    Context & Usage:

    • Who uses it: Systems programmers, game developers, and developers of performance-intensive applications.

    • What for: C++ is used for object-oriented programming and systems-level applications that require high performance.

    • Where it’s used: Game engines, desktop applications, real-time systems, performance-critical software.

    Benefits:

    • Object-Oriented Programming: C++ adds support for classes and objects to C, making it easier to manage large, complex codebases.

    • Performance: Like C, C++ offers fine-grained control over system resources, making it ideal for real-time applications.

    Hello World Example:

    #include <iostream>
    int main() {
        std::cout << "Hello World" << std::endl;
        return 0;
    }
    

    In-Depth Starting Point:

    • Compile and run C++ code for performance testing.
    g++ hello.cpp -o hello && ./hello
    

    9. Java

    Context & Usage:

    • Who uses it: Enterprise developers, Android developers, and backend system developers.

    • What for: Java is primarily used for building large-scale enterprise applications, Android apps, and server-side components.

    • Where it’s used: Enterprise applications, Android apps, large-scale web servers.

    Benefits:

    • Cross-Platform: Java’s “write once, run anywhere” philosophy allows Java applications to run on any system with a JVM (Java Virtual Machine).

    • Rich Ecosystem: Java has a vast ecosystem of libraries and frameworks, including Spring for backend systems and Android for mobile apps.

    Hello World Example:

    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello World");
        }
    }
    

    In-Depth Starting Point:

    • Compile and run Java code to explore object-oriented principles.
    javac HelloWorld.java && java HelloWorld
    

    Next is a table with the name of the language, date of inception, and a corresponding Hello World example:

    Language Date of Inception Hello World Example
    Bash 1989 echo "Hello World"
    Python 1991 python3 -c 'print("Hello World")'
    Perl 1987 perl -e 'print "Hello World\n";'
    Ruby 1995 ruby -e 'puts "Hello World"'
    PHP 1994 php -r 'echo "Hello World\n";'
    JavaScript (Node.js) 2009 node -e 'console.log("Hello World");'
    C 1972 printf("Hello World\n");
    C++ 1983 std::cout << "Hello World" << std::endl;
    Java 1995 System.out.println("Hello World");
    Go 2009 fmt.Println("Hello World")
    Rust 2010 fn main() { println!("Hello World"); }
    Lua 1993 lua -e 'print("Hello World")'
    Haskell 1990 main = putStrLn "Hello World"
    Shell Script 1989 echo "Hello World"
    AWK 1977 awk 'BEGIN {print "Hello World"}'
    Tcl 1988 tclsh <<< 'puts "Hello World"'
    R 1993 Rscript -e 'cat("Hello World\n")'
    Kotlin 2011 println("Hello World")
    Swift 2014 print("Hello World")
    Julia 2012 julia -e 'println("Hello World")'

    Notes:

    • Bash: First released in 1989 by Brian Fox. It's a shell scripting language, often used for system administration tasks.
    • Python: Created by Guido van Rossum in 1991. Known for its simplicity and readability, Python is widely used in web development, data analysis, and scripting.
    • Perl: Developed by Larry Wall in 1987. Known for text processing and used heavily in system administration and web development.
    • Ruby: Created by Yukihiro Matsumoto in 1995. Ruby is famous for its elegant syntax and the Ruby on Rails framework for web development.
    • PHP: Created by Rasmus Lerdorf in 1994, primarily for web development to build dynamic content on websites.
    • JavaScript (Node.js): JavaScript was created in 1995, but Node.js, a runtime environment, was created by Ryan Dahl in 2009. Used for building scalable server-side applications.
    • C: Developed by Dennis Ritchie in 1972. It remains one of the most influential programming languages, used for system programming, embedded systems, and applications that require high performance.
    • C++: Developed by Bjarne Stroustrup in 1983. It builds on C and adds object-oriented programming features. It's used for game development, embedded systems, and high-performance applications.
    • Java: Developed by James Gosling at Sun Microsystems in 1995. Java is widely used for enterprise-level applications, Android development, and large-scale systems.
    • Go: Created by Google in 2009. Known for its simplicity, speed, and concurrency features, Go is widely used for web servers, distributed systems, and cloud computing.
    • Rust: Created by Mozilla in 2010. Rust is known for its memory safety and performance, used in system programming, and other high-performance applications.
    • Lua: Developed by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes in 1993. Lua is a lightweight scripting language commonly embedded in games and applications for configuration or scripting.
    • Haskell: Created in 1990, it's a purely functional programming language used for research, academic purposes, and systems requiring high levels of mathematical computation.
    • Shell Script: A type of script commonly used in Unix-like systems to automate tasks, write system maintenance scripts, and handle administrative tasks.
    • AWK: Developed in 1977, AWK is a pattern scanning and processing language used for text and data processing tasks.
    • Tcl: Created by John Ousterhout in 1988. Known for its use in embedded systems, testing, and automation.
    • R: Created by Ross Ihaka and Robert Gentleman in 1993, R is used primarily for statistical computing and data analysis.
    • Kotlin: Developed by JetBrains in 2011, Kotlin is a modern, statically-typed language that runs on the Java Virtual Machine (JVM) and is now heavily used for Android development.
    • Swift: Created by Apple in 2014, Swift is a modern language used for iOS and macOS application development.
    • Julia: Created in 2012 for high-performance numerical and scientific computing. It's widely used in data science, machine learning, and large-scale computational tasks.

    Each of these languages has evolved in different directions based on the needs of developers in various industries, from system-level programming and web development to data analysis and machine learning.

    Conclusion:

    Each language comes with its own set of strengths, contexts, and use cases. Whether it's the system-level control of C, the ease of web development in Python or Ruby, or the performance of C++ and Rust, these languages offer rich ecosystems and excellent developer support. By testing them at the Bash prompt, you can start to get a feel for each language's capabilities, from system automation with Bash to interactive and asynchronous tasks with Node.js. Each interpreter brings something unique to the table, making them essential tools for different domains in software development.

  • Posted on

    If you're looking to dive deep into Bash and become proficient with Linux command-line tools, here are three highly regarded books that are both informative and widely acclaimed:

    The Linux Command Line: A Complete Introduction by William E. Shotts, Jr. Why it’s great: This book is a fantastic starting point for beginners who want to understand the basics of Linux and the command line. It covers Bash fundamentals, command syntax, file system navigation, and more. Shotts takes a clear, approachable, and comprehensive approach, gradually building up your skills.

    Key Features:

    • Clear explanations of common command-line tools and Bash concepts.
    • Emphasis on hands-on practice with examples and exercises.
    • Introduction to shell scripting and text manipulation utilities.
    • Great for beginners, but also helpful as a reference for experienced users.

    Bash Cookbook: Solutions and Examples for Bash Users by Carl Albing, JP Vossen, and Cameron Newham Why it’s great: This is an excellent book for users who are already familiar with Bash but want to explore advanced techniques and solve practical problems. The "cookbook" format provides problem-solution pairs that cover a wide range of tasks, from basic automation to complex system administration.

    Key Features:

    • Hundreds of practical, real-world examples.
    • Detailed explanations of various Bash features and best practices.
    • Covers a wide variety of topics, such as text processing, file manipulation, and working with processes.
    • Suitable for intermediate to advanced users who want to deepen their knowledge and learn tricks and shortcuts.

    Learning the Bash Shell by Cameron Newham and Bill Rosenblatt Why it’s great: As a highly respected guide, this book strikes a great balance between theory and practical application. It covers everything from basic shell operations to scripting and more complex shell programming. It's well-suited for those who want to become proficient in Bash and shell scripting.

    Key Features:

    • A deep dive into Bash syntax, variables, loops, and conditionals.
    • Covers regular expressions and advanced topics like process substitution and job control.
    • Provides useful scripts for everyday tasks, making it a great reference.
    • Focuses on both understanding Bash and writing efficient shell scripts.

    In summary:

    The Linux Command Line is perfect for beginners. Bash Cookbook offers practical, hands-on solutions for intermediate to advanced users. Learning the Bash Shell strikes a balance, with comprehensive coverage of Bash scripting and shell programming for all levels.

    These books provide a solid foundation, deep insights, and practical examples, making them invaluable resources for mastering Bash.

  • Posted on

    Linux Bash (Bourne Again Shell) is incredibly versatile and fun to use. Here are some enjoyable things you can do with it.

    Customize Your Prompt

    Use PS1 to create a custom, colorful prompt that displays the current time, username, directory, or even emojis.

    export PS1="\[\e[1;32m\]\u@\h:\[\e[1;34m\]\w\[\e[0m\]$ "

    Play Retro Games

    Install and play classic terminal-based games like nethack, moon-buggy, or bsdgames.

    Make ASCII Art

    Use tools like toilet, figlet, or cowsay to create text-based art.

    echo "Hello Linux!" | figlet Figlet example use

    Create Random Passwords

    Generate secure passwords using /dev/urandom or Bash functions.

    tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16

    Turn Your Terminal into a Weather Station

    Use curl to fetch weather data from APIs like wttr.in.

    curl wttr.in


    Bash is a playground for creativity and efficiency—experiment with it, and you’ll discover even more possibilities!

  • Posted on

    Introduction

    Signals are used to communicate between processes and can arrive at any time. Typically they are termination signals, but a process can choose to handle them programmatically, unless they are SIGKILL or SIGSTOP signals, which can never be caught or ignored.

    List Of Available Signals

    Table of signals

    SIGNAL VALUE DEFAULT ACTION POSIX? MEANING
    SIGHUP 1 Terminate Yes Hangup detected on controlling terminal or death of controlling process
    SIGINT 2 Terminate Yes Interrupt from keyboard
    SIGQUIT 3 Core dump Yes Quit from keyboard
    SIGILL 4 Core dump Yes Illegal instruction
    SIGTRAP 5 Core dump No Trace/breakpoint trap for debugging
    SIGABRT SIGIOT 6 Core dump Yes Abnormal termination
    SIGBUS 7 Core dump Yes Bus error
    SIGFPE 8 Core dump Yes Floating point exception
    SIGKILL 9 Terminate Yes Kill signal (cannot be caught or ignored)
    SIGUSR1 10 Terminate Yes User-defined signal 1
    SIGSEGV 11 Core dump Yes Invalid memory reference
    SIGUSR2 12 Terminate Yes User-defined signal 2
    SIGPIPE 13 Terminate Yes Broken pipe: write to pipe with no readers
    SIGALRM 14 Terminate Yes Timer signal from alarm
    SIGTERM 15 Terminate Yes Process termination
    SIGSTKFLT 16 Terminate No Stack fault on math co-processor
    SIGCHLD 17 Ignore Yes Child stopped or terminated
    SIGCONT 18 Continue Yes Continue if stopped
    SIGSTOP 19 Stop Yes Stop process (cannot be caught or ignored)
    SIGTSTP 20 Stop Yes Stop typed at tty (e.g. Ctrl+Z)
    SIGTTIN 21 Stop Yes Background process requires tty input
    SIGTTOU 22 Stop Yes Background process requires tty output
    SIGURG 23 Ignore No Urgent condition on socket (4.2 BSD)
    SIGXCPU 24 Core dump Yes CPU time limit exceeded (4.2 BSD)
    SIGXFSZ 25 Core dump Yes File size limit exceeded (4.2 BSD)
    SIGVTALRM 26 Terminate No Virtual alarm clock (4.2 BSD)
    SIGPROF 27 Terminate No Profile alarm clock (4.2 BSD)
    SIGWINCH 28 Ignore No Window resize signal (4.3 BSD, Sun)
    SIGIO SIGPOLL 29 Terminate No I/O now possible (4.2 BSD) (System V)
    SIGPWR 30 Terminate No Power Failure (System V)
    SIGSYS SIGUNUSED 31 Terminate No Bad system call. Unused signal
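    Any signal except SIGKILL and SIGSTOP can be handled in a script with Bash's trap builtin; a minimal sketch, where the handler simply records that the signal arrived:

```shell
#!/usr/bin/env bash
# Catch SIGINT and SIGTERM with a handler instead of dying.
caught=""
handler() { caught="yes"; }
trap handler INT TERM

kill -TERM $$        # deliver SIGTERM to this very shell
echo "caught=$caught"
trap - INT TERM      # restore the default behaviour
```

Replacing the body of handler with cleanup-and-exit logic is the usual pattern for removing lockfiles or temporary files on interruption.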

  • Posted on

    Introduction

    A computer doing more than one thing at a time is running multiple processes. These require resources: CPU time, memory, and access to other devices such as CD/DVD/USB drives. Each process is allocated an amount of system resources to perform its function, controlled by the operating system, whose job it is to facilitate these processes. Signals play an important part in how processes interact; they are usually used to send exit notifications and other information between processes, or from a process to itself.

    Programs, Processes, and Threads

    A program is a set of instructions to be carried out. It may use local data, such as information read from the terminal, or external data, which may come from a database. Common programs include ls, cat and rm; each resides on disk as its own executable, outside the operating system kernel.

    A process is an executing program together with its associated resources. These might include its environment, open files, signal handlers and so on. When two or more tasks share those resources within the same process, it is a multi-threaded process.

    Other operating systems may distinguish between heavy-weight and light-weight processes, where a heavy-weight process can contain a number of light-weight ones. In Linux, how heavy or light a process is comes down simply to how much it shares with others, which keeps context switching fast. Unlike many other operating systems, Linux is also quick at switching between processes and at creating and destroying them, so its model for multi-threaded processes closely resembles that of ordinary simultaneous processes. All the same, Linux respects POSIX and other standards for multi-threaded processes: each thread returns the same process ID while also having its own thread ID.

    Processes

    Processes are programs in execution, either running or sleeping. Every process has a process ID (pid), a parent process ID (ppid) and a process group ID (pgid). In addition, every process has program code, data, variables, file descriptors and an environment.

    In Linux, init is the first process run, making it the ancestor of all other processes on the system, except those started directly by the Linux kernel, which are shown in brackets ([]) in a ps listing. If a parent process ends before its child, the orphaned child is adopted by init and its parent process ID (ppid) becomes 1; on more recent systems using systemd, kernel threads are instead parented to kthreadd, whose pid is 2. Where a child dies before its parent, it enters a zombie state, using no resources and retaining only its exit code. It is the job of init to let these processes die gracefully, which is why it is often referred to as the zombie killer or child reaper. Finally, pids are by default capped at 32768 (2^15), so that is the largest pid you will normally find on a Linux system; to alter this value, see /proc/sys/kernel/pid_max. Once the limit is reached, numbering wraps and restarts at 300.
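    These relationships are easy to inspect from the shell: Bash expands $$ and $PPID to the current shell's pid and its parent's pid, and the pid ceiling is exposed under /proc (the fallback message below is just a guard for systems without procfs):

```shell
# Every process can query its own IDs:
echo "pid=$$ ppid=$PPID"

# The default pid ceiling (32768 = 2^15) is tunable here:
cat /proc/sys/kernel/pid_max 2>/dev/null || echo "pid_max not exposed"
```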

    Process Attributes

    • All processes have certain attributes, as shown below:
      • Program
      • Context
      • Permissions
      • Resources

    The program is the process itself, maybe it is a set of commands in a script or a loop which never ends checking against external data and performing an action when necessary.

    The context is the state which the program is in at any point in time, thus a snapshot of CPU registers, what is in RAM and other information. Furthermore, processes can be scheduled in and out when sharing CPU time or put to sleep when waiting for user input, etc. Being able to swap out and put back the process context is known as context switching.

    Permissions are inherited from the user who executed the program. However, a program file can have the setuid bit (shown as an s in its permissions) set, which gives the process an effective user ID different from its real ID; such programs are referred to as setuid programs. A setuid program runs with the permissions of the owner of the program file, not of the user who ran it. A commonly found setuid program is passwd.
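    The setuid bit shows up in a long listing as an s in the owner-execute position; a quick demonstration on a hypothetical scratch file (mode 4755 is 755 plus the setuid bit):

```shell
# Create a scratch file and set the setuid bit:
touch demo_file
chmod 4755 demo_file
ls -l demo_file          # owner-execute shows 's': -rwsr-xr-x
stat -c %A demo_file
rm demo_file
```

Note the bit only matters when the kernel executes the file; on a plain data file like this it is inert.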

    Process Resource Isolation

    This is the practice of isolating the process from other processes upon execution, promoting security and stability. Furthermore, processes do not have direct access to hardware devices, etc. Hardware is managed by the kernel meaning system calls must be made in order to interact with said devices.

    Controlling Processes (ulimit)

    This builtin, ulimit, reports and resets a number of resource limits associated with processes running under its shell. See below for a typical invocation, ulimit -a.

    ulimit -a

    The output of ulimit -a contains values a system administrator should be aware of, because it identifies how resources are allocated. You may want to restrict or expand resource allocation depending on the requirements of the processes and/or file access limits.

    Hard and soft resource limits come into play here: hard limits are imposed by the administrator, while soft limits can be adjusted at user level within those restrictions. Run ulimit -H -n to see the hard limit on open files and ulimit -S -n for the soft limit. File descriptors are probably the most common limit that needs changing; typically the soft limit is set to 1024, which can make running some applications virtually impossible, so you can raise it with, for example, ulimit -n 1600. To make the change permanent, edit /etc/security/limits.conf and then reboot.
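    A sketch of inspecting and raising the soft limit; the change lasts only for the current shell session, and the value 1600 is an arbitrary example that must not exceed the hard limit:

```shell
# Hard and soft limits on open file descriptors:
ulimit -H -n
ulimit -S -n

# Raise the soft limit for this shell session only:
ulimit -S -n 1600
ulimit -S -n             # now reports 1600, if the hard limit allows
```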

    Process States

    Processes can take on many different states, which are managed by the scheduler. The main process states are:

    • Running
    • Sleeping (waiting)
    • Stopped
    • Zombie

    A running process is either using the CPU or will be waiting for its next CPU time execution slice, usually cited in the run queue - resuming when the scheduler deems it satisfactory to re-enter CPU execution.

    A sleeping process is one that is waiting on its request that can not be processed until completed. Upon completion, the kernel will wake the process and reposition it into the run queue.

    A stopped process is one that has been suspended and is used normally by developers to take a look at resource usage at any given point. This can be activated with CTRL-Z or by using a debugger.

    A zombie process is one that has terminated and has not yet been inquired about, namely “reaped”, by any other process; it is often called a “defunct process”. The process releases all of its resources except its exit state.

  • Posted on

    Introduction

    After reading this document you should be able to explain why Linux defines its filesystem hierarchy as one big tree, describe the role of the Filesystem Hierarchy Standard, explain what is available at boot in the root directory (/), and describe each subdirectory's purpose and typical contents. The aim is to be able to write a working bash script that knows where to put its different data stores, including lockfiles, database(s) and temporary files, as well as where the script itself should live.

    One Big Filesystem

    As with all Linux installations there is a set protocol to follow, which can be viewed as one big tree starting from its root, /. This tree contains not just typical file or folder components but also mount points for drives, USB or CD/DVD media volumes and so on. These can even span many partitions, presented as one filesystem. The end result is one big filesystem, meaning applications generally do not care which volume or partition their data resides on. The only drawback you may encounter is differing naming conventions; however, there are now standards in the Linux ecosystem for cross-platform conformity.
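    The single tree is easy to see with standard tools; df reports which device backs any given path, and mount lists every volume grafted into the one hierarchy:

```shell
# Which mounted filesystem backs the root of the tree:
df -h /

# Every volume is attached somewhere inside the same hierarchy:
mount | head -n 5
```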

    Defining The Data

    There has to be one method of classifying all data in order to draw clear distinctions. First, you may examine data and identify whether it is shareable or not. For instance, /home data may be shared across many hosts, whereas .pid lock files will not be. Another angle is whether the files are static or variable: if the data remains the same unless an administrator changes it, it is static; if it changes while the filesystem is in operation without human interaction, it is variable. With this in mind, you must identify which trees and sub-trees your application or command prompt can access, whether they can be manipulated at runtime, and where such files should reside if you are creating them.

    To summarise:

    • Shareable data is common to all systems
    • Non-shareable data is local to one system
    • Static data never changes when left alone
    • Variable data will change during application processing

    The Filesystem Hierarchy Standard (FHS) aims to achieve unity across all platforms; however, different distributions often invent new methodologies which tend to become standard over time. So even as the FHS publishes its standard, newer conventions are already in play, see here: linuxfoundation.org...

    DIRECTORY FHS Approved PURPOSE
    / Yes Primary directory of the entire filesystem hierarchy.
    /bin Yes Essential executable programs that must be available in single user mode.
    /boot Yes Files needed to boot the system, such as the kernel, initrd or initramfs images, boot configuration files and bootloader programs.
    /dev Yes Device Nodes, used to interact with hardware and software devices.
    /etc Yes System-wide configuration files.
    /home Yes User home directories, including personal settings, files, etc.
    /lib Yes Libraries required by executable binaries in /bin and /sbin.
    /lib64 No 64-bit libraries required by executable binaries in /bin and /sbin, for systems which can run both 32-bit and 64-bit programs.
    /media Yes Mount points for removable media such as CDs, DVDs, USB sticks, etc.
    /mnt Yes Temporarily mounted filesystems.
    /opt Yes Optional application software packages.
    /proc Yes Virtual pseudo-filesystem giving information about the system and processes running on it. Can be used to alter system parameters.
    /sys No Virtual pseudo-filesystem giving information about devices, drivers and kernel subsystems. Can be used to alter system parameters. Similar to a device tree and is part of the Unified Device Model.
    /root Yes Home directory for the root user.
    /sbin Yes Essential system binaries.
    /srv Yes Site-specific data served up by the system. Seldom used.
    /tmp Yes Temporary files; on many distributions lost across a reboot and may be a ramdisk in memory.
    /usr Yes Multi-user applications, utilities and data; theoretically read-only.
    /var Yes Variable data that changes during system operation.

    Run du --max-depth=1 -hx / to see the disk usage of each top-level directory in your root filesystem hierarchy (-x stops du from crossing onto other mounted filesystems).

    The Root Directory (/)

    Starting with our first directory, the root directory (/): this is the mount point for the root partition, with other locations such as /home, /var and /opt often mounted later from separate partitions. The root partition must contain all root directories and files needed at boot in order to serve the system. It therefore needs bootloader information, configuration files and other essential startup data, adequate to perform the following operations:

    • Boot the system
    • Restore the system from external devices such as USB, CD/DVD or NAS
    • Recover and/or repair the system (i.e. in rescue mode)

    The root directory / should never have folders created directly within it; period.

    Binary Files (/bin)

    • The /bin directory must be present for a system to function. It contains essential programs used both directly and by other scripts; it is important because non-privileged users and system administrators alike have access to it, and its contents must be available before other filesystems are even mounted. It is commonplace to store non-essential programs which do not merit a place in /bin under /usr/bin instead; however, on modern systems the distinction is fading, and in fact on RHEL they are the same directory. Symbolic links from /bin to other folder locations are often used in order to preserve two-way folder listings.

    They are as follows: cat, chgrp, chmod, chown, cp, date, dd, df, dmesg, echo, false, hostname, kill, ln, login, ls, mkdir, mknod, more, mount, mv, ps, pwd, rm, rmdir, sed, sh, stty, su, sync, true, umount and uname

    Other binaries that may be present during boot up and in normal operation are: test, csh, ed, tar, cpio, gunzip, zcat, netstat, ping

    The Boot Directory (/boot)

    This folder contains the vmlinuz and initramfs (also known as initrd) files, which serve the boot operation: the first is a compressed kernel and the second is the initial RAM filesystem. Other files include config and System.map.

    Device Files (/dev)

    The /dev directory holds device nodes: special files commonly serving as references to various hardware devices. (Network interfaces are an exception; they are referenced by name, such as eth0 or wlan0, rather than by a node.) Nodes under /dev are created automatically by udev as system hardware is found. Quite aptly, ls /dev | grep std will show you the standard stream references (stdin, stdout, stderr), which can be used to direct data to the terminal or between processes.
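    A quick, harmless way to see these nodes in action, using only standard nodes present on any Linux system:

```shell
# /dev/null silently discards whatever is written to it
echo "unwanted noise" > /dev/null

# /dev/stdout refers to the current process's standard output,
# so this prints exactly as a plain echo would
echo "hello" > /dev/stdout

# Inspect the std* references themselves
ls -l /dev/stdin /dev/stdout /dev/stderr
```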

    Configuration Files (/etc)

    Used to contain config directives (or contained folders with config directives) for system-wide programs and more importantly system services.

    Some examples are: csh.login, exports, fstab, ftpusers, gateways, gettydefs, group, host.conf, hosts.allow, hosts.deny, hosts.equiv, hosts.lpd, inetd.conf, inittab, issue, ld.so.conf, motd, mtab, mtools.conf, networks, passwd, printcap, profile, protocols, resolv.conf, rpc, securetty, services, shells, syslog.conf

    More crucially, the following help keep the system correctly configured:

    • /etc/skel - Contains the skeleton of any new user's home directory
    • /etc/systemd - Points to or contains configuration for system services, called by systemd
    • /etc/init.d - Contains startup and shutdown scripts used by System V initialisation

    System Users (/home)

    On Linux, users' working directories are given in the /home/{username} format and typically follow a naming convention such as /home/admin, /home/projects, /home/staging or /home/production. Typically, this could be their name, nickname or purpose, e.g. /home/steve, /home/steve-work and so on.

    With Linux, this folder can be accessed via the ~ symbol, which directs to the currently logged-in user's home directory, e.g. ls ~/new-folder; the same location is also available in the $HOME variable. The only caveat is that the root user is placed in /root - all other users reside in /home, typically mirroring /etc/skel as previously outlined. (see “Configuration Files (/etc)”)
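    The equivalence is easy to demonstrate from any shell prompt:

```shell
# All three commands resolve to the same directory for the logged-in user
echo ~          # tilde expansion
echo "$HOME"    # the HOME environment variable
cd && pwd       # cd with no arguments goes home
```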

    System Libraries (/lib and /lib64)

    These folders hold libraries serving the binaries found in /bin, /sbin and other locations. These libraries are fundamental: they support the essential system programs (binaries) which boot the system and which the system relies upon once booted. Kernel modules (device or system drivers) are stored in /lib/modules and PAM (Pluggable Authentication Modules) modules are stored in /lib/security.

    For systems running both 32-bit and 64-bit programs, /lib64 is usually present. More commonplace now is to use one folder with symbolic links to the actual location of each library, similar to how /bin has been merged back into one folder: the same structure is presented, separation of differing program importance is kept (using symbolic links), and a single source is maintained for all files of that type.

    External Devices (/media)

    The /media folder is often found to be the single location for all removable media: USB, CD, DVD, even the ancient floppy disk drive. Linux mounts these automatically using udev, which in turn creates references inside /media, making for simple, autonomous access to external devices. When a device is removed, its entry in this directory is removed also.

    Temporary Mounts (/mnt)

    This folder is used for mount points, usually temporary ones. During the development of the FHS this would typically contain removable devices however /media is favoured on modern systems.

    Typical use scenarios are:

    • NFS
    • Samba
    • CIFS
    • AFS

    Generically, this should not be used by applications, instead mounted disks should be located elsewhere on the system.

    Software Packages (/opt)

    This location is where you would put system-wide software packages with everything included in one place: a service that wants to ship everything together would use /opt/project/bin and so on, all within one folder. The directories /opt/bin, /opt/doc, /opt/include, /opt/info, /opt/lib, and /opt/man are reserved for administrator usage.

    System Processes (/proc)

    These are special files, mounted much like /dev, and they are constantly changing. They only contain data at the moment you request it, so a file may be listed as 0 bytes yet, when read with cat or opened with vi, produce many lines of data; between requests it genuinely remains empty. Important pseudo files are /proc/interrupts, /proc/meminfo, /proc/mounts, and /proc/partitions.
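    You can observe the zero-size-but-full-of-data behaviour for yourself (Linux only, as /proc is a Linux pseudo-filesystem):

```shell
# The file reports a size of 0 bytes...
ls -l /proc/meminfo

# ...yet reading it produces live data, generated at the moment of the request
head -n 3 /proc/meminfo

# /proc/mounts is likewise generated on demand
head -n 3 /proc/mounts
```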

    System Filesystems (/sys)

    This directory is the mount point for the sysfs pseudo-filesystem, where all information resides only in memory, not on disk. It is again very much like /dev and /proc in that it is populated at system boot, containing information about devices and drivers, kernel modules and so on.

    Root (/root)

    This is generally pronounced “slash-root” and is simply the primary system administrator's home folder. For better security, separate user accounts with specific access rights are encouraged instead of working as root.

    System Binaries (/sbin)

    This is very similar to /bin and, as mentioned, may well be reached through symbolic links inside /bin with the actual programs residing here. This allows for a one-solution-fits-all: /bin can display all programs, while /sbin remains designated for essential system binaries. Some of these programs include: fdisk, fsck, getty, halt, ifconfig, init, mkfs, mkswap, reboot, route, swapon, swapoff and update.

    System Services (/srv)

    Popular with some administrators, this directory is designed to hold site-specific data served by the system. You can be fairly lax with naming conventions here; you may want to group applications into folders such as ftp, rsync, www, and cvs. Popular with those that use it, and often overlooked by those who don't.

    Temporary Files (/tmp)

    Used by programs that do not want to keep data between system boots; it may also be periodically cleared by the system, so use it at your own discretion. Be aware that, as this is truly temporary data, large files may cause issues: on many systems /tmp is held in memory.

    System User (/usr)

    This should be thought of as a second system hierarchy. It contains shareable, non-local data; best practice is to serve administrator applications from here, and it is often used for files, packages and software that are not needed for booting.

    DIRECTORY PURPOSE
    /usr/bin Non-essential command binaries
    /usr/etc Non-essential configuration files (usually empty)
    /usr/games Game data
    /usr/include Header files used to compile applications
    /usr/lib Library files
    /usr/lib64 Library files for 64-bit
    /usr/local Third-level hierarchy (for machine local files)
    /usr/sbin Non-essential system binaries
    /usr/share Read-only architecture-independent files
    /usr/src Source code and headers for the Linux kernel
    /usr/tmp Secondary temporary directory

    Other common destinations are /usr/share/man and /usr/local; the former is for manual pages and the latter is for locally installed, predominantly read-only binaries.

    Variable Data (/var)

    This directory is intended for variable (volatile) data and as such is updated quite frequently. It contains log files, spool directories and files, administrative data, and transient files such as cache data.

    SUBDIRECTORY PURPOSE
    /var/ftp Used for ftp server base
    /var/lib Persistent data modified by programs as they run
    /var/lock Lock files used to control simultaneous access to resources
    /var/log Log files
    /var/mail User mailboxes
    /var/run Information about the running system since the last boot
    /var/spool Tasks spooled or waiting to be processed, such as print queues
    /var/tmp Temporary files to be preserved across system reboot. Sometimes linked to /tmp
    /var/www Root for website hierarchies

    Transient Files (/run)

    This holds files that are updated quite regularly and are not preserved across reboots: useful for runtime information such as PID files and sockets. The use of /run is quite new, and you may find /var/run and /var/lock provided as symbolic links to it. Its use is increasingly commonplace on modern systems.
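    Tying this back to the aim in the introduction: a long-running script would typically drop its PID file in /run. A minimal sketch, using the hypothetical name mydaemon and a temporary stand-in directory, since writing to /run itself normally requires root:

```shell
#!/bin/bash
# A real service run as root would use /run/mydaemon.pid;
# we substitute a temp directory so the sketch runs unprivileged.
RUNDIR="$(mktemp -d)"
PIDFILE="$RUNDIR/mydaemon.pid"

echo $$ > "$PIDFILE"     # record our own PID, exactly as a daemon would
echo "PID file $PIDFILE contains $(cat "$PIDFILE")"

rm -f "$PIDFILE"         # a well-behaved daemon removes its PID file on exit
rmdir "$RUNDIR"
```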

  • Posted on

    If you're suffering from server lag, ping latency, degraded system functionality or generally slow response times, you may need to add extra RAM to your server.

    Of course, adding RAM is a solution - but it's not the only option you have. Why not think about utilising swap space? This lets your HDD act as a virtual backup for out-of-memory situations.

    These days, using your HDD for RAM doesn't seem as bad as it sounds; with SSDs widely used, the temptation to switch to swap space is quite understandable.

    Plus, for most use-cases, you will only need as much as 2-4GB of RAM in order to compile your favourite popular software title, such as Apache or similar. Simply running the program may require less RAM, and thus less overhead for cost. You could use the package manager to install Apache and avoid compiling it at all, but why do that when you a) want to compile and b) can simply add swap space?

    To add swap space, follow these instructions:

    sudo fallocate -l 1G /swapfile
    

    Here, we are creating a swap file with a size of 1G. If you need more swap, replace 1G with the desired size.

    A good rule of thumb is to have 3x more swap than your physical RAM. So for 2GB of RAM you would have 6GB of swap space.

    If the fallocate utility is not available on your system or you get an error message saying fallocate failed: Operation not supported, use the dd command to create the swap file:

    sudo dd if=/dev/zero of=/swapfile bs=1024 count=1048576
    

    Next, set the swap file's permissions so that only the root user can read and write it:

    sudo chmod 600 /swapfile
    

    Next, set up a Linux swap area on the file:

    sudo mkswap /swapfile
    

    Activate the swap by executing the following command:

    sudo swapon /swapfile
    

    And that should do it. Check the memory is allocated by simply using free.

    free -h
    
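    You can also confirm the kernel has picked the swap file up; /proc/swaps lists every active swap area (on modern util-linux, swapon --show prints a friendlier summary):

```shell
# The kernel's own view of active swap areas; the new /swapfile should be listed
cat /proc/swaps
```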

    Finally, make the changes permanent by adding a swap entry in the /etc/fstab file:

    echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
    

    Now, when your server reboots the swap space will be reconfigured automatically.


    And that's it! If you enjoyed this post please feel free to leave a like or a comment using the messaging options below.

  • Posted on

    When your VPS is eating disk space by the second and there are disk read/write issues, one port of call you are bound to visit is searching for and identifying large files on your system.

    Now, you would be forgiven for thinking this is a complicated procedure, considering some Linux Bash solutions for fairly simple things, but no. Linux Bash wins again!

    du -sh /path/to/folder/* | sort -rh

    Here, du is getting the sizes and sort is organising them: -s summarises each item, -h tells du to print human-readable sizes, and sort -rh sorts those human-readable sizes in reverse order (largest first).

    The output should be something like this:

    2.3T    /path/to/directory
    1.8T    /path/to/other
    

    It can take a while, as the sizes are calculated recursively; however, give it 3-5 minutes and most scenarios will be fine.
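    In practice you usually only care about the biggest offenders, so pipe the sorted output through head. Here is the same pipeline run against a small throwaway tree so it is reproducible anywhere:

```shell
# Build a tiny demo tree so the pipeline has something to measure
demo="$(mktemp -d)"
mkdir "$demo/big" "$demo/small"
dd if=/dev/zero of="$demo/big/file" bs=1024 count=64 2>/dev/null
echo "tiny" > "$demo/small/file"

# Largest first, human-readable sizes, top ten entries only
du -sh "$demo"/* | sort -rh | head -n 10

rm -rf "$demo"
```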

  • Posted on

    So yeah, getting used to Bash is about finding the right way to do things. However, learning one-liners and picking up information here and there is all very useful, finding something you don't need to Google in order to recall is extremely important.

    Take the case of recursive find and replace. We've all been there, it needs to be done frequently but you're either a) scared or b) forgetful. Then you use the same snippet from a web resource again and again and eventually make a serious error and boom, simplicity is now lost on you!

    So here it is, something you can remember so that you don't use different methods depending on what Google throws up today.

    grep -Rl newertext . | xargs sed -i 's/newertext/newesttext/g'

    Notice, we use grep to search -R recursively, with the matching filenames (-l) fed to xargs, which runs a simple sed replace; the g flag makes the substitution global, replacing every occurrence on each line.

    Say you want to keep it simple though and find and review the files before doing the "simple" replace. Well, do this.

    grep -R -l "foo" ./*

    The uppercase -R tells grep to recurse and follow symlinks, and ./* indicates the current directory. The -l requests a list of matching filenames only. Neat.
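    One caveat with the pipeline above: filenames containing spaces or newlines will break it, because xargs splits on whitespace. GNU grep and xargs both support null-delimited output for exactly this situation; a sketch on a throwaway tree:

```shell
# Demo tree containing a filename with a space in it
demo="$(mktemp -d)"
printf 'some newertext here\n' > "$demo/with space.txt"

# -Z ends each matched filename with a NUL byte; -0 makes xargs split on NUL,
# so awkward filenames survive the pipeline intact
grep -RlZ 'newertext' "$demo" | xargs -0 sed -i 's/newertext/newesttext/g'

cat "$demo/with space.txt"    # the file now contains "newesttext"
rm -rf "$demo"
```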

  • Posted on

    Safe and Secure SSH Connections

    In a modern world where cyber-warfare is commonplace and everyday users are targets of organised crime, it goes without saying that you are likely to run into problems rather quickly if you don't use every available means of security.

    The scope of this article is connecting via SSH keys; however, you should also be taking other, more mundane measures, like encrypting the connection (preferably with a VPN on your router), using non-standard ports, and limiting which users may access SSH.

    So what is the safest way to connect to your remote Linux distribution by command line? Quite simply, it is done with SSH keys, which you generate so that the connection can be established. The key pair then acts as a form of password: where the remote server holds your pre-generated public key, SSH uses it during the handshake and, if allowed, serves the connection.

    Generating Your Keys

    From command line on the machine you are connecting from, do the following:

    ssh-keygen - Leave as default values

    This creates files inside the .ssh folder of your home directory. This is a hidden folder that you usually don't need access to. To see what's inside, do ls .ssh from your home path.

    Now, do the following, from your home path:

    cat .ssh/id_rsa.pub

    This is your public key. Share this with an unlimited number of remote servers; while you are using this account, you will have access.

    Sharing Your Keys

    On a mundane level, you can provide the key you generated via any method you like, only your machine and account will be able to use it.

    Now, take the output of cat .ssh/id_rsa.pub and, on the remote server, do echo "key-here" >> .ssh/authorized_keys and voila, the magic is done. You can now do ssh user@example.com, password-free.
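    When taking this manual route it's worth also setting the permissions sshd insists on, since keys in group- or world-writable files are refused. A sketch for the remote server (the "key-here" placeholder stands for your actual public key):

```shell
# Create the .ssh directory if needed and lock down its permissions
mkdir -p ~/.ssh && chmod 700 ~/.ssh

# Append the public key and restrict the file to the owner only
echo "key-here" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```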

    So that's one way of achieving passwordless login via SSH, although there is an easier way. Do:

    ssh-copy-id user@example.com
    

    This will auto-install the keys for you, assuming you can connect to the server via SSH using other authentication methods - such as password.

    Removing Keys

    To remove access for a user's account, do vi .ssh/authorized_keys on the server and delete the line corresponding to that user's key.

    It really is that simple!

    Voila

    Congratulations, you're all set up! Don't forget, while it is perfectly safe to share your id_rsa.pub key, do so with caution. Using it on your website homepage may attract unwanted attention!

    Peace.

  • Posted on

    Get to Know Linux Bash in Under 30 Minutes: A Quick Guide for Beginners

    Linux Bash (Bourne Again Shell) is the default command-line interface for most Linux distributions and macOS. For new users, it might feel overwhelming at first, but once you understand the basics, Bash can become a powerful tool for managing your system, automating tasks, and improving productivity.

    In this quick guide, we’ll walk you through the essentials of Bash in under 30 minutes. Whether you're a beginner or just looking to refresh your knowledge, this guide will help you feel comfortable with the Linux command line.

    1. What is Bash?

    Bash is a command-line interpreter that allows users to interact with their operating system by entering text-based commands. It's a shell program that interprets and runs commands, scripts, and system operations. It’s often referred to as a command-line shell or simply a shell.

    Bash allows users to navigate their filesystem, run programs, manage processes, and even write complex scripts. Most Linux distributions come with Bash as the default shell, making it essential for anyone using Linux systems.

    2. Basic Commands

    Let’s start by covering some essential commands that you’ll use regularly in Bash:

    • pwd: Stands for “print working directory.” It shows you the current directory you're in.

      $ pwd
      /home/user
      
    • ls: Lists the contents of the current directory.

      $ ls
      Documents  Downloads  Pictures
      
    • cd: Changes the current directory. Use cd .. to go up one level.

      $ cd Documents
      $ cd ..
      
    • cp: Copies files or directories.

      $ cp file1.txt file2.txt
      
    • mv: Moves or renames files.

      $ mv oldname.txt newname.txt
      
    • rm: Removes files or directories.

      $ rm file.txt
      
    • mkdir: Creates a new directory.

      $ mkdir new_folder
      

    3. Navigating the Filesystem

    The filesystem in Linux is hierarchical, starting from the root directory (/). Here’s how you can move around:

    • Absolute paths start from the root. Example: /home/user/Documents
    • Relative paths are based on your current directory. Example: Documents (if you're already in /home/user).

    Use cd to navigate to any directory. To go to your home directory, simply type cd without arguments:

    $ cd
    

    To go to the root directory:

    $ cd /
    

    To navigate up one directory:

    $ cd ..
    

    4. Redirection and Pipelines

    Bash allows you to redirect input and output, as well as chain multiple commands together using pipes.

    • Redirection: Redirect output to a file using >. Use >> to append to a file.

      $ echo "Hello, World!" > hello.txt
      $ cat hello.txt  # Prints "Hello, World!"
      
    • Pipes (|): You can send the output of one command to another. For example:

      $ ls | grep "text"  # Lists all files containing "text"
      

    5. Wildcards

    Wildcards are symbols that represent other characters. They are useful for matching multiple files or directories.

    • *: Matches any number of characters.

      $ ls *.txt  # Lists all .txt files in the current directory
      
    • ?: Matches a single character.

      $ ls file?.txt  # Matches file1.txt, file2.txt, etc.
      
    • []: Matches a single character within a range.

      $ ls file[1-3].txt  # Matches file1.txt, file2.txt, file3.txt
      

    6. Managing Processes

    Bash allows you to interact with processes running on your system:

    • ps: Lists the running processes.

      $ ps
      
    • top: Provides a dynamic view of system processes.

      $ top
      
    • kill: Terminates a process by its ID (PID).

      $ kill 1234  # Replace 1234 with the actual PID
      
    • &: Run a command in the background.

      $ my_script.sh &
      
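    The background operator ties into bash's job control; a small runnable illustration (interactively you would also use fg, bg and Ctrl+Z to move jobs between foreground and background):

```shell
# Start a long-running command in the background; $! holds its PID
sleep 2 &
BGPID=$!

jobs               # list this shell's background jobs
kill -0 "$BGPID"   # signal 0 just probes that the process is alive

wait "$BGPID"      # block until the background job finishes
echo "background job done"
```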

    7. Using Variables

    Bash allows you to define and use variables. Variables can store information such as strings, numbers, or the output of commands.

    • To define a variable:

      $ my_variable="Hello, Bash!"
      
    • To use a variable:

      $ echo $my_variable
      Hello, Bash!
      
    • You can also assign the output of a command to a variable:

      $ current_dir=$(pwd)
      $ echo $current_dir
      

    8. Writing Scripts

    One of the most powerful features of Bash is its ability to write scripts—sequences of commands that you can execute as a single file.

    1. Open a text editor and create a new file, such as myscript.sh.
    2. Add the following code to the script:

      #!/bin/bash
      echo "Hello, Bash!"
    3. Save and exit the editor.
    4. Make the script executable:

      $ chmod +x myscript.sh
      
    5. Run the script:

      $ ./myscript.sh
      

    The #!/bin/bash at the top of the file is called a "shebang" and tells the system which interpreter to use to execute the script.

    9. Learning More

    To become proficient in Bash, it’s important to keep experimenting and learning. Here are some useful resources to continue your Bash journey:

    • man pages: Bash comes with built-in documentation for most commands. For example:

      $ man ls
      
    • Bash Help: For a quick reference to Bash syntax and commands, use:

      $ help
      
    • Online Tutorials: Websites like LinuxCommand.org and The Linux Documentation Project provide comprehensive tutorials.


    Conclusion

    Mastering Bash doesn’t require an extensive amount of time or effort. By learning the basics of navigation, file management, process handling, and scripting, you can start using the Linux command line to automate tasks, manage your system more effectively, and boost your productivity.

    Now that you've learned the fundamentals, you can explore more advanced topics like loops, conditionals, and scripting techniques. Bash is an incredibly powerful tool that, once understood, can unlock a new world of efficiency on your Linux system.

    Happy exploring!

  • Posted on

    Linux is an open-source Operating System which is released with different flavours (or distros) under the guise of free-to-use software. Anybody can download and run Linux free-of-charge and with no restraints on the end-user; you could release, distribute and profit from Linux with relative ease with no worry of associated cost or licensing infringement.

    It is fair to say Linux has profoundly revolutionised the way we interact with electronic devices. You can find Linux in cars, refrigerators and televisions, and of course as a desktop-grade or headless operating system. Once you become accustomed to Linux, you quickly see why all of the top 500 supercomputers run Linux.

    Linux has been around since the early 1990s and is one of the most reliable, secure and hassle-free operating systems available. Put simply, Linux has become the largest open-source software project in the world. Professional and hobbyist programmers and developers from around the world contribute to the Linux kernel, adding features, finding and fixing bugs and security flaws, live patching and providing new ideas—all while sharing their contributions back to the community.

    Wikipedia

    Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds.

    Direct Link to Linux on Wikipedia

    Open Source

    Linux is a free, open source operating system, released under the GNU General Public License (GPL). Anyone can run, study, modify, and redistribute the source code, or even sell copies of their modified code, as long as they do so under the same license.

    Command Line

    The command line is your direct access to a computer: it's where you ask the software to perform actions that point-and-click graphical user interfaces (GUIs) simply can't ask for.

    Command lines are available on many operating systems—proprietary or open source. But it’s usually associated with Linux, because both command lines and open source software, together, give users unrestricted access to their computer.

    Installing Linux

    For many people, the idea of installing an operating system might seem like a very daunting task. Believe it or not, Linux offers one of the easiest installations of all operating systems. In fact, most versions of Linux offer what is called a Live distribution, which means you run the operating system from either a CD/DVD or USB flash drive without making any changes to your hard drive. You get the full functionality without having to commit to the installation. Once you’ve tried it out, and decided you wanted to use it, you simply double-click the “Install” icon and walk through the simple installation wizard.

    Installing Software on Linux

    Just as the operating system itself is easy to install, so too are applications. Most modern Linux distributions include what most would consider an app store. This is a centralized location where software can be searched and installed. Ubuntu Linux (and many other distributions) rely on GNOME Software, Elementary OS has the AppCenter, Deepin has the Deepin Software Center, openSUSE has their AppStore, and some distributions rely on Synaptic.

    Regardless of the name, each of these tools does the same thing: provides a central place to search for and install Linux software. Of course, these pieces of software depend upon the presence of a GUI. For GUI-less servers, you will have to depend upon the command-line interface for installation.

    Let’s look at two different tools to illustrate how easy even the command line installation can be. Our examples are for Debian-based distributions and Fedora-based distributions. The Debian-based distros will use the apt-get tool for installing software and Fedora-based distros will require the use of the yum tool. Both work very similarly. We’ll illustrate using the apt-get command. Let’s say you want to install the wget tool (which is a handy tool used to download files from the command line). To install this using apt-get, the command would look like this:

    sudo apt-get install wget
    

    The sudo command is added because you need super user privileges in order to install software. Similarly, to install the same software on a Fedora-based distribution, you would first su to the super user (literally issue the command su and enter the root password), and issue this command:

    yum install wget
    

    That’s all there is to installing software on a Linux machine. It’s not nearly as challenging as you might think. Still in doubt?

    You can install a complete LAMP (Linux Apache MySQL PHP) server on either a server or desktop distribution. It really is that easy.

    More resources

    If you’re looking for one of the most reliable, secure, and dependable platforms for both the desktop and the server, look no further than one of the many Linux distributions. With Linux you can assure your desktops will be free of trouble, your servers up, and your support requests minimal.

  • Posted on

    If you’ve ever used a Linux operating system, as found on most Virtual Private Servers, you may have heard of bash. It’s a Unix shell that reads and executes various commands.

    What Is Bash?

    Bash, short for Bourne-Again Shell, is a Unix shell and a command language interpreter. It reads shell commands and interacts with the operating system to execute them.

    Why Use Bash Scripts?

    Bash scripts can help with your workflow, as they compile many lengthy commands into a single executable script file. For example, if you have multiple commands that you have to run at a specific time interval, you can write a bash script instead of typing out the commands manually one by one. You then execute the script directly, when it’s necessary.

    Pro Tip Linux has a bash shell command manual. Type man command to find descriptions of all the technical terms and input parameters.

    Get Familiar With Bash Commands

    Bash is available on almost all Unix-based operating systems and doesn’t require a separate installation. You will need a Linux command prompt, also known as the Linux terminal; on Windows you would use something like PuTTY. It’s a program that contains the shell and lets you execute bash scripts.

    1. Comments

    Comments add a description to certain lines of your script. The shell doesn’t execute comments, so they won’t affect the output.

    There are two ways to add comments to a script. The first method is typing # at the beginning of a single-line comment:

    # Command below prints a Hello World text
    echo "Hello, world!"
    

    2. Variables

    Variables are names that stand in for a character, a string of characters, or a number. You only need to type the variable name in a command line to use the stored string or number.

    To assign a variable, type the variable name and the string value like here:

    testvar="This is a test variable"
    

    In this case, testvar is the variable name and This is a test variable is the string value. When assigning a variable, we recommend using a variable name that’s easy to remember and represents its value.

    To read the variable value in the command line, use the $ symbol before the variable name. Take a look at the example below:

    testvar="This is a test variable"
    echo $testvar
    

    To let the user enter the variable’s contents, use:

    read testvar
    echo $testvar
    
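    One detail worth flagging while on variables: quote your expansions. A short sketch:

    ```shell
    # Quote variable expansions; an unquoted value containing spaces
    # is split into separate words before the command sees it.
    testvar="This is a test variable"
    echo "$testvar"
    ```

    Unquoted $testvar still works for simple echo output, but quoting becomes essential once the value is passed to commands that treat each word as a separate argument.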

    3. Functions

    A function groups a set of commands under a single name. If you need to execute the commands again, simply call the function instead of retyping the whole set.

    There are several ways of writing functions. The first way is by starting with the function name and following it with parentheses and curly braces:

    function_name () {
        first command
        second command
    }
    

    Or, if you want to write it in a single line (note that every command, including the last, must end with a semicolon): function_name () { first command; second command; }
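    As a concrete sketch (greet is a made-up name; $1 expands to the function's first argument):

    ```shell
    # greet: a hypothetical function that takes one argument.
    greet () {
        echo "Hello, $1"
    }

    greet "world"   # prints: Hello, world
    ```

    Arguments are passed to a function the same way they are passed to a script: positionally, as $1, $2, and so on.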

    4. Loops

    Loop bash commands are useful if you want to execute commands multiple times. There are three types of them you can run in bash – for, while, and until. The for loop runs the command for a list of items:

    for item in [list]
    do
        commands
    done
    

    The following example uses a for loop to print all the days of the week:

    for days in Monday Tuesday Wednesday Thursday Friday Saturday Sunday
    do
    echo "Day: $days"
    done
    

    In the first line of the loop, days automatically becomes a variable, taking each of the day names that follow as its value in turn. Then, in the echo command, we use the $ symbol to expand the variable’s current value.

    The output of that script will be as follows:

    Day: Monday
    Day: Tuesday
    Day: Wednesday
    Day: Thursday
    Day: Friday
    Day: Saturday
    Day: Sunday
    

    Notice that even with just one command line in the loop script, it prints out seven echo outputs.

    The next type of loop is while. The script evaluates a condition; if the condition is true, it keeps executing the commands until the condition is no longer met.

    while [condition]
    do
        commands
    done
    
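    A concrete sketch of while, plus its inverse until (which keeps looping as long as the condition is false):

    ```shell
    # Count up with while, then back down with until.
    counter=1
    while [ $counter -le 5 ]
    do
        echo "Count: $counter"
        counter=$((counter + 1))
    done

    # until runs its body as long as the condition is false,
    # stopping once it becomes true.
    until [ $counter -le 0 ]
    do
        counter=$((counter - 1))
    done
    echo "Finished with counter=$counter"   # prints: Finished with counter=0
    ```

    The two constructs are mirror images: while loops on a true condition, until loops on a false one.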

    5. Conditional Statements

    Many programming languages, including bash, use conditional statements like if, then, and else for decision-making. They execute commands and print outputs depending on the conditions. The if keyword is followed by a conditional expression; after that comes then and the command that defines the output. The script executes that command only if the condition in the if statement is true.

    However, if you want to execute a different command if the condition is false, add an else statement to the script and follow it with the command.

    Let’s take a look at simple if, then, and else statements. Before the statement, we will include a variable so the user can input a value:

    echo "Enter a number"
    read num
    if [[ $num -gt 10 ]]
    then
        echo "The number is greater than 10"
    else
        echo "The number is not greater than 10"
    fi
    

    OK, so that's it. The 5 building blocks of Bash in plain English. Simple, right?!

  • Posted on

    1) Ubuntu


    – Ubuntu is by far the most popular Linux distribution, with an intuitive GUI (Graphical User Interface) that is easy to learn and feels familiar to Windows users.

    – It is Debian-based and easy to install, with top-notch commercial support, although that support is largely irrelevant if you can point and click with a basic understanding of how to interact with applications as you do in Windows.

    – Most preferred Linux distribution for non-tech people.

    Ubuntu Project Home Page

    2) CloudLinux

    – CloudLinux is on a mission to make Linux secure, stable, and profitable.

    – Based on the same platform as Red Hat Enterprise Linux, with stable releases, and typically used on command-prompt-based servers.

    – License fees are reasonable for small businesses so it is still a go-to option in the Linux flavour world.

    CloudLinux Project Home Page

    3) Red Hat Enterprise Linux (a.k.a. RHEL)


    – Another famous open-source Linux distribution is Red Hat Enterprise Linux (RHEL). It is stable, secure, yet powerful software suited mostly to the server role; however, it does provide a wealth of tools, apps, and other front-end software if not run headless.

    – RHEL was devised by Red Hat for commercial purposes. It offers tremendous support for Cloud, Big Data, IoT, Virtualization, and Containers.

    – Its components are based on Fedora, a community-driven project.

    – RHEL supports 64-bit ARM, Power, and IBM System z machines.

    – The subscription of Red Hat allows the user to receive the latest enterprise-ready software, knowledge base, product security, and technical support.

    Red Hat Enterprise Linux

    4) AlmaLinux


    – Stable and open-source derivative of Red Hat Enterprise Linux, an easy way to run the commercial product in open-source format.

    – A popular free Linux distro for VPS, operationally compatible with RHEL.

    – AlmaLinux is an open-source distribution owned and governed by the community. As such, it is free and focused on the community’s needs and long-term stability. Both AlmaLinux and Rocky Linux have growing communities with an increasing number of partners and sponsors.

    – Considered the go-to Operating System of choice since CentOS announced the end-of-life of CentOS 8 in favour of becoming an upstream provider to RHEL (releasing software before RHEL).

    AlmaLinux Project Home Page

    5) Rocky Linux


    – Similar to AlmaLinux, this OS is a stable, open-source derivative of Red Hat Enterprise Linux: you use the open-source build instead of paying licence fees.

    – A popular free Linux distro for VPS, operationally compatible with RHEL.

    – Rocky Linux is of course open-source, and while it doesn’t have the same financial backing as AlmaLinux, it’s still a worthy community-contributed effort.

    Rocky Linux Project Home Page

    6) SUSE


    – The next widespread distribution is SUSE Linux Enterprise Server (SLES), which shares its codebase with openSUSE.

    – Both openSUSE & SUSE Linux Enterprise Server have the same parent company – SUSE.

    – SUSE is a Germany-based open-source software company.

    – SUSE’s commercial products are SLES (server) and SLED (desktop); openSUSE is the non-commercial distro.

    SUSE Project Home Page

    7) Debian


    – It is open-source and considered a stable Linux distribution.

    – Ships with over 51,000 packages and uses a unified packaging system.

    – Used by every domain, including Educational Institutions, Companies, Non-profit, and Government organizations.

    – Supports a significant number of computer architectures, including 64-bit ARM (AArch64), IBM System z, 32-bit PC (i386), 64-bit PC (amd64), and many more.

    – Finally, it is integrated with a bug tracking system, and its documentation and the wealth of Debian-related content on the web help you find support.

    Debian Project Home Page


    Welcome to the world of open-source distros relevant today. That is all for the 7 best Linux Distros for VPS Hosting of 2023. Let us know which distribution you or your company are using today. If you plan to purchase a Linux VPS Server and are torn between the Linux Distros, connect via the comments or look up more content here for some easy learning.